Broadly speaking, web pages can be analysed in terms of three main qualities:
- Technical, or ‘build’ standards
- Interaction quality, or Usability
- Accessibility
Some organisations, such as the British Government, issue their own guidelines on some of these subjects, and Government agencies are usually required to work to those standards. We are familiar with such standards and happy to work to them when requested.
Technical ‘build’ quality
We measure the technical quality of our products in terms of conformance to the web standards issued by the World Wide Web Consortium (W3C). We guarantee that, at the time they leave our studio, all our websites conform to the coding standards published by the W3C.
Unless asked to do otherwise, we will usually create web pages according to the XHTML 1.0 Strict document type definition (DTD), although we are happy to use other DTDs if you prefer – HTML 4.01 or XHTML 1.1, for example. As these document types are defined by the W3C, web pages should be tested against the W3C's own validation service to measure conformance.
We also validate our webpages against the relevant stylesheet (CSS) specifications. Unless requested otherwise we will code towards the CSS2 specification.
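As a concrete illustration (a minimal sketch rather than client work – the title, body text and stylesheet name are placeholders), a page coded to the XHTML 1.1 DTD with a linked CSS2 stylesheet looks like this:

```html
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
  "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
  <head>
    <title>Example page</title>
    <!-- Presentation is kept in a separate CSS2 stylesheet,
         which is validated on its own -->
    <link rel="stylesheet" type="text/css" href="styles.css" />
  </head>
  <body>
    <h1>Example page</h1>
    <p>This document is written to validate against the XHTML 1.1 DTD.</p>
  </body>
</html>
```

The markup can be submitted to the W3C's markup validation service (validator.w3.org) and the stylesheet to its CSS validation service (jigsaw.w3.org/css-validator) to confirm conformance.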
Sometimes conformance is specified in terms of a particular browser and operating system rather than in terms of web standards. We believe this kind of ‘quality’ assessment is a poor measure, and not one that organisations serious about build quality should rely on. Although we discourage it, we are usually willing to build to non-standard ‘standards’, provided that the criteria for conformance are clearly defined.
Remember that it is not possible to meet the recommended web content accessibility guidelines unless your web site passes both HTML and CSS validation.
Good usability is a signature of good design.
Kilroy James has been advocating the development of usable web sites, systems and software in the UK for a decade. Our Projects Director John Kilroy has advised various Government agencies (ONS, COI and GSCC) and companies (among them, Orange, BT, Land Securities) on ways to make interactive products that produce a better experience for their users.
Apart from the activities described below, we also follow research (into web trends, usability and design practices) carried out by other bodies. Our aim is to refresh and develop our ideas and working methods continually, to offer services that stay at the forefront of best practice.
Using design methods to ensure our work fits the goals of its target audience and stakeholders
We use design methods advocated by Interaction Designers – primarily profiling target users and creating use-case scenarios. These methods are simple, reliable and cost-effective, and are always a useful (sometimes indispensable) precursor to the design phase of any website.
We create ethnographic profiles, often called personas, that describe and represent archetypal system users (perhaps framed by the specification) and key stakeholders (to model the business priorities). Wherever possible, personas are based on in situ observations and interviews. We then develop scenarios that simulate workflow interactions between the personas and the expected use-cases. Some use-cases may be known at the outset, but it is common for new ones to surface during persona development.
This activity is very useful for spotting good and bad ideas early on and, depending on when it is carried out, can form the basis for an entire design and requirements specification. It can help to validate project objectives and provides information useful to all project stakeholders.
Validating the assurance procedures
Opinions vary widely on this subject, but Kilroy James views formal user testing as part of the assessment process rather than part of the design process (into which it fits awkwardly, and expensively). Also called focus group testing, this activity is useful for testing propositions, prototypes or products.
Ideally, testing takes place in a dedicated user-testing ‘laboratory’ – fine if the budget allows it – but a laboratory is not always necessary provided the test programme is carefully thought out. We recommend formal user testing at two points during a project: once when the personas and use-case scenarios have produced a design brief (to validate the brief), and once at the end, when the design brief has been realised (to validate the implementation). If your budget will not stretch to two testing phases, use it to validate the implementation.
Accessibility

Accessibility is about ensuring that the design of (and technology used by) your website allows access to its content by users of non-standard web-browsing devices. Accessibility – like usability – is not a functional feature, and must be designed into a website; it is rarely possible to add it successfully as an afterthought.
The W3C guidelines covering this subject are exhaustive, but many of the checkpoints – particularly those governing AA and AAA conformance – require interpretation within the context of a particular web site before they can be assessed one way or the other.
There are some tools and validators available that can help in the assessment of accessibility. However, even the Cynthia Tested and Bobby validators (probably the two best known) are not perfect, and neither is able to judge context correctly: no validator can tell whether a table is used for layout or for tabular data, or whether a particular piece of content is semantically important. An assessment of a web site’s accessibility cannot be made using automatic validation alone; it must be made by a human analyst, usually with help from tools such as Bobby and Cynthia, and that is how we work.
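For instance (an illustrative sketch only – the table content and file names are invented), the contextual markup decisions a validator cannot judge for you look like this:

```html
<!-- A data table marked up so its structure survives non-visual
     rendering: header cells use th with scope, and the summary
     attribute describes the table's purpose to screen readers. -->
<table summary="Opening hours for each day of the week">
  <tr>
    <th scope="col">Day</th>
    <th scope="col">Opening hours</th>
  </tr>
  <tr>
    <td>Monday</td>
    <td>9am to 5pm</td>
  </tr>
</table>

<!-- Meaningful images carry descriptive alt text; purely decorative
     images use an empty alt so assistive devices can skip them. -->
<img src="chart.png" alt="Bar chart of visits per month" />
<img src="spacer.gif" alt="" />
```

A validator can confirm that every img element has an alt attribute, but only a human analyst can judge whether the alt text is appropriate, or whether a given table holds data rather than layout.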