By Richard Cornford, edited by Garrett Smith
Under normal circumstances computer programs are written for a known environment. The programmer knows what to expect: which APIs will be available, how they will behave, and so on. Java is an ideal example of this, providing a theoretically consistent set of APIs and language features across all Java Virtual Machine (JVM) implementations. But this is also true in most other circumstances. The programmer of C++ for the Windows operating system will know what MFC classes are available and how to program them. Their expectations will be rewarded, if they possess the required knowledge.
Client side javascript for the Internet, on the other hand, has the possibly unique problem of having to be authored with no specific knowledge of the environment in which it will be executed. The client side environment will usually be a web browser and web browsers do tend to have many common features (and increasingly standardised features) but the author cannot know which version of which browser will be making any HTTP request (or whether it is a browser at all). It is not even possible to tell in advance whether the User Agent will be capable of executing javascript; all of those that can include a user configurable option to disable it anyway.
Javascript authors for the Internet must realise that they are dealing with the unknown and that any, and all, scripts will fail to execute somewhere. The challenge is to get the most from your javascript when it is available but to cope with their failure in a meaningful way.
I once read a description of a variant on the game of chess, played at military academies. Two players sit at separate boards set up with only their own pieces, out of sight of each other, and a referee supervises the game. Each player makes their move in turn and the referee is responsible for informing them how the other's move impacts on their own pieces and how the other's disposition of pieces impacts on their intended move. The player is informed only when one of their own pieces is taken, when one of their moves is affected by interacting with one of their opponent's pieces (i.e. a player may want to move a bishop across the board but the referee may inform them that their move was stopped four squares early when the bishop took a pawn from the other side) and when one of their opponent's pieces is sitting on a square adjacent to one of their own.
The game is used to teach battlefield strategy. To win, a player must probe and test to deduce his opponent's disposition, while planning and executing a response that will achieve the desired checkmate. It is this sort of strategy that needs to be added to the normal programming problems in order that javascript may cope with its unknown execution environment, with the significant difference that the strategy, its tests and all of the paths of execution must be fully planned out before the code even starts executing.
While the point of this article is to introduce techniques for handling the differences between web browsers and their DOM implementations, it is also possible to avoid some types of differences, especially those related to the structure of the DOM that is being scripted.
If I was asked to recommend one action most likely to promote the authoring of cross-browser scripts it would be: Start from a basis of valid HTML.
Browsers presented with invalid HTML will usually attempt to error correct it in order to do the best possible job of displaying it. Some browsers, particularly IE, are tolerant of all sorts of strange formulations of mark-up. Valid HTML has a tree-like structure, elements may completely contain others but they cannot overlap, and there are rules about which elements may appear in which contexts. The DOM that is to be scripted also has a tree-like structure and there is a very simple relationship between the tree-like structure of valid HTML and the DOM constructed from it. So any browser presented with valid HTML will be able to directly translate that HTML into a corresponding DOM using well specified rules, resulting in a DOM that is of predictable and consistent structure on all of the browsers that can build a DOM.
Invalid HTML will not translate as naturally into a DOM, or even a tree-like structure. If the browser is going to build a DOM with the source provided it is going to have to apply error correcting rules and attempt to build the best DOM it can. But the error correcting rules are not standardised, not even published. So different browsers have no choice but to apply different rules, and that directly results in the building of DOMs with different (and, in extremes, radically different) structures.
As a result, using invalid HTML directly produces differences in the DOMs produced by different browsers. No matter how good the application of techniques for dealing with the differences between browsers, it does not make sense to do anything that will provoke more differences than are unavoidable.
The authoring of invalid HTML, justified because "It works in browser XYZ", gives the authors of accompanying scripts the impression that cross-browser scripting is harder than it is. If they had started with valid HTML they would never have encountered any of the structural inconsistencies that invalid HTML can provoke.
As browsers have evolved they have offered more features to javascript. Different manufacturers have adopted the features of other browsers, while adding new features, that may in turn have been adopted by (or will be adopted by) their competitors. Various sets of common features have emerged and some have been formalised by the W3C into a sequence of standard DOM specifications. Along the way an increasing number of javascript capable browsers have emerged. In addition to desktop PC browsers, javascript capable browsers exist for a whole range of devices; PDAs, mobile telephones (Microwave ovens, refrigerators).
Unfortunately it is the case that the browsers on the smaller devices cannot offer the range of features available to a desktop PC, and even as the technology improves and features are added to the smaller browsers the problem will not go away, as browsers become available on an ever wider range of devices while the desktop PC browsers continue to march ahead of them.
Over the years various strategies have been attempted to tackle this problem and some have failed miserably.
One of the most popular strategies for handling the differences between web browsers was browser detecting based on the User Agent string. Browsers possessing a navigator object also provide a property of that object, navigator.userAgent, containing a string that (in theory) identifies the browser. Its application went something like:-
/* Warning: never use this script, or any script based on, or
   resembling, it.
*/
var userAgent = self.navigator.userAgent;
var appName = self.navigator.appName;
var isOpera = false;
var isOpera5 = false;
var isOpera6p = false;
var isIE = false;
var isIE4 = false;
var isIE5p = false;
var isMozilla1p = false;
var isNet4 = false;
var isNet5p = false;
var operaVersion = 0;
var ieVersion = 0;
var appVersion = self.navigator.appVersion;
var brSet = false;
function brSetup(){
    for(var c = 3;c < appVersion.length;c++){
        var chr = appVersion.charAt(c);
        if(isNaN(chr)){
            appVersion = appVersion.substring(0, c);
            break;
        }
    }
    if((userAgent.indexOf('webtv') < 0)&&
       (userAgent.indexOf('hotjava') < 0)){
        if(userAgent.indexOf('Opera') >= 0){
            var ind = (userAgent.indexOf('Opera')+6);
            if(((ind+1) < userAgent.length)&&(ind >= 6)){
                isOpera = true;
                var bsVersion = parseInt(userAgent.substring(ind, ind+1));
                if(!isNaN(bsVersion)){
                    operaVersion = bsVersion;
                    if(operaVersion >= 6){
                        isOpera6p = true;
                    }else if(operaVersion >= 5){
                        isOpera5 = true;
                    }
                }
            }
        }else if(appName.indexOf('Microsoft Internet Explorer') >= 0){
            var ind = (userAgent.indexOf('MSIE')+5);
            if(((ind+1) < userAgent.length)&&(ind >= 5)){
                isIE = true;
                var bsVersion = parseInt(userAgent.substring(ind, ind+1));
                if(!isNaN(bsVersion)){
                    ieVersion = bsVersion;
                    if(ieVersion >= 5){
                        isIE5p = true;
                    }else if(ieVersion >= 4){
                        isIE4 = true;
                    }
                }
            }
        }else if(appName.indexOf('Netscape') >= 0){
            if((self.navigator.vendor)&&
               (self.navigator.vendor.indexOf('Netscape') >= 0)&&
               (userAgent.indexOf('Gecko') >= 0)){
                isNet5p = true;
            }else if((userAgent.indexOf('Netscape') < 0)&&
                     (userAgent.indexOf('Gecko') >= 0)&&
                     (appVersion >= 5)){
                isMozilla1p = true;
            }else if((appVersion < 5)&&
                     (userAgent.indexOf('compatible') < 0)){
                isNet4 = true;
            }
        }
    }
    brSet = true;
}
This version also uses some other properties of the navigator object: appName and appVersion.
Superficially this type of script seems to be saying quite a lot about what browser is executing the script. Knowing that the isIE5p variable is boolean true seems to be a reasonable indicator that the browser in question is Internet Explorer version 5 or above, and from that all of the available features of the IE 5+ DOM could be assumed to exist.
Unfortunately, if this type of script ever was an effective determiner of the browser type, it is not now. The first problem is that you cannot write this type of script to take into account all web browsers. The script above is only interested in Internet Explorer, Netscape and (some) Mozilla derived browsers and Opera. Any other browser will not be identified, and that will include a number of W3C DOM conforming fully dynamic visual browsers quite capable of delivering on even quite demanding code.
The second problem is that scripts like this one, and server-side counterparts (reading the HTTP User Agent header), were used to exclude browsers that did not fall into a set of browsers known to the author, regardless of whether those browsers were capable of displaying the offending site or not.
As more browsers were written, their authors discovered that if they honestly reported their type and version in their User Agent string they would likely be excluded from sites that they would otherwise be quite capable of displaying. To get around this problem browsers began spoofing the more popular versions, sending HTTP User Agent headers, and reporting navigator.userAgent strings, that were indistinguishable from, say, IE.
As a result, when the above script reports isIE5p as true, it is possible that the browser that is executing the script is one of numerous current browsers. Many of those browsers support enough of the features found on IE 5+ to allow most scripts to execute, but the trueness of isIE5p is not a valid indicator that the browser will support all of the IE 5+ DOM.
Now you might decide that a browser that lies about its identity deserves what it gets (though they started lying in order to make themselves usable in the face of near-sighted HTML and script authors), but it is worth bearing in mind that the IE 5 navigator.userAgent string is: "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT)".
IE 5 is in fact spoofing Netscape 4, and Microsoft started to do that for precisely the same reasons that motivate many current browsers to send User Agent headers, and report navigator.userAgent strings, that are indistinguishable from those of Microsoft browsers. No browser manufacturer wants (or ever has wanted) their browser to be needlessly excluded from displaying a web site that it is perfectly capable of handling just because the author of that site does not know it by name. And to prevent that they have followed Microsoft and taken action that has rendered the userAgent string (and the HTTP User Agent header) meaningless.
We are now at a point where the contents of the User Agent strings bear no relationship at all to the capabilities and features of the browser that is reporting it. The situation has gone so far that a number of javascript experts have stated that a standard quality test for an unknown script would include searching the source code of the script for the string "userAgent" and dismissing the script out of hand if that string is found.
A second browser detecting strategy uses the objects present in various browser DOMs and makes the assumption that the presence (or absence) of one or more objects indicates that a browser is of a particular type or version. I quickly found this example of typical code of this type:-
/* Warning: never use this script, or any script based on, or resembling, it.
*/
var isDOM=(document.getElementById)?true:false;
var isIE4=(document.all&&!isDOM)?true:false;
var isIE5p=(document.all&&isDOM)?true:false;
var isIE=(document.all)?true:false;
var isOP=(window.opera)?true:false;
var isNS4=(document.layers)?true:false;
Javascript performs automatic type conversion so when a boolean result is expected from an expression that evaluates to a non-boolean value that non-boolean value is (internally) converted to a boolean value (using the rules defined in the ECMAScript specification) and that boolean is used as the result.
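Those conversion rules can be sketched in plain ECMAScript (this example is an illustration added here, not part of the scripts under discussion), showing how the values a feature test typically meets convert to boolean:

```javascript
/* Illustrative sketch: how ECMAScript's internal ToBoolean operation
   treats the values that a feature-testing expression is likely to
   produce. Boolean, called as a function, applies exactly that
   conversion.
*/
var samples = [
    undefined,         // an unsupported property accessor resolves to this
    null,
    0,
    "",                // the empty string
    "getElementById",  // any non-empty string
    function(){},      // a method reference
    {}                 // any object
];
var converted = [];
for(var c = 0;c < samples.length;c++){
    converted[c] = Boolean(samples[c]);
}
// converted is [false, false, false, false, true, true, true]
```

Note that a reference to any object or function converts to true, which is what makes the object reference itself usable as the condition of a feature test.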
Take the first line:-
var isDOM=(document.getElementById)?true:false;
The conditional expression requires that the expression preceding the ? have a boolean result. The document.getElementById property accessor can resolve as one of two values depending on whether the getElementById function is supported in the browser in question. If it is supported then the accessor resolves as a function object, and is type converted to boolean true. If getElementById is not supported the accessor resolves as undefined, and undefined type converts to boolean false. Thus the expression preceding the question mark resolves as true or false and, based on that result, true or false is assigned to the variable isDOM.
Incidentally, this code is not the optimum method of assigning a boolean value based on the type-converted-to-boolean result of a property accessor. It is better to use the javascript NOT operator (!) twice, or to pass the object reference as the argument to the Boolean constructor called as a function. The NOT operator will type convert its operand to boolean and then invert it, so false becomes true and true becomes false. Passing that result as the operand for a second NOT operator inverts the boolean again, so a reference to a function object results in boolean true and an undefined reference results in boolean false. The Boolean constructor called as a function converts its argument to boolean and returns that value. The statement would become:-
var isDOM = !!document.getElementById;
/* - or - */
var isDOM = Boolean(document.getElementById);
Either form is shorter and faster than the original version, and certainly more direct.
The problem with this type of browser detecting script is that it is used to make assumptions about the browser's capabilities that are rarely valid. For example, the isDOM result, based on the browser's support for document.getElementById, is often used as the basis for the assumption that the browser has a fully dynamic DOM with methods such as document.createElement, replaceChild and appendChild. Browsers do not live up to that expectation; some are not that dynamic and, while they may implement some of the Core DOM Level 1 methods such as getElementById, they do not necessarily implement large parts of the various DOM standards, including all of the dynamic Node manipulation methods.
The result of the isIE5p test is intended to indicate that the browser is Internet Explorer 5.0 or above. However, Opera 7, IceBrowser 5.4, Web Browser 2.0 (Palm OS), Konqueror, Safari, NetFront, iCab and others will all produce a true value in isIE5p because they implement getElementById and the document.all collection. As a result, code that assumes that it will have all of the capabilities of IE 5.0+ available to it when isIE5p is true will as often as not be mistaken.
This problem applies to all of the tests above with the possible exception of the window.opera test. I am unaware of a second browser type that has implemented an opera object on the window object. But then Opera 7 is a radically different, and much more dynamic, browser than its preceding versions, though they all possess a window.opera object.
To get around the problem that multiple browsers implement the same features (even if they start off unique to one browser) script authors have attempted to find more discriminating features to test. For example, the following script extract is intended to work only on IE 5.0+ browsers:-
var isIE5p = !!window.ActiveXObject;

...

function copyToClip(myString){
    if(!isIE5p)return;
    window.clipboardData.setData("text", myString);
}
The ActiveXObject constructor is intended to be discriminating of an IE browser. However, this type of script still does not work. It has placed the competing browser manufacturers in exactly the same position as they were in when scripts tested the navigator.userAgent string and excluded them from accessing a site because they honestly reported that they were not IE. As a result I already know of one browser that has implemented a window.ActiveXObject function; it probably is a dummy and exists in the browser's DOM specifically to defeat the exclusion of that browser based on tests like the one above.
The assumption that the existence of one (or two) feature(s) in a javascript environment implies the existence of any feature beyond the ones tested is invalid. It is only made by those ignorant of the potential for diversity, imitation and the patterns of evolution in browser DOMs.
No matter how specifically the objects from which the inferences are derived are chosen, the technique itself sows the seeds of its own invalidity: an object that may validly be used to infer that a browser is of a particular type/version today probably will not still be valid next year. That adds a maintenance burden to a task that already presupposes an omniscient knowledge of all browser DOMs just in order to be effectively implemented at present.
The main point of the previous discussion is to convey the idea that it is impossible to detect exactly which type of browser (or version of that browser) a script is being executed on. The use that such scripts have been put to in the past (to exclude browsers from sites that they probably could have handled successfully) has motivated the manufacturers of browsers to render browser detecting nonviable as a strategy for dealing with the variations in browser DOMs.
Fortunately, not being able to identify a web browser type or version with more accuracy than could be achieved by generating a random number and then biasing the result by your favourite (meaningless, because they too are based on browser detecting and suffer exactly the same unreliability) browser usage statistics, does not need to impact upon your ability to script web browsers at all. A viable alternative strategy has been identified and developed to the point where it is possible to author javascript to be used on web pages without any interest in the type or version of the browser at all.
That alternative strategy is known as object or feature detecting. I prefer to use the term "feature detecting", partly because the resulting code often needs to test and probe a wider range of features than just those that could be described as objects, but mostly because "object detecting" is occasionally erroneously applied to the object inference style of script described above.
Feature detecting seeks to match an attempt to execute a script (or a part of a script) with the execution environment by testing features of that environment, where the results of the tests have a direct one-to-one relationship with the features that need to be supported in the environment for the code to execute successfully. It is the direct one-to-one relationship in the implemented tests that avoids the need to identify the specific browser, because whatever browser it is, it either will support all of the required features or it will not. That means testing the feature itself (to ensure that it exists on the browser) and possibly aspects of the behaviour of that feature.
Take the previous example, which illustrated how the ActiveXObject constructor might be used as the basis for a script that inferred the existence of, and ability to use, the clipboardData feature implemented on windows IE. Rather than inferring the browser's support for the clipboardData feature from some other unrelated feature, it should be fairly obvious that the feature that should be tested for, prior to attempting to write to the clipboard, is the clipboardData object itself and, further, that calling the setData method of that object should necessitate checking that it too is implemented:-
function copyToClip(myString){
    if((typeof clipboardData != 'undefined')&&
       (clipboardData.setData)){
        clipboardData.setData("text", myString);
    }
}
In this way the tests that determine whether the clipboardData.setData method is called have a direct one-to-one relationship with the browser's support for the feature. It is not necessary to be interested in whether the browser is the expected windows IE that is known to implement the feature, or whether it is some other browser that has decided to copy IE's implementation and provide the feature itself. If the feature is there (at least to the required extent) it is used, and if it is not there no attempt is made to use it.
The above feature detecting tests are done using two operations. The first employs the typeof operator, which returns a string depending on the type of its operand. That string is one of "undefined", "object", "function", "boolean", "string" and "number", and the test compares the returned string with the string "undefined". The clipboardData object is not used unless typeof does not return "undefined".
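As a sketch added here for illustration (not part of the original script), the full set of strings that typeof can return for native ECMAScript values is easily demonstrated:

```javascript
/* The six strings the ECMAScript (ES3) typeof operator can return
   for native values. Host objects are permitted to return other,
   implementation-defined strings, which is one more reason feature
   tests usually compare against "undefined" rather than against any
   of the positive results.
*/
var results = [
    typeof undefined,      // "undefined"
    typeof {},             // "object"
    typeof function(){},   // "function"
    typeof true,           // "boolean"
    typeof "",             // "string"
    typeof 1               // "number"
];
```

Note also that typeof never throws when its operand is an undeclared identifier, which is what makes typeof clipboardData a safe test even on browsers with no clipboardData at all.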
The second test is a type-converting test. The logical AND (&&) operator internally converts its operands to boolean in order to make its decision about what value it will return. If clipboardData.setData is undefined it will type-convert to boolean false, while if it is an object or a function the result of the conversion will be boolean true.
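The effect of that type-converting test can be sketched with plain objects standing in for the browser's host object (the names host and missing below are illustrative assumptions, not browser features):

```javascript
/* How the && operator's type-converting test behaves. A plain
   object with a setData method stands in for a browser's
   clipboardData object; an empty object stands in for a browser
   that lacks the method.
*/
var host = { setData:function(type, str){ return true; } };
var missing = {};

// When the first operand type-converts to true, && yields its
// second operand:
var supported = host.setData && "would call setData";

// When the first operand type-converts to false, && yields that
// first operand itself, and the second operand is never evaluated:
var unsupported = missing.setData && "would call setData";

// supported is the string; unsupported is undefined. This
// short-circuiting is why a guarded call never touches a method
// that the test has found to be absent.
```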
However, that function is not a particularly clever application of feature detecting because, while it avoids throwing errors in an attempt to execute clipboardData.setData on a browser that does not support it, it will still do nothing on such a browser. That is a problem when the user has been presented with a GUI component that gives them the impression that their interaction will result in something being written to the clipboard, but when they use it nothing happens. And of course nothing was going to happen if the browser in use did not support javascript or it had been disabled.
Ensuring that a script will not attempt to use a feature that is not supported is not sufficient to address the design challenge of creating scripts for the Internet. Testing the browser for the features that it does support makes it practical to handle a spectrum of browser DOMs, but the script design task also involves planning how to handle the range of possibilities. A range that goes from guaranteed failure to execute at all on browsers that do not support javascript, to full support for all of the required features.
You can tell from the script, prior to using it, when the browser does not support the clipboardData feature, but the user has no way of knowing why a button that promised them some action has failed to do anything. So in addition to matching the script to the browser's ability to execute it, it is also necessary to match the GUI, and the user's resulting expectations, to what the script is going to be able to deliver.
Suppose the copyToClip function was called from an INPUT element of type="button" and was intended to copy a company e-mail address into the clipboard; the HTML code for the button might look like:-
<input type="button"
       value="copy our contact e-mail address to your clipboard"
       onclick="copyToClip('info@example.com')">
We know that that HTML will do nothing if javascript is disabled/unavailable and we know that it will do nothing if the browser does not support the required features, so one option would be to use a script to write the button HTML into the document in the position in which the button was wanted when the browser provided the facility:-
<script type="text/javascript">
if((typeof clipboardData != 'undefined')&&
   (clipboardData.setData)&&
   (document.write)){
    document.write('<input type="button" ',
                   'value="copy our contact e-mail address',
                   ' to your clipboard" onclick="',
                   'copyToClip(\'info@example.com\')">');
}
</script>
Now the user will never see the button unless the browser supports the required features and javascript is enabled. The user never gets an expectation that the script will not be able to deliver (at least that is the theory, it is still possible for the user's browser configuration to prevent scripts from writing to the clipboard, but the user might be expected to know how their browser is configured and understand that the button is not in a position to override it).
If the copyToClip function is only ever called from buttons that are written only following the required feature detection then it can be simplified by the removal of the test from its body, as it would be shielded from generating errors on non-supporting browsers by the fact that there would be no way for it to be executed.
The document.write method is not the only way of adding GUI components to a web page in a way that can be subject to the verification of the features that render those components meaningful. Alternatives include writing to a parent element's innerHTML property (where supported, see FAQ: How do I modify the content of the current page?), or using the W3C DOM document.createElement (or createElementNS) methods and appending the created element at a suitable location within the DOM. Either of these two approaches is suited to adding the components after the page has finished loading, and that can be useful as some feature testing is not practical before that point. The approach used can be chosen based on the requirements of the script.
If the script is going to be using the document.createElement method itself then it is a good candidate as a method for inserting any required GUI components, and similarly with innerHTML. The document.write method is universally supported in HTML DOMs but is not necessarily available at all in XHTML DOMs.
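The createElement alternative might be sketched as follows (the function name addClipButton and the container id are assumptions for illustration; the feature tests mirror the earlier copyToClip example). On a browser lacking any required feature the page is simply left without the button, so no false promise is made to the user:

```javascript
/* Sketch: insert the clipboard button with the W3C DOM
   createElement method, but only after verifying every feature the
   button relies upon. Returns true when the button was added and
   false when the required features were absent.
*/
function addClipButton(containerId){
    if((typeof document != 'undefined')&&
       (document.getElementById)&&
       (document.createElement)&&
       (typeof clipboardData != 'undefined')&&
       (clipboardData.setData)){
        var container = document.getElementById(containerId);
        if(container&&container.appendChild){
            var button = document.createElement('input');
            button.type = 'button';
            button.value =
                'copy our contact e-mail address to your clipboard';
            button.onclick = function(){
                clipboardData.setData('text', 'info@example.com');
            };
            container.appendChild(button);
            return true;  // the button was added to the document
        }
    }
    return false;  // required features absent; no button is shown
}
```

Because the function is called after the page has loaded, it can also test for the container element itself, something a document.write approach cannot do.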
Other ways of handling the possibility that the browser will not support either javascript or the features required by the script used is to design the HTML/CSS parts of the page so that the script, upon verifying support for the features it requires, can modify, manipulate and transform the resulting elements in the DOM. But in the absence of sufficient script support the unmodified HTML presents all of the required content, navigation, etc.
This can be particularly significant with things like navigation menus. One style of design would place the content of the navigation menus, the URLs and text, in javascript structures such as Arrays. But either of javascript being disabled/unavailable on the client or the absence of the features required to support a functional javascript menu would leave the page without any navigation at all. Generally that would not be a viable web page, and not that good for search engine placement as search engine robots do not tend to execute javascript either so they would be left unable to navigate a site featuring such a menu and so unable to rank its content for listing.
A better approach to menu design would have the navigation menu content defined in the HTML, possibly as nested list elements of some sort, and once the script has ascertained that the browser is capable of executing it and providing the menu in an interactive form it can modify the CSS position, display and visibility properties, move the elements to their desired location, attach event handlers and generally get on with the task of providing a javascript menu. And the worst that happens when the browser does not support scripting or the required features is that the user (and any search engine robots) finds the navigation in the page as a series of nested lists containing links. Fully functional, if not quite as impressive as it could have been had the script been supported. This is termed "clean degradation" and goes hand in hand with feature detecting during the process of planning and implementing a browser script for the Internet.
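A minimal sketch of that clean-degradation pattern follows (the id siteMenu and the class names here are illustrative assumptions, not part of the article's examples). The menu is plain nested lists in the HTML; the function only converts it into a scripted menu when every feature it needs has been verified, and reports whether it did so:

```javascript
/* Sketch: enhance an HTML list menu only after feature detection
   succeeds. When any test fails the function returns false and the
   plain nested lists of links remain fully usable.
*/
function enhanceMenu(menuId){
    if((typeof document != 'undefined')&&
       (document.getElementById)){
        var menu = document.getElementById(menuId);
        if(menu&&menu.getElementsByTagName){
            var items = menu.getElementsByTagName('li');
            for(var c = 0;c < items.length;c++){
                /* A closure keeps a reference to the correct item
                   for each pair of handlers:
                */
                (function(item){
                    item.onmouseover = function(){
                        item.className = 'open';
                    };
                    item.onmouseout = function(){
                        item.className = '';
                    };
                })(items[c]);
            }
            /* A class on the list lets the CSS collapse the
               sub-lists only when scripting is known to work:
            */
            menu.className = 'scriptedMenu';
            return true;  // the menu is now interactive
        }
    }
    return false;  // leave the plain nested lists in place
}
```

Setting the collapsing class from the script, rather than in the static CSS, is the detail that keeps the menu usable when the script never runs.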
An important aspect of feature detecting is that it allows a script to take advantage of possible fall-back options. Having deduced that a browser lacks the preferred feature, it still may be possible to achieve the desired goal by using an alternative feature that is known to exist on some browsers. A common example of this is retrieving an element reference from the DOM given a string representing the ID attribute of that element. The preferred method would be the W3C Core DOM standard document.getElementById method, which is supported on the widest number of browsers. If that method was not available but the browser happened to support the document.all collection then it could be used for the element retrieval as a fall-back option. And for some types of elements, such as CSS positioned elements on Netscape 4 (where the document.layers collection may be used to retrieve such a reference), additional options may be available.
function getElementWithId(id){
    var obj;
    if(document.getElementById){
        /* Prefer the widely supported W3C DOM method, if
           available:-
        */
        obj = document.getElementById(id);
    }else if(document.all){
        /* Branch to use document.all on document.all only browsers.
           Requires that IDs are unique to the page and do not
           coincide with NAME attributes on other elements:-
        */
        obj = document.all[id];
    }else if(document.layers){
        /* Branch to use document.layers, but that will only work
           for CSS positioned elements and LAYERs that are not
           nested. A recursive method might be used instead to find
           positioned elements within positioned elements but most
           DOM nodes on document.layers browsers cannot be
           referenced at all.
        */
        obj = document.layers[id];
    }
    /* If no appropriate/functional element retrieval mechanism
       exists on this browser this function returns null:-
    */
    return obj||null;
}
Although that function is not very long or complex (without its comments), it does demonstrate a consequence of one style of implementation of feature detecting: it repeats the tests each time it is necessary to retrieve an element using its ID. If not too many elements need retrieving that may not be significant, but if many elements needed retrieving in rapid succession, and performance was significant, then the overhead of performing the feature detection on each retrieval may add up and impact on the resulting script.
An alternative is to assign one of several functions to a global getElementWithId variable, based on the results of the feature detecting tests, as the script loads.
var getElementWithId;
if(document.getElementById){
    /* Prefer the widely supported W3C DOM method, if available:- */
    getElementWithId = function(id){
        return document.getElementById(id);
    }
}else if(document.all){
    /* Branch to use document.all on document.all only browsers.
       Requires that IDs are unique to the page and do not coincide
       with NAME attributes on other elements:-
    */
    getElementWithId = function(id){
        return document.all[id];
    }
}else if(document.layers){
    /* Branch to use document.layers, but that will only work for
       CSS positioned elements. This function uses a recursive
       method to find positioned elements within positioned
       elements, but most DOM nodes on document.layers browsers
       cannot be referenced at all. This function is expected to be
       called with only one argument, exactly like the other
       versions.
    */
    getElementWithId = function(id, baseLayers){
        /* If the - baseLayers - parameter is not provided default
           its value to the document.layers collection of the main
           document:
        */
        baseLayers = baseLayers||document.layers;
        /* Assign the value of the property of the - baseLayers -
           object (possibly defaulted to the document.layers
           collection) with the property name corresponding to the
           - id - parameter to the local variable - obj:
        */
        var obj = baseLayers[id];
        /* If - obj - remains undefined (no element existed with
           the given - id -) try searching the indexed members of
           - baseLayers - to see if any of their layers collections
           contain the element with the corresponding - id:
        */
        if(!obj){ //Element not found.
            /* Loop through the indexed members of - baseLayers: */
            for(var c = 0;c < baseLayers.length;c++){
                if((baseLayers[c])&&           //Object at index - c.
                   (baseLayers[c].document)&&  //It has a - document.
                   /* And a layers collection on that document: */
                   (baseLayers[c].document.layers)){
                    /* Recursively call this function passing the
                       - id - as the first parameter and the layers
                       collection from within the document found on
                       the layer at index - c - in - baseLayers - as
                       the second parameter, and assign the result
                       to the local variable - obj:
                    */
                    obj=getElementWithId(id,baseLayers[c].document.layers);
                    /* If - obj - is now not null then we have found
                       the required element and can break out of the
                       - for - loop:
                    */
                    if(obj)break;
                }
            }
        }
        /* If - obj - will type-convert to boolean true (it is not
           null or undefined) return it, else return null:
        */
        return obj||null;
    }
}else{
    /* No appropriate element retrieval mechanism exists on this
       browser, so assign this function as a safe dummy. Values
       returned from calls to getElementWithId probably should be
       tested to ensure that they are non-null prior to use anyway,
       so this branch always returns null:-
    */
    getElementWithId = function(id){
        return null;
    }
}

/* Usage:-
    var elRef = getElementWithId("ID_string");
    if(elRef){
        ... //element was found
    }else{
        ... //element could not be found.
    }
*/
The feature detecting tests are performed once as the page loads, and one of many function expressions is assigned to the getElementWithId global variable as a result. The assigned function is the one most capable of retrieving the required element on the browser in use, but it is still necessary to check that the returned value is not null and to plan for the possibility of failure in the element retrieval process. Retrieval is guaranteed to fail on any browser that does not implement at least one of the element retrieval mechanisms used, as the default function just returns null.
It may not be sufficient to provide a function that does the best job of element retrieval that can be done on the browser in use. Other scripts, or parts of the script, may need to know how successful their attempts to retrieve elements are likely to be. The getElementWithId version that uses document.layers cannot retrieve elements that are not CSS positioned, and that may not be good enough for some scripts, while it may not matter to others. The document.getElementById and document.all versions should be able to retrieve any element given its ID. One option would be to set a couple of boolean flags to indicate how successful element retrieval can be expected to be: default the flags to false and set them to true in the branches that assign the various function expressions. Then, if other code is interested in what can be expected from the getElementWithId function, say in order to decide how to configure or default itself, it can check the pertinent flag.
var getElementWithId;
var canGetAnyElement = false;            //default any element flag.
var canGetCSSPositionedElements = false; //default CSS positioned flag.
if(document.getElementById){
    canGetAnyElement = true;             //set any element flag to true.
    canGetCSSPositionedElements = true;  //set CSS positioned flag true.
    getElementWithId = ...
}else if(document.all){
    canGetAnyElement = true;             //set any element flag to true.
    canGetCSSPositionedElements = true;  //set CSS positioned flag true.
    getElementWithId = ...
}else if(document.layers){
    canGetCSSPositionedElements = true;  //set CSS positioned flag true.
    /* The - canGetAnyElement - flag is not set in this branch because
       the document.layers collection does not make *all* elements
       available. */
    getElementWithId = ...
}else{
    /* Neither flag is set when the dummy function is assigned because
       it is guaranteed not to be able to retrieve any elements: */
    getElementWithId = function(id){
        return null;
    };
}

...

if(canGetCSSPositionedElements){
    /* Expect to be able to retrieve CSS positioned elements. */
    ...
    if(canGetAnyElement){
        /* Expect to also be able to retrieve any other elements that
           have an ID attribute. */
        ...
    }
}
The flags do not directly reflect which feature is going to be used for element retrieval; instead they reflect what can be expected from the getElementWithId function on the current browser. This allows a script that requires a particular capability (say, the retrieval of any element) to determine whether it will have that facility, without denying the facility to a script with a less demanding requirement.
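As an illustration, a dependent script might consult the flags to choose how far to take its enhancements. This is a hypothetical sketch (the function name and the returned strategy names are illustrative only, and the flags are passed as parameters so the fragment is self-contained):

```javascript
/* A hypothetical dependent script deciding how to configure itself
   from the two capability flags set during feature detection: */
function chooseMenuStrategy(canGetAnyElement, canGetCSSPositionedElements){
    if(canGetAnyElement){
        return 'full';       //Can enhance any element with an ID.
    }else if(canGetCSSPositionedElements){
        return 'positioned'; //Restrict itself to positioned elements.
    }
    return 'none';           //Leave the underlying HTML alone.
}
```

The dependent script never asks which browser it is on, only what level of retrieval it can expect.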
Another common task that needs to be approached differently on different browsers is the retrieval of the extent to which the user has scrolled a web page. The majority of browsers provide properties of the global object called pageXOffset and pageYOffset, which hold the relevant values. Some make the equivalent values available as scrollLeft and scrollTop properties on the "root" element (either in addition to the pageX/YOffset properties or instead of them). The task is complicated further by the fact that which element is the "root" element depends on various factors: in the past it was always the document.body element, but newer (CSS) standards compliant browsers (and browsers that can operate in various modes, including standards compliant mode) make document.documentElement the root element. Then there may be browsers that do not make the scrolling values available at all.
Because the pageXOffset and pageYOffset properties are implemented on the largest number of browsers, and their use avoids the need to worry about the "root" element, they are the preferred values to use. In their absence the problem moves on to identifying the "root" element, which is made easier by the browsers that understand standards compliant mode and provide a document.compatMode string property to announce which mode they are in. If that string property is missing, or its value is other than "CSS1Compat", then it is the document.body object that needs to be read for the scrolling values; otherwise the document.documentElement should be read. Testing for the presence of any of the scroll values themselves needs to be done with a typeof test, because they are numeric values: if implemented but set to zero, a type-converting test would return false, and that would not be an indicator of their absence.
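The difference between the two testing styles can be seen in a small sketch (the global-like object is taken as a parameter here purely so the fragment is self-contained):

```javascript
/* Why the typeof test is needed: an unscrolled page reports zero,
   and zero type-converts to boolean false. */
function hasPageOffsetTypeConverting(globalObj){
    return Boolean(globalObj.pageYOffset);  //Wrong: 0 reads as absent.
}
function hasPageOffsetTypeof(globalObj){
    return typeof globalObj.pageYOffset == 'number'; //0 is a number.
}
```

On a browser that implements pageYOffset but has not been scrolled, the first function wrongly reports the feature as absent, while the second correctly reports it as present.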
The following is an example that employs feature detection to decide which scroll values to read:-
/* The - getPageScroll - global variable is assigned a reference to a
   function. When that function is called for the first time it
   configures the script to read the correct values, if available,
   and then returns a reference to the object - itrface - which
   provides methods that retrieve the scroll values. Subsequent
   invocations of the getPageScroll function do not repeat the
   configuration, they just return a reference to the same interface
   object.

   Because the configuration stage may need to check whether the
   document.body element exists, the function cannot be called until
   the browser has parsed the opening body tag, as prior to that
   point there is no document.body element.

   Usage:-

   var scrollInterface = getPageScroll();
   var scrollX = scrollInterface.getScrollX();
   var scrollY = scrollInterface.getScrollY();

   The interface methods return NaN if the browser provides no method
   of reading the scroll values. A returned NaN value can be tested
   for with the - isNaN - global function, but it should not be
   necessary to perform the isNaN test on more than the first
   retrieval, because if the returned value is NaN it will always be
   NaN, and if it is not it should never be.

   if(isNaN(scrollX)){
       //No scroll value retrieval mechanism was available on this
       //browser.
   }

   (The script performs an inline execution of a function expression
   which returns the function object that is assigned to the
   - getPageScroll - global variable. This produces a closure that
   preserves the local variables of the executed function expression,
   allowing the execution context of the function expression to
   provide a repository for the configuration results, keeping them
   out of the global namespace. The format is:-

                  v--- Anonymous function expression ---v
   var variable = (function(){ ...; return returnValue; })();
      Inline execution of the function expression -------^^

   The value returned by the inline execution of the anonymous
   function expression is assigned to the variable. If that returned
   value is a reference to an inner function object then the
   assignment will form a closure.)
*/
var getPageScroll = (function(){
    /* The local variable "global" is assigned the value - this -
       because the function expression is executing in the global
       context and - this - refers to the global object in that
       context. The global object is usually the - window - object on
       web browsers, but this local variable is going to be used in
       the configuration tests for convenience: */
    var global = this;
    /* notSetUp - is a flag to indicate when the script has done the
       setup configuration: */
    var notSetUp = true;
    /* readScroll - is initially a dummy object that is used to
       return the NaN values whenever no functional scroll value
       retrieval mechanism is available on the browser. It is
       assigned a reference to the object from which the scroll
       values can be read if the feature detection determines that
       to be possible: */
    var readScroll = {scrollLeft:NaN, scrollTop:NaN};
    /* Using the local variables - readScrollX - and - readScrollY -
       to hold the property names allows the same functions to read
       both the pageX/YOffset properties of the global object and
       the scrollTop/Left properties of the "root" element, by
       assigning different values to the variables. These are the
       defaults: */
    var readScrollX = 'scrollLeft';
    var readScrollY = 'scrollTop';
    /* The - itrface - local variable is assigned a reference to an
       object, and it is this object that is returned whenever
       getPageScroll() is called. The object has two properties,
       getScrollX and getScrollY, which are assigned the values of
       two anonymous function expressions. These functions are inner
       functions and as a result have access to the local variables
       of the function that contains them (the anonymous function
       expression that is executed inline in order to assign a value
       to the getPageScroll global variable). The use of a square
       bracket property accessor, reading whatever object has been
       assigned to the variable - readScroll - with a property name
       corresponding to the value assigned to whichever of the
       variables - readScrollX - or - readScrollY - is employed,
       allows the functions to use the simplest code possible to
       provide values for all of the permutations resulting from the
       feature detection derived configuration: */
    var itrface = {
        getScrollX:function(){
            return readScroll[readScrollX];
        },
        getScrollY:function(){
            return readScroll[readScrollY];
        }
    };
    /* The - setUp - inner function is called to perform the feature
       detection and configure the variables that will be employed
       in reading the correct scroll values. It sets the - notSetUp -
       flag to false once it has been executed so that configuration
       only happens the first time that a request for the interface
       object is made: */
    function setUp(){
        /* As the pageX/YOffset properties are the preferred values
           to use they are tested for first. They are not both
           tested because if one exists the other can be assumed to
           exist for symmetry. The testing method is a - typeof -
           test to see if the value is a number. A type-converting
           test cannot be used because the number zero would result
           in boolean false, and a pageXOffset value will be zero if
           the page has not been scrolled: */
        if(typeof global.pageXOffset == 'number'){
            /* If pageXOffset is a number then the value of the
               - global - variable (assigned a reference to the
               global object earlier) is assigned to the
               - readScroll - variable, and the strings "pageXOffset"
               and "pageYOffset" are assigned to the - readScrollX -
               and - readScrollY - variables so they will be the
               property names used to access the - readScroll -
               (now the global) object: */
            readScroll = global;
            readScrollY = 'pageYOffset';
            readScrollX = 'pageXOffset';
        }else{
            /* If pageXOffset is undefined it is time to find out
               which object is the "root" element. First, does the
               browser have a - document.compatMode - string, and if
               it does, is its value "BackCompat", "QuirksMode" or
               "CSS1Compat"? Instead of comparing the string
               directly it is searched for the substring "CSS",
               which might make the script more robust in the face
               of possible future "CSSnCompat" modes, which are
               unlikely to demand that the "root" element is moved
               again. The test also verifies that there is a
               - document.documentElement - to read and that its
               - scrollLeft - property is a number: */
            if((typeof document.compatMode == 'string')&&
               (document.compatMode.indexOf('CSS') >= 0)&&
               (document.documentElement)&&
               (typeof document.documentElement.scrollLeft == 'number')){
                /* The - readScrollX - and - readScrollY - variables
                   are already defaulted to the required strings so
                   it is only necessary to assign a reference to
                   - document.documentElement - to the - readScroll -
                   variable: */
                readScroll = document.documentElement;
            /* If the browser is not in the appropriate mode the
               scroll values should be read from the
               - document.body - element, assuming it exists on this
               browser and that its - scrollLeft - property is a
               number: */
            }else if((document.body)&&
                     (typeof document.body.scrollLeft == 'number')){
                /* Again, only - readScroll - needs assigning, this
                   time a reference to - document.body: */
                readScroll = document.body;
            }
            /* No other scroll value reading options exist, so if
               - readScroll - has not been assigned a new value by
               this point it will remain a reference to the object
               with the NaN valued properties. */
        }
        notSetUp = false; //No need to repeat configuration.
    }
    /* The inline execution of the anonymous function expression
       returns with the following statement. It returns an inner
       function expression, and it is that function that will be
       called when getPageScroll() is executed. Doing this forms a
       closure, preserving all of the local variables and functions
       defined within the executed anonymous function expression.
       Calling that returned function as - getPageScroll() -
       executes the setUp function, but only if it has not already
       been called, and returns a reference to the - itrface -
       object: */
    return (function(){
        if(notSetUp){ //If the - notSetUp - variable is still true.
            setUp();  //Execute the - setUp - function.
        }
        return itrface; //Return a reference to - itrface.
    });
})(); //Inline execution of the anonymous function expression, one-off!
The effect of this code is to match the browser's ability to provide the scrolling information with a script's desire to read it, through a simple and efficient interface that acts on the results of feature detecting tests that it applies only once; and if the browser does not support any method of reading the scroll values, the return values from the interface methods indicate that fact by being NaN. It does not matter that Netscape 4 will be reading the global pageX/YOffset properties, that IE 4 will read the scrollTop/Left properties from document.body, or that IE 6 will read those values from one of two possible objects based on the document.compatMode value. What is important is that if unknown browser XYZ provides any of those mechanisms for reporting the scroll values, the script is going to be able to use them, without ever knowing (or caring) that it is browser XYZ that it is executing on.
Feature detecting is not restricted to features of the DOM; it can be extended to include features of the javascript language implementation. For example, the String.prototype.replace function in later language versions will accept a function reference as its second argument, while earlier versions only accept a string in that context. Code that wants to use the function argument facility of the replace method will fail badly if it is not supported on the browser.
As usual, a feature-detecting test for the implementation's ability to support function arguments with the replace method has to be a direct test of that feature. The following example test takes advantage of the fact that a browser which only supports the string argument version of replace will type-convert a function reference used in that context into a string. The replace method uses a Regular Expression (its first argument) to identify parts of a string and then replaces them with a string provided as its second argument. If the second argument is a function, and the browser supports the function argument, that function is called and its return value replaces the identified parts of the string.
By providing a function expression that returns an empty string as the second argument, and a Regular Expression that identifies the entire original string as the first argument, the operation of the replace method will result in an empty string if the function argument is supported. But if only string arguments are supported, the function will be type-converted into a string, and that string will not be empty, so the result of evaluating the replace method will not be an empty string. Applying a NOT (!) operation to the resulting string type-converts it into a boolean value and inverts it, returning true for the empty string; the non-empty string would result in false.
/* The original string is the one letter string literal "a". The
   Regular Expression /a/ identifies that entire string, so it is
   the entire original string that will be replaced. The second
   argument is the function expression - function(){return '';} -,
   so the entire original string will be replaced with an empty
   string if the function expression is executed. If it is instead
   type-converted into a string, that string will not be an empty
   string. The NOT (!) operation type-converts its operand to
   boolean and then inverts it, so the result of the test is boolean
   true if function references are supported in the second argument
   of the - replace - method, and false when they are not
   supported: */
if(!('a'.replace(/a/, (function(){return '';})))){
    ... //function references OK.
}else{
    ... //no function references with replace.
}
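Once the result of such a test has been stored, a script can branch to whichever implementation the browser can support. The following sketch (the helper name capitaliseWords is illustrative, not from the original text) uses the function argument when it is available and falls back to a manual loop when it is not:

```javascript
/* Store the result of the feature test once: */
var fnReplaceOK = !('a'.replace(/a/, (function(){return '';})));

/* A hypothetical helper that upper-cases the first letter of each
   word, choosing its implementation based on the test result: */
function capitaliseWords(s){
    if(fnReplaceOK){
        /* Use the function argument to - replace - directly: */
        return s.replace(/\b[a-z]/g, function(match){
            return match.toUpperCase();
        });
    }else{
        /* Fall back to splitting and re-joining the string: */
        var parts = s.split(' ');
        for(var c = 0;c < parts.length;c++){
            parts[c] = parts[c].charAt(0).toUpperCase() +
                       parts[c].substring(1);
        }
        return parts.join(' ');
    }
}
```

Either branch produces the same result; the feature test only decides which mechanism delivers it.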
The common thread with feature detecting is that it is the code that is going to use the features, and the nature of those features, that defines how support for the required feature needs to be tested. Once you get used to the idea it starts to become obvious which tests need to be applied to verify a browser's support for a feature, and then it is time to work on the efficient application of feature detection.
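One approach to that efficient application (a sketch, not from the original text) is a small helper that evaluates each feature test at most once and caches the boolean result, in the same spirit as the load-time branching shown earlier:

```javascript
/* A hypothetical helper: wrap a feature test so the (possibly
   expensive) test expression is evaluated no more than once. */
function cachedTest(testFn){
    var done = false, result;
    return function(){
        if(!done){
            result = !!testFn(); //Run the test once, coerce to boolean.
            done = true;
        }
        return result; //Subsequent calls return the cached result.
    };
}

/* For example, wrapping the - replace - function argument test: */
var supportsFnReplace = cachedTest(function(){
    return !('a'.replace(/a/, (function(){return '';})));
});
```

After the first invocation, calling supportsFnReplace() costs no more than a function call and a variable read.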
Javascript as a language is not that complex; it may have its quirks, but it can be defined entirely in the 173 pages of the ECMA specification (3rd edition). The challenge of authoring javascript comes from the diversity of execution environments. When authoring for the Internet, nothing is known about the receiving software in advance, and even when that software is a web browser that will execute javascript, there is still a spectrum of possible DOM implementations to contend with.
The combination of the facts that it is impossible to determine which browser is executing the script, and that it is impossible to be familiar with all browser DOMs can be rendered insignificant by using feature detection to match code execution with any browser's ability to support it. But there is still going to be a diversity of outcomes, ranging from total failure to execute any scripts (on browsers that do not support javascript, or have it disabled) to full successful execution on the most capable javascript enabled browsers.
The challenge when designing scripts is to cope with all of the possibilities in a way that makes sense for everyone. As those possibilities will always include browsers incapable of executing javascript at all, the starting point must be pages based on (valid) HTML that contain all of the required content, allow the necessary navigation and are as functional as they purport to be (possibly with the backing of server-side scripting, which does not have any of the problems of client side scripting). On top of that reliable foundation it is possible to layer the scripts: feature detecting and adding scripted enhancements when the browser is capable of supporting them, and cleanly degrading to the underlying, reliable HTML when it is not.
A well designed script, implementing a suitable strategy, can enhance the underlying HTML page, exploiting the browser's capabilities to the maximum extent possible, while still exhibiting planned behaviour in the absence of any (or all) desired features and degrading cleanly where necessary. Nobody should either consider themselves a skilled Internet javascript author, or deprecate javascript as a language and/or browser scripting as a task, until they have demonstrated an ability to write a non-trivial script that achieves that goal.