
I am a web developer currently writing my own (full-stack) site. There are methods for identifying the browsers that users use to interact with a site.

I would like to prepare the server to offer a cut-down version of the site (with "safe" HTML) for old browsers (Netscape and the Dreamcast's browser come to mind). Does anybody know whether such browsers send any data I could use to make the switch?

  • 2
    As a Pale Moon user I'd like to say I truly appreciate you going to this effort. PM isn't even ancient: it's current, it just doesn't support a couple of things like Web Components and ES6 modules - yet many sites rely on these nowadays and have no polyfill or graceful degradation, going against what we were all taught to do 10-20 years ago.
    – Keiji
    Commented Nov 23, 2022 at 13:07
  • @Keiji: count yourself lucky there even is a website. Those ES6-dependent websites probably consider they're making a grand concession by offering anything other than an app. Commented Nov 23, 2022 at 19:43

3 Answers

59

If you want server-side detection, you’d probably have to rely on the User-Agent sent by the browser.
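
As a starting point, something along these lines could work (a rough sketch assuming a Node.js server using the built-in http module; the User-Agent patterns checked here are illustrative placeholders, not a complete detection list):

    const http = require('http');

    http.createServer(function (req, res) {
      // User-Agent strings vary wildly; these checks are illustrative placeholders,
      // not a complete or reliable detection list.
      var ua = req.headers['user-agent'] || '';
      var looksAncient = /^Mozilla\/[1-4]\./.test(ua) &&
                         ua.indexOf('MSIE') === -1 &&
                         ua.indexOf('Gecko') === -1;

      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end(looksAncient
        ? '<html><body><p>Safe, cut-down page</p></body></html>'
        : '<html><body><p>Full-featured page</p></body></html>');
    }).listen(8080);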

A better approach would be to serve the safe HTML version of the site by default, and enrich it on more capable browsers. This is beneficial even on new browsers since the user gets the content immediately.
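
As a rough illustration of that approach, the page ships as complete HTML, and a script referenced from it only adds behaviour when the features it needs exist (the element id and the enhancement itself are made-up placeholders):

    // Assumed to be referenced from an already complete, readable HTML page.
    // If an old browser never runs this script, the page still works as served.
    if (document.getElementById && document.addEventListener) {
      var nav = document.getElementById('main-nav'); // hypothetical element
      if (nav) {
        nav.className += ' js-enhanced'; // e.g. let CSS reveal richer controls
        nav.addEventListener('click', function (event) {
          // handle the richer client-side behaviour here
        });
      }
    }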

  • 14
    This also has the distinct advantage that it will give non-visual users a working site in most cases without you having to code for their browsers separately. Commented Nov 21, 2022 at 12:47
  • 1
    I realize this goes beyond the scope of a retro-computing question, but this answer begs the question: "If you initially serve 'old' HTML by default, and you assume the browser doesn't support JavaScript, then how would the page 'enrich itself' on more capable browsers? Is this a matter of sending the JavaScript and 'seeing what happens'?"
    – Geo...
    Commented Nov 23, 2022 at 15:14
  • @Geo... that’s how HTML + JS was used back in the day (I did lots of HTML + JS development in the early 2000s). You’d have a complete HTML page with CSS, and references to JS; if the JS loaded, it would enrich HTML controls and provide additional behaviour (up to and including full-blown AJAX). The whole thing (HTML, CSS, JS) had to be written carefully so that it would be compatible with the desired target browsers, which could be painful, but it was doable. Commented Nov 23, 2022 at 15:37
  • @Geo... that’s also some of the thinking, at least for some developers, behind modern HTML5 + CSS; it’s possible to build highly interactive web pages using only HTML and CSS, and then if necessary, more complex behaviours can be implemented on top in JS. Commented Nov 23, 2022 at 15:39
  • @StephenKitt, thanks for the explanation. I was doing desktop applications during that time period so I missed all the fun associated with early web development. I remember trying to build a page (as an experiment, maybe 1997-ish) that would receive a live-feed of data from my server and refresh/update content before (I think) AJAX was a thing (I might have just been ignorant of AJAX)... After some trial and error, jiggery-pokery, a few iframes, etc... I got it sort of working, but it was at that moment I decided web development was not for me. :-)
    – Geo...
    Commented Nov 23, 2022 at 16:01
4

There are various versions of HTTP; a really old browser will only support HTTP/1.0, while newer ones, depending on their age, will support HTTP/1.1, HTTP/2, or HTTP/3.

See also https://superuser.com/questions/1659248/how-does-browser-know-which-version-of-http-it-should-use-when-sending-a-request for some more information.

So depending on how full-stack you want to go, you can use that information to detect really old browsers. If the initial request from the browser is GET / HTTP/1.0, it's probably ancient; if it sends at least GET / HTTP/1.1, it's at least somewhat newer. Unfortunately, this won't help with anything newer than that, as browsers only use HTTP/2 over TLS, where the version is negotiated rather than read from the request line.
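
How you read the version depends on your stack; as one example, here is a sketch assuming a Node.js server with the built-in http module (the page contents are placeholders):

    const http = require('http');

    http.createServer(function (req, res) {
      // req.httpVersion is '1.0' for ancient clients and '1.1' for most others;
      // HTTP/2 or HTTP/3 traffic would arrive over TLS via a different server setup.
      var ancient = req.httpVersion === '1.0';
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end(ancient
        ? '<html><body><p>Cut-down page for HTTP/1.0 clients</p></body></html>'
        : '<html><body><p>Regular page</p></body></html>');
    }).listen(8080);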

Also, you could have your response set a cookie and include a CSS link. Really old browsers won't request the CSS, so if the CSS does get requested, the client has at least some support for it. At that point the HTML has already been sent, so it may be too late for your landing page, but you can use the information on subsequent pages. (This also means that people using text-based browsers like Lynx, or people automating things with curl, will get the non-CSS version, which is probably beneficial in this context.)
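
One possible way to wire that up, again assuming a Node.js server (the /probe.css URL and cookie name are made up, and this variant sets the cookie on the CSS response itself so that later page requests can check it):

    const http = require('http');

    function page(body) {
      return '<html><head><link rel="stylesheet" href="/probe.css"></head>' +
             '<body>' + body + '</body></html>';
    }

    http.createServer(function (req, res) {
      if (req.url === '/probe.css') {
        // The stylesheet was actually fetched: remember that for later requests.
        res.writeHead(200, {
          'Content-Type': 'text/css',
          'Set-Cookie': 'supportsCss=1; Path=/'
        });
        res.end('/* empty probe stylesheet */');
      } else if ((req.headers.cookie || '').indexOf('supportsCss=1') !== -1) {
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.end(page('<p>Richer page for CSS-capable browsers</p>'));
      } else {
        // First visit, or a browser that never fetched the CSS.
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.end(page('<p>Safe HTML page</p>'));
      }
    }).listen(8080);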

But be aware that none of this is foolproof. The user might be using a proxy, with HTTP/1.0 between the browser and the proxy but something newer between the proxy and your site. The user might be behind some "web accelerator" that evaluates and downloads links itself even before the browser starts reading the page, so the accelerator fetches the CSS even if the browser doesn't. If you really want to make sure a Netscape 1.0 user can access the downgraded HTML, it's probably best to provide a link to it that can be used even if the rest of the site looks like gibberish.

  • 8
    A really old browser might only support HTTP/0.9.
    – psmears
    Commented Nov 21, 2022 at 14:53
  • 3
    Really old browsers don't support cookies: Internet Explorer, for example, didn't gain support until version 2.
    – Mark
    Commented Nov 22, 2022 at 4:20
3

The key thing to learn here is that you generally should not be trying to identify the browser (because this is always a game of cat and mouse with a mountain of edge cases); rather, you should serve content which will work correctly regardless of browser.

Browsers that don't support some JS feature will generally raise an exception or not run the JS at all. The best way to work with that is to provide basic HTML and server-side functionality that works as it should even if JS is completely disabled, and then use JS to replace this functionality with an optimised client-side implementation. That way, browsers that support the necessary JS never end up triggering the server-side functionality, while in those where JS is disabled, doesn't run, or errors out, the server-side functionality runs instead, and both groups of users get a good experience.

Even in the case where the server-side functionality somehow gets suppressed but the JS still doesn't work properly, the user at least has an easy workaround: disable JS entirely for your site, and the server-side functionality will work. (You should of course try to avoid this, e.g. by making the "prevent default" step the last thing the JS does; but by allowing your site to work without JS, which you need to do anyway for... browsers without JS, you get this workaround for free.)
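
A rough sketch of that pattern (the form id, URL, and response handling are made-up placeholders):

    // The form is assumed to exist in the HTML and to work on its own:
    // submitting it posts to /comments and the server renders the result.
    var form = document.getElementById('comment-form');
    if (form && form.addEventListener && window.fetch && window.FormData) {
      form.addEventListener('submit', function (event) {
        fetch('/comments', { method: 'POST', body: new FormData(form) })
          .then(function (response) {
            // update the page in place with the result here
          });
        // preventDefault() comes last, so if anything above throws, the browser
        // falls back to the normal (server-side) form submission.
        event.preventDefault();
      });
    }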

Additional tips you might consider:

  • Give anything that could be considered a "page" its own URL.
  • Make sure that going to a page's URL renders all the appropriate content for that page on the server. You can easily test this by using a browser with JavaScript disabled.
  • If you want to use JavaScript to improve the performance of navigating between pages when <a href="..."> links are clicked, learn about history.pushState (if you haven't already); see the sketch after this list.
  • If you're doing anything interactive, use a form with a submit button. Make sure that if JS is disabled, submitting the form navigates (or posts, as appropriate) to a page that correctly processes the action and renders the result on the server side. Meanwhile, use CSS to style the form however you like (it doesn't even have to look like a form, which is why this works for just about any interactive activity), and use JS to intercept the form submission and handle it via appropriate means (AJAX, pushState, etc.). This way, browsers will automatically handle things client-side if they can and fall back on the server side if they can't.
  • If you want JS to work in as many places as possible (rather than relying only on the server-side functionality for older or independent browsers), avoid using modern syntax: if a browser doesn't support some particular syntax feature, the entire script will fail to run because of a parse error. You can still use modern syntax for development if you have a build process which transpiles to older JS, say ES5, and a way of verifying that your transpiled output is indeed valid under that version of JS. (At the end of the day, though, if you do this you need to decide how old is old enough, and that comes down to downloading a bunch of old or niche browsers and testing on all of them - you'll always miss something this way, which is why it's vital to also have the server-side fallback.)
  • Remember that CSS can also cause potential trouble for old or independent browsers. For example, here on Stack Exchange, certain buttons become invisible when hovering over them in Pale Moon; that's not a disaster (because they're fine when not hovering over them), but I'm sure it's not the intended experience! I'm not sure what exactly to suggest to avoid causing CSS-related problems, but it's worth bearing in mind and researching how fallback tends to work in CSS in general, for an intuition of where to expect problems.
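
As a rough sketch of the history.pushState tip above (the loadPage() helper is a hypothetical function you would write to fetch and swap in the new content):

    // Links keep working as ordinary navigation if this script never runs;
    // loadPage() is a hypothetical helper, not a built-in.
    if (window.history && history.pushState && document.addEventListener) {
      document.addEventListener('click', function (event) {
        var link = event.target.closest ? event.target.closest('a[href]') : null;
        if (!link || link.origin !== location.origin) {
          return; // let external links (and unsupported browsers) navigate normally
        }
        history.pushState({}, '', link.href);
        loadPage(link.href);    // hypothetical: fetch and render the new page
        event.preventDefault(); // last, so failures fall back to normal navigation
      });

      window.addEventListener('popstate', function () {
        loadPage(location.href); // keep Back/Forward working
      });
    }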
  • In fact, having any CSS at all, as a separate resource, can crash some versions of Netscape Navigator 4... Commented Nov 23, 2022 at 14:00
