16 events
when · what · by · license · comment
Aug 9, 2023 at 10:33 comment added Chris H @R..GitHubSTOPHELPINGICE Another side of the same problem is that academic pages used to be just text, images and layout. Then an unholy alliance of marketing and IT departments decided it would be a nice idea to spend lots of money on a content management system that made everyone's page look the same and required no skill to update beyond the ability to put up with a dreadful interface. But that CMS is now a source of vulnerabilities: both security flaws directly, and the often bigger issue that the provider gets taken over by a larger rival and the product is no longer supported.
Aug 9, 2023 at 9:54 comment added JackRed @hobbs Now, obviously it doesn't mean we should just delete content and say to hell with it. As raised in the question and some of the other answers, a lot of reasons exist to keep those pages alive. In a perfect world, the university would migrate every old personal webpage to an up-to-date system. However, you'd probably need the authorization of each author to do that, pay someone to do the work, etc., just like replacing a batch of books would cost money and time. But libraries exist to provide books, so they'd do it. Universities may have other interests.
Aug 9, 2023 at 9:51 comment added JackRed @hobbs Realistically, no. There may have been less code involved, as it was "simple" HTTP, maybe fewer people looking for security issues, and certainly the security culture at the time was very different from today's. But no, things were not free of security issues; nothing in computing is. The web is not broken, but things need to change. If you found out books were being printed with lead in their hard covers, you would remove them for safety reasons...
Aug 8, 2023 at 21:51 comment added R.. GitHub STOP HELPING ICE @avid: If the server has no capacity for hosted pages to run any server-side code, then for the most part you can assume it's not problematic.
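[A minimal sketch of the static-only hosting model R.. describes, assuming Python's standard library and a hypothetical "legacy_pages" directory; the server can only read files and return their bytes, so the hosted pages themselves add no server-side attack surface:

    # Static-only server: no hosted page can execute code on the server.
    # "legacy_pages" is a hypothetical directory of old personal sites.
    from functools import partial
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    handler = partial(SimpleHTTPRequestHandler, directory="legacy_pages")
    HTTPServer(("", 8080), handler).serve_forever()

Whether a given host actually works this way is the question avid raises above: it is only safe to assume once you know the server offers no execution capability at all.]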
Aug 8, 2023 at 21:36 comment added avid @R..GitHubSTOPHELPINGICE Indeed, many legacy websites will present minimal risk. However, it is hard to identify the few that may be problematic without digging through the guts of every single site.
Aug 8, 2023 at 21:12 comment added R.. GitHub STOP HELPING ICE @IMSoP: It's garbage when you're using it in place of what should be a static site, not when you're actually doing something with it.
Aug 8, 2023 at 20:30 comment added hobbs @IMSoP point being, we need a format for stuff we want people to actually be able to read in the future. Currently that's PDF, which is extremely disappointing.
Aug 8, 2023 at 20:22 comment added hobbs @IMSoP um, very easily? About as hard as typing gopher sdf.org. Or, if I'm away from a reasonable machine, there's a web proxy.
Aug 8, 2023 at 20:09 comment added IMSoP @hobbs I think that's meaningless nostalgia. The CGI standard for running programs based on a web request dates back 30 years, only a couple of years younger than the web itself. From the very beginning, Tim Berners-Lee envisioned the WWW as something interactive, with early prototypes operating more like a wiki, with viewing and editing all part of the same experience. And one of its great benefits was that it did standardise formats and access schemes; how easily do you think you could access a Gopher page right now?
Aug 8, 2023 at 19:43 comment added IMSoP @R..GitHubSTOPHELPINGICE Apart from the obvious irony of typing "web application garbage" into a web application, I've seen plenty of academic sites with interactive demos, using a variety of obsolete technologies: Java applets, Flash widgets, outdated JavaScript... And that's before you get to server-side languages. Ever see a page with "cgi-bin" in the URL? That's almost certainly running some very dodgy old server code, which might have remotely exploitable flaws, or just plain not work if copied onto a newer server.
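[A hypothetical sketch, not taken from any real site, of both cgi-bin failure modes IMSoP names, written in Python for concreteness: the "name" query parameter reaches a shell unsanitized, so a request like ?name=;id is remote command injection, and the cgi module was removed in Python 3.13, so the same script also just plain stops working on a newer server:

    #!/usr/bin/env python
    # Hypothetical legacy CGI script: exploitable and bit-rotten at once.
    import cgi      # deprecated in Python 3.11, removed in 3.13
    import os

    form = cgi.FieldStorage()
    name = form.getvalue("name", "world")   # attacker-controlled input

    print("Content-Type: text/html\n")      # CGI headers plus blank line
    os.system("echo Hello, " + name)        # unsanitized -> shell injection
]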
Aug 8, 2023 at 19:38 comment added R.. GitHub STOP HELPING ICE In my experience, academic pages like this are proper documents, not web application garbage. The idea that they're software-like and have to be maintained for security is nonsense.
Aug 8, 2023 at 18:05 comment added Richard Rast @hobbs Yes indeed, it's a problem. Once upon a time websites were mostly text and layout information, and preserving that is not very difficult. But now they are better described as software. Running software requires sporadic maintenance, or it rots. If the professor has left, nobody is maintaining that software; realistically it will run for a while on its own, but eventually it will stop working.
Aug 8, 2023 at 18:02 comment added hobbs @JackRed which only means that the web is utterly broken. HTTP used to be able to serve simple content to visitors without "security issues". Imagine if libraries had a policy of removing all books over 10 years old because they weren't typeset using the latest process or because the paper manufacturer was "no longer supporting that version".
Aug 8, 2023 at 15:09 vote accept charmoniumQ
Aug 8, 2023 at 13:38 comment added JackRed I agree with the security reasons. Especially when a lot of personal pages are set up uniquely rather than through a common setup the university can easily maintain. Having to check 30+ different pages, created over several decades with very different (but probably all legacy) code, and probably hosted on different parts of the infrastructure (because of the different schools/research institutes), sounds like a nightmare. Obviously you can just not check them, but then security issues will arise.
Aug 8, 2023 at 9:20 history answered avid CC BY-SA 4.0