Last month, in part 1, I concluded that the problem was too much choice and not enough curation, but I promised that an upcoming blog series would change all of that.
A lofty goal, for sure, and as yet undelivered. This blog series is still very much on my to-do list, and not something I need a Kickstarter project to get done. The “next few weeks” commitment was deliberately vague, and a better estimate would be “coming in 2014”.
My first thought on reading this was along the lines of “what outrageous nonsense!” However, rather than becoming a monster, perhaps Ruby Rogers can teach us all a lesson in seeing the other person’s viewpoint.
So let us examine this claim in a bit more detail. The most obvious thing to do is to compare web pages from popular sites using the Wayback Machine. Screenshots for Microsoft are at the top of this page, and you can take a look at Apple’s and Amazon’s best efforts from 15 years ago below. But is this a straw-man argument? Just because the sites look very different, that doesn’t mean the programming behind them is very different.
So what were we using and what was client side programming like back in 1999? Netscape Navigator was a big deal, but the hottest thing was the new Internet Explorer version 5. There was no Firefox, and Safari was only known as a tourist trip to Africa.
Unfortunately, caniuse.com doesn’t go back as far as 1999, and the closest we can get on that site is to compare today’s browsers against IE5.5, which was released in July 2000.
Interestingly, Internet Explorer 5 did provide a means of making XML HTTP requests via its ActiveX object, and iframe techniques are even older, allowing asynchronous fetching since 1996. However, according to Wikipedia, in 1999 “asynchronous Web technologies remained fairly obscure”, and the term AJAX was not coined until many years later, in 2005.
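To make this concrete, here is a minimal sketch of the cross-browser feature-detection dance that era demanded. The helper name `createRequestObject` is my own invention for illustration; the real objects are `ActiveXObject("Microsoft.XMLHTTP")` in IE5/5.5/6 and the native `XMLHttpRequest` that Mozilla shipped later (and IE7 adopted).

```javascript
// Hypothetical helper illustrating the 1999-era pattern: feature-detect
// whichever async transport the browser offers, falling back through eras.
function createRequestObject() {
  if (typeof XMLHttpRequest !== "undefined") {
    // Native object: Mozilla/Netscape 7, Safari, and eventually IE7+
    return new XMLHttpRequest();
  }
  if (typeof ActiveXObject !== "undefined") {
    // IE5/5.5/6: the XML HTTP request lives behind an ActiveX control
    return new ActiveXObject("Microsoft.XMLHTTP");
  }
  // Neither available (e.g. Netscape 4): no async transport at all,
  // which is where the even older hidden-iframe trick came in.
  return null;
}
```

In a browser of the time you would then wire up `onreadystatechange` on the returned object and call `open` and `send` by hand; the whole point of the later AJAX libraries was to hide exactly this boilerplate.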