If you see a spinning Pebble smartwatch above, you are using a WebGL-enabled browser! Otherwise you’ve fallen back to a precomputed, spinnable 360-degree format, which isn’t even close….
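The WebGL-or-fallback behavior described above comes down to simple feature detection: try to grab a WebGL context from a throwaway canvas, and fall back if that fails. Here is a minimal sketch of that idea (the function name and the `doc` parameter are my own, for illustration; a real page would just pass in the global `document`):

```javascript
// Sketch of WebGL feature detection with a graceful fallback.
// `doc` is passed in (rather than reading the global `document`) purely so
// the function can be exercised outside a browser; in a page you would call
// supportsWebGL(document).
function supportsWebGL(doc) {
  if (!doc || typeof doc.createElement !== "function") {
    return false; // no DOM available at all
  }
  try {
    const canvas = doc.createElement("canvas");
    // Older browsers exposed WebGL under the "experimental-webgl" name.
    return !!(
      canvas.getContext("webgl") || canvas.getContext("experimental-webgl")
    );
  } catch (e) {
    return false; // context creation can throw on some platforms
  }
}

// A page would then branch on the result, e.g.:
// supportsWebGL(document) ? startWebGLViewer() : startPrecomputed360Viewer();
```

The branching at the end is hypothetical, but this is the standard detection pattern: the `getContext` call returns `null` (or throws) when the browser or GPU driver can’t provide WebGL.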
The History from the Standpoint of Applications and the Web
If you are impatient or pressed for time you can skip down past this – but having a feel for the background really makes the appreciation sink in. The history with respect to mobile devices is in part 2, web 3D is coming up in part 3, and part 4 will wrap things up by explaining how WebGL in browsers should pull all app development into its fold and leave the other targets as afterthoughts.
Let’s start at the beginning (where most things usually start) with the schizophrenic, 1.5-steps-forward, 1.2-steps-back nature of the computer world, where the ecosystem can change overnight – a Darwinian process that works well in the end but is far from optimal with respect to efficiency. In the beginning everything was proprietary to processors… great job security for developers. Some reasonable guys created C to be “a portable machine language,” and for a while things moved closer to the write-once, modify-for-various-targets, good-to-go model. A useful, orthogonal model of development – 1.5 steps forward. And remember, these systems were very different under the hood – but conveniently were mostly crunching data and spitting out text. Over time things exploded as essential proprietary libraries were created for various targets that broke many of the most useful elements of this paradigm – only 0.5 steps back. These systems did in fact have different capabilities, processors, and memory footprints, so a universal abstraction layer couldn’t be generated in many cases. This did make some sense.
As we enter more modern times, hardware became nearly ubiquitous across operating systems – you probably had the same hardware underneath whether you were running Windows, Unix (including the Mac, being Unix of course), Linux, etc. However, the operating systems that ran a large number of the same applications (everyone wanted Photoshop to run the same way on all targets) were proprietary in nature with respect to their code. Thanks to the power of the fast new machines, code could be written with abstraction layers that made a more write-once, build-for-multiple-targets approach possible; however, many programmers still chose the easier, more feature-rich approach of writing code specifically for each target. For companies that did nothing but port software, this was a cash cow.
When the internet took off, the ultimate force multiplier for a homogeneous, ubiquitous development abstraction layer was in motion – oddly disguised as a hypertext viewer. In fits and starts, thanks primarily to Microsoft’s compatibility games (in Internet Explorer, Java, etc.), getting the web experience to be reliable and consistent was a process of never-ending testing. This did get fixed over time to a good degree. And as these things really settled down in recent years, the stage was almost set: cross-browser compliance was close enough. There were key missing ingredients, however.
Things were getting very close… 2.5 steps forward for the concept of write once, run anywhere. Everyone who was anyone wanted a web app built or customized for their company. Over the course of several years, web-based app frameworks started to take hold – and sprout up everywhere. The only things standing in the way of the universal “OS killer” app environment in browsers at this point were performance and the lack of a more unified, cohesive development experience (client-side functionality instead of everything on the server, support for multiple programming languages, something closer to symmetry between client-side and server-side code, etc.).
Many would think Flash might have helped bridge these gaps – but the binary executable blob concept simply never sat well with a generation of developers raised on the complete transparency of underlying code and implementation that came with the standard web development paradigm. Flash was doomed well before Steve Jobs pulled a Microsoft and didn’t permit it on iOS devices. Was he the benevolent guru of user experience, as he claimed – keeping the masses from poorly performing Flash apps? Of course not; his Bill Gates spidey senses were at work – Flash apps could be as strong as App Store apps and would be completely out from under Apple’s thumb. This is the same reason iOS devices don’t support WebGL – it makes uncontrolled, high-quality apps possible. But Apple will cave in with iOS just as Microsoft did with WebGL in Internet Explorer – we’ll talk more about this later.
Mobile phones came on the scene with Android and iOS and… the old days were revisited – proprietary “OS apps” were back in full swing. Let’s once again set back the clock and take a big step back…
More coming in part 2!