
WebGL is finally ready for prime time – watch everything change! – Part 1 of 4

[Embedded WebGL demo: a spinning Pebble smartwatch]

If you see a spinning Pebble smartwatch above, you are using a WebGL-enabled browser!  Otherwise you’ve fallen back to a precomputed, spinnable 360-degree format which isn’t even close….
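
For the curious, the fallback above boils down to a simple capability check in the browser. Here is a minimal sketch in JavaScript (the viewer and fallback function names are hypothetical, not the actual code behind this page):

    // Minimal WebGL capability check (viewer/fallback function names are illustrative)
    var canvas = document.createElement("canvas");
    var gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl");
    if (gl) {
        startWebGLViewer(canvas);       // hypothetical: launch the real-time 3D viewer
    } else {
        showPrecomputedSpinner();       // hypothetical: fall back to the prerendered 360-degree images
    }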

With the latest round of WebGL-capable browsers and the hard push toward optimizing JavaScript compilers – a JavaScript “assembly” subset that plays well with those compilers (asm.js), compilers that generate asm.js-compliant JavaScript (LLVM with Emscripten), and even direct support for LLVM-based virtual machines and totally safe, sandboxed code execution (NaCl, in Chrome only) – we are FINALLY ready for a 3D web. And much more than that: these technologies signal the death knell of traditional OS-specific apps….. again, FINALLY!
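
To make the asm.js piece concrete, here is a tiny hand-written sketch of the style of JavaScript that Emscripten emits (module and function names are made up for illustration): ordinary JavaScript annotated with type coercions such as |0 so that an optimizing engine can compile it ahead of time to near-native code.

    // Hand-written asm.js-style module (illustrative only, not real Emscripten output)
    function MiniModule(stdlib, foreign, heap) {
        "use asm";
        function add(a, b) {
            a = a | 0;              // coerce arguments to 32-bit integers
            b = b | 0;
            return (a + b) | 0;     // integer-typed result the engine can compile to machine code
        }
        return { add: add };
    }

    var mod = MiniModule(window, {}, new ArrayBuffer(0x10000));
    console.log(mod.add(2, 3));     // prints 5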

The History from the Standpoint of Applications and the Web

If you are impatient or pressed for time you can skip down past this – but having a feel for the background really makes the appreciation sink in. The history with respect to mobile devices is in Part 2, Web 3D is coming up in Part 3, and Part 4 will wrap things up by explaining how WebGL in browsers should pull all app development into its fold and leave the other targets as afterthoughts.

Let’s start at the beginning (where most things usually start) with the schizophrenic, 1.5-steps-forward, 1.2-steps-back nature of the computer world, where the ecosystem can change overnight – a Darwinian process that works well in the end but is far from optimal with respect to efficiency. In the beginning everything was proprietary to particular processors…. great job security for developers. Some reasonable guys created C to be “a portable machine language,” and for a while things moved closer to a write-once, modify-for-various-targets, good-to-go model. A useful, orthogonal model of development – 1.5 steps forward. And remember, these systems were very different under the hood – but conveniently they were mostly crunching data and spitting out text. Over time things exploded as essential proprietary libraries were created for various targets, breaking many of the most useful elements of this paradigm – only 0.5 steps back. These systems did in fact have different capabilities, processors, and memory footprints, so a universal abstraction layer couldn’t be built in many cases; to that extent the fragmentation made some sense.

As we entered more modern times, hardware became nearly ubiquitous across operating systems – you probably had the same hardware underneath whether you were running Windows, Unix (including the Mac, being Unix of course), Linux, etc. However, the operating systems that hosted a large number of the same applications (everyone wanted Photoshop to run the same way on all targets) remained proprietary with respect to their code. Thanks to the power of the fast new machines, code could be written with abstraction layers that made a write-once, build-for-multiple-targets approach possible; however, many programmers still chose the easier, more feature-rich approach of writing code specifically for each target. For companies that did nothing but port software, this was a cash cow.

When the internet took off, the ultimate force multiplier for a homogeneous, ubiquitous development abstraction layer was set in motion – oddly disguised as a hypertext viewer. In fits and starts, thanks primarily to Microsoft’s compatibility games (in Internet Explorer, Java, etc.), getting the web experience to be reliable and consistent was a process of never-ending testing. This did get fixed over time to a good degree, and as these things really settled down in recent years the stage was almost set: cross-browser compliance was close enough. There were key missing ingredients, however.

One of these missing ingredients was performance – you just couldn’t get close to native performance out of apps developed for the web without security risks a mile wide (AKA ActiveX). Those first-generation JavaScript/Ajax apps were really far from the ideal mark – a huge step back in look and feel from native apps. However, they were relatively safe and sandboxed (after some time). Most importantly, collaboration became available in a bigger way, since application installation was no longer an issue and a rate-limiting factor in deployment and acceptance. So they were good enough, and they created a strange, ever-expanding environment of cross-bred internet applications. In our guts, most developers realized these environments were first-class kludges, making the square-peg-in-a-round-hole metaphor look like an insane understatement. HTML documents were never intended (or designed) for application development, and this environment created a freak show of clunky technologies. But they became the irresistible de facto standard – their limited functionality was simply too useful to end users.

Things were getting very close…. 2.5 steps forward for the concept of write once, run anywhere. Everyone who was anyone wanted a web app built or customized for their company. Over the course of several years, web-based app frameworks started to take hold – and sprout up everywhere. The only things standing in the way of a universal “OS killer” app environment in browsers at this point were performance and a more unified, cohesive development experience (client-side functionality instead of everything on the server, support for multiple programming languages, something closer to symmetry between client-side and server-side code, etc.).

Side Note: 

Many would think Flash might have helped bridge these gaps – but the binary executable blob concept simply never sat well with a generation of developers raised on the complete transparency of underlying code and implementation that came with the standard web development paradigm. Flash was doomed well before Steve Jobs pulled a Microsoft and refused to permit it on iOS devices. Was he the benevolent guru of user experience he claimed to be, keeping the masses from poorly performing Flash apps? Of course not; his Bill Gates spidey senses were at work – Flash apps could be as strong as App Store apps and would be completely out from under Apple’s thumb. This is the same reason iOS devices don’t support WebGL – it makes uncontrolled, high-quality apps possible. But Apple will cave on iOS just as Microsoft did with WebGL in Internet Explorer – we’ll talk more about this later.

HTML5 was coming down the pike and JavaScript engine optimizations were being implemented – even rudimentary 3D using canvas and “software rendering” was coming along. Things were getting so close you could almost smell it in the air. And then the machines themselves turned into a wrench in the works (literally and figuratively)…….

Mobile phones came on the scene with Android and iOS and …… the old days were revisited – proprietary “OS apps” were back in full swing.  Let’s once again set back the clock and take a big step back…….

More coming in part 2!

Why Silver Bullets Tarnish Over Time – New Paradigms Getting Thrashed in the Field

This concept really applies to much of life, but to computer science in particular.

Why is it that new ways of doing things can have remarkable success at “the discovery stage” and end up being useful but disappointing in the field? Object-oriented programming turns into spaghetti class interdependencies and deep hierarchies in day-to-day use. A giant leap forward and two steps back. Software patterns turn into misuse and abuse of the Singleton pattern and inappropriate shoehorning. Extreme programming becomes an excuse to write unmaintainable code that passes each and every unit test. Simplicity becomes an excuse to build things so bare-bones as to become unusable. It seems like each new “promising technique” becomes troublesome once it goes mainstream. Still worth the trouble, but troublesome all the same.

You may find yourself scratching your head about why such issues aren’t discovered before these ideas gain mass acceptance. Why do they perform so well while they are in the “R&D” phase? Even when these same things are tested in limited circles on “real world problems,” they continue to seem like the next silver bullet.

The answer, unfortunately, is as hard to swallow as it is simple. Early on, the people developing and working with these new technologies are quite often some of the best in the field and the smartest guys out there. By the time a concept migrates to the other 90% of the coding world, it finds itself slammed against the middle and trailing ugly end of the bell curve. Those portions of the curve will often misunderstand and misuse the new tools and techniques presented to them. It’s during these phases that the backlash usually kicks in. Guidelines and rules must be created to keep the average programmer from hanging himself with too much rope.

Oftentimes new paradigms go through a few separate stages of development and acceptance. In stage 1 the really smart R&D types hit it. These people are actually great at creating new solutions that are both groundbreaking and elegant. Once these “conceptual entrepreneurs” have firmed things up, it is ready for stage 2. In stage 2 the brightest of the “real world group” are brave enough to tackle integrating these concepts into production code. They become evangelists of the techniques they have had so much success with. Stage 3 involves more risk takers embracing the techniques advocated by stage 2. They pore over the concepts involved, grok them, and integrate them into their code bases. It’s at stage 4 that the tide starts to turn: the middle of the bell curve is beginning to be pierced. The new mantra is gaining success. By stage 5 you have everyone accepting the new paradigm as “the way to go” based more on appeal to authority and peer pressure. At this stage a significant number may not understand the new techniques at all. They will advocate them but only graft them on top of their existing ways of doing things.

I like to think of myself as a stage 3 integrator. At that point things have hit my comfort zone and they are worth a try. I will do everything I can to understand a new technique, but I have by no means created it or been the true daredevil integrating it into production code at stage 2. Later down the pipe, when I see these concepts misused and abused, I avidly follow the “shoring up” techniques that keep people from blowing body parts off by misusing a new technique.

No matter what the latest silver bullet appears to be, it is merely getting its justly deserved 15 minutes of fame during its integration period. These techniques will live on, but the spotlight will move on. It’s important not to throw the baby out with the bath water. New techniques may disappoint after several years of growth – but in the end they often stay part of our process. Object orientation, design patterns, components, agile programming – the list goes on. Don’t forget that silver bullets are still bullets even when they tarnish. Just because a concept hasn’t completely lived up to its hype doesn’t mean it can’t be a great and useful part of your process.