For nearly my entire career, Microsoft has been trying to kill OpenGL and replace it with DirectX. I've never been entirely sure why. They used to support it (the first-gen Windows NT drivers were pretty good), but then they got struck by the DirectX thunderbolt and decided they weren't going to use an open standard already implemented by the biggest names in graphics; they were going to make their own.
There was even a period in the early 2000s where it looked like DirectX might win, because it really did perform better on that generation of hardware, perhaps because of the high degree of involvement Microsoft had with those vendors. Features were enabled for DirectX that took months or years to appear as OpenGL extensions, for no obvious reason. It was a symbiotic relationship that sold a lot of 'certified' graphics cards and copies of Windows. If it hadn't been for Quake III Arena, OpenGL probably would have died on Windows entirely. (All hail the great John Carmack.)
But then, oh then, there was the great Silverlight debacle. I don't think there's any concise way to explain that big ball of disappointment. Perhaps there are internal corporate projects with real-time media streams and 3D models zooming all over, but on the web it was a near-complete failure. A lot of people put their trust in Microsoft's promise that it would be cross-platform enough to replace Flash. A lot of code got written for that theoretical future, and then it only properly worked on IE.
Here's the secret about web standards, and HTML5 in particular... they lie dormant until some critical 'ubiquity' percentage is passed, and then they explode and are everywhere. Good web developers want their site to work on all browsers, but they also want to use the coolest stuff. Therefore they use the coolest stuff which is supported by all browsers.
Of course, "all browsers" is a stupidly wide range, so really a metric like "95-98% of all our visitors" is what gets used. Once a technology is available in 95% of people's browsers, that's the tipping point.
Silverlight didn't make it, because it didn't properly work on Linux or Mac. Or even most other Windows browsers. Flash used to enjoy this ubiquity, but Apple stopped that when it hired bouncers to keep Adobe out of iOS.
But the pressure kept building for access to the 3D card from inside a browser window. 3D CSS transforms meant the browser itself was positioning DIV elements in 3D space (and using the 3D card for compositing), and this deepened the ties between the browser and the GPU. Microsoft had already put all its eggs in the Silverlight basket, but everyone else wanted an open standard that could be incorporated quickly into HTML without too much trouble. There really wasn't any choice.
In a couple of hours, a delivery person should be dropping off a shiny new Google Nexus smartphone. If you run a special debug command, WebGL gets activated. On iOS, the WebKit component will do WebGL in certain modes (ads) but not in the actual browser. The hardware is already there (3D chips drive the display of many smartphones) it's just a matter of APIs.
WebGL's "ubiquity" number is already higher than any equivalent 3D API. The openness of the standard guaranteed that. And as GPUs get more powerful, they look more and more like the Silicon Graphics workstations that OpenGL was written on, and less like the 16-bit Voodoo cards that DirectX was optimized for. OpenGL has been accused of being "too abstract", but that feature is now an advantage because of the range of hardware it runs on.
Lastly, WebGL is OpenGL-ES. That suffix makes a lot of difference. The 'ES' standard is something of a novelty in the world of standards... it's a reduction of the previous API set. Any method that performed an action that was a subset of another method was removed. So all the 'simple' calls went away, and left only the 'complex' versions. This might sound odd, but it's a stroke of genius. Code duplication is removed. There are fewer API calls to test and debug. And you _know_ the complex methods work consistently (aren't just an afterthought) because they're the only way to get anything done.
Rebuilding a 'simple' API of your own on top of the 'complex' underlying WebGL API is obviously the first thing everyone does. That's fine. There are a dozen approaches for how to do that.
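As a toy sketch of what that rebuilding looks like (every name here is invented for illustration, not any real library): an immediate-mode-style wrapper that collects vertices the "simple" old way, then hands them to the buffer-oriented calls that OpenGL-ES/WebGL actually kept.

```javascript
// Hypothetical sketch: a glVertex-style "simple" API rebuilt on top of
// the "complex" buffer calls that survived the ES reduction.
class SimpleBatch {
  constructor() {
    this.verts = [];            // accumulated x, y, z triples
  }
  vertex(x, y, z) {             // the old "simple" call, rebuilt in JS
    this.verts.push(x, y, z);
    return this;                // chainable, for convenience
  }
  toBuffer(gl) {                // hand the batch to the "complex" API
    const data = new Float32Array(this.verts);
    const buf = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buf);
    gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);
    return buf;                 // ready for vertexAttribPointer / drawArrays
  }
}

// Building the vertex data needs no GL context at all:
const tri = new SimpleBatch()
  .vertex(0, 1, 0)
  .vertex(-1, -1, 0)
  .vertex(1, -1, 0);
// tri.verts is now [0, 1, 0, -1, -1, 0, 1, -1, 0]
```

The point of the sketch: the "simple" layer is just data collection, which is exactly why the standard could safely drop it and leave it to libraries.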
In short, WebGL is like opening a box of 'Original' LEGO, when you just had four kinds of brick in as many colours. You can make anything with enough of those bricks. Anything in your imagination. Sure, the modern LEGO has pre-made spaceship parts that just snap together, but where's the creativity? "Easy" is not the same as "Better".
There will quickly come the WebGL equivalent of jQuery. Now that the channel to the hardware is open, we can pretty up the JavaScript end with nice libraries. (Mine is coming along, although optimized in the 'GPU math' direction, which treats textures as large variables in an equation.) There are efforts like "three.js" which make WebGL just one renderer type (although the best) under a generic 3D API that also works cross-platform on VRML and Canvas. (Although you have fewer features, to maintain compatibility.)
The point is, while the WebGL API is a bit of a headspin (and contains some severe rocket science in the form of homogeneous co-ordinates and projective matrix transforms) you almost certainly won't have to use it directly. The ink on the standard is barely dry, and there are at least five independent projects to create complete modeler/game engines. Very shortly we will be "drawing" 3D web pages and pressing the publish button. The prototypes are already done, and they work.
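For the curious, the "rocket science" is smaller than it sounds. Here's a minimal sketch in plain JavaScript: a point in homogeneous co-ordinates pushed through a perspective projection matrix (the same shape as the classic gluPerspective matrix), followed by the divide-by-w that turns clip space into normalized device co-ordinates. The function names are mine, not WebGL's.

```javascript
// Build a row-major 4x4 perspective projection matrix, in the style of
// the classic gluPerspective. A sketch for illustration, not a library.
function perspective(fovyRad, aspect, near, far) {
  const f = 1 / Math.tan(fovyRad / 2);
  return [
    [f / aspect, 0, 0, 0],
    [0, f, 0, 0],
    [0, 0, (far + near) / (near - far), (2 * far * near) / (near - far)],
    [0, 0, -1, 0],                 // this row puts -z into w: perspective!
  ];
}

// Multiply a homogeneous point [x, y, z, w] by the matrix.
function transform(m, [x, y, z, w]) {
  return m.map(row => row[0] * x + row[1] * y + row[2] * z + row[3] * w);
}

const proj = perspective(Math.PI / 2, 1, 1, 100); // 90° fov, square viewport
const clip = transform(proj, [0, 0, -5, 1]);      // a point 5 units ahead
const ndc = clip.map(c => c / clip[3]);           // the homogeneous divide
// After the divide, visible points land inside the [-1, 1] cube.
```

The homogeneous divide is the whole trick: w ends up holding the distance from the camera, so dividing by it is what makes far things small.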