Monthly Archives: January 2007

How SEO will kill Google – or the problem with backlinks

When the web was young, backlinks were the perfect way to measure a site's popularity. Sites with more backlinks were more popular, and sites with backlinks from more popular sites were more popular still, just as you would expect. And all was well.
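To make that recursion concrete, here is a minimal sketch of the "backlinks from popular sites count for more" idea – a simplified PageRank-style iteration, not Google's actual algorithm. The site names and link data are invented for illustration.

```cpp
// A simplified PageRank-style iteration: a site's score is fed by the scores
// of the sites linking to it. Site names and links below are invented.
#include <cstdio>
#include <map>
#include <string>
#include <vector>

int main() {
    // outlinks[a] = the sites that a links to
    std::map<std::string, std::vector<std::string>> outlinks = {
        {"bigportal", {"smallblog", "shop"}},
        {"smallblog", {"shop"}},
        {"shop",      {"bigportal"}},
    };

    const double damping = 0.85;
    std::map<std::string, double> rank;
    for (const auto& site : outlinks) rank[site.first] = 1.0;

    // each pass, a site hands its current rank out evenly across its outlinks
    for (int iter = 0; iter < 20; ++iter) {
        std::map<std::string, double> next;
        for (const auto& site : outlinks) next[site.first] = 1.0 - damping;
        for (const auto& site : outlinks) {
            double share = rank[site.first] / site.second.size();
            for (const auto& target : site.second)
                next[target] += damping * share;
        }
        rank = next;
    }

    for (const auto& site : rank)
        printf("%-10s %.3f\n", site.first.c_str(), site.second);
}
```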

But in today's world, no matter how much bleach you apply to your white hat, you can't get around that little bit of knowledge you've acquired: backlinks = success on Google. By knowing this and "exploiting" it, you will knock out the websites that don't attempt any "SEO". Your white hat may look white, but really it's gray. You will do all the right things to generate backlinks. Invariably, as much as Google would hate to admit it, this breeds a competitive landscape. "You can't win if you don't play." It has now become essential to "ethically" build backlinks in order to rank in Google's organic search results. And this is very bad for the quality of those results.

No matter how innocent you try to be, the cat is out of the bag. It is far too easy to get a large number of "encouraged" backlinks to a given site. When you search on Google, the really big players come up first. And this is good. They have such a huge number of backlinks that they can pretty much be seen as genuine. It's the middle ground – the "long tail" – where the problems are. If a smart developer/designer releases a WordPress theme with a backlink in it and that theme is used by several hundred people, they get lots of PageRank. These WordPress themes can end up on popular blogs with plenty of PageRank themselves. This is "white hat" activity, but it's really intelligently gaming the system. The SEO gurus know plenty of tricks to get backlinks and PR.
Because of this, many of Google's more specific "long tail" searches are filled with SEO'd sites, both white hat and black hat spam. I know quite a few people who are actually switching to other search engines because of the backlink spam. "Quality backlinks" are just too easy to get, especially if you are willing to pay money for them. And that goes on every day.

Backlinks were a good idea at some point. Now they only show that you are trying really hard. The people who talk the loudest and the most often aren't always the best, and proliferating backlinks to your site with all the right keywords to game the system doesn't mean your content is the best either. It just means that you are good at generating backlinks.

Until spiders have a clue what they are parsing, it will be hard to impossible to solve this problem and still use backlinks.
Social networks like digg.com, stumbleupon.com and reddit.com are "the next new thing". By having people vote on pages, the crap filters to the bottom. This of course will have its own set of problems – but it will invariably be far better and more accurate than backlinks. At the end of the day some content will be brilliant and good – and still be undiscovered. However, more data will yield better results.
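As a toy illustration of that voting idea – not how digg or reddit actually score things – here is a sketch that simply orders pages by net reader votes. The URLs and vote counts are made up.

```cpp
// Toy vote-based ranking: order pages by net reader votes instead of by
// who links to them. URLs and vote counts are invented for illustration.
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

struct Page {
    std::string url;
    int upvotes;
    int downvotes;
};

int main() {
    std::vector<Page> pages = {
        {"example.com/useful-article",       120,  4},
        {"example.com/keyword-stuffed-spam",   3, 40},
        {"example.com/decent-post",           25, 10},
    };

    // "the crap filters to the bottom": highest net score first
    std::sort(pages.begin(), pages.end(), [](const Page& a, const Page& b) {
        return (a.upvotes - a.downvotes) > (b.upvotes - b.downvotes);
    });

    for (const auto& p : pages)
        printf("%5d  %s\n", p.upvotes - p.downvotes, p.url.c_str());
}
```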

How do you really judge the worth of a website? Is it how many people like it? The demographics of the people linking to it? Its worth for specific topics? How long people spend on the site itself? These are fascinating questions that search engines will be forced to answer in the coming years – or they will become inconsequential.

Why Silver Bullets Tarnish Over Time – New Paradigms Getting Thrashed in the Field

This concept really applies to much of life, but to computer science in particular.

Why is it that new ways of doing things can have remarkable success when they are at "the discovery stage", yet end up being useful but disappointing in the field? Object oriented programming turns into spaghetti class interdependencies and deep hierarchies in day-to-day use – a giant leap forward and two steps back. Software patterns turn into misuse and abuse of the Singleton pattern and inappropriate shoehorning. Extreme programming becomes an excuse to write unmaintainable code that passes each and every unit test. Simplicity becomes an excuse to build things so bare-bones as to become unusable. It seems like each new "promising technique" becomes troublesome once it goes mainstream. Still worth the trouble, but troublesome all the same.
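For a concrete taste of that Singleton abuse, here is a hedged sketch (the class and its fields are invented for illustration): a "pattern" that is really just a global dumping ground, quietly coupling everything that touches it.

```cpp
// A sketch of the classic Singleton misuse: the pattern becomes a grab bag
// of unrelated global state. Class name and fields are invented.
#include <iostream>
#include <string>

class AppGlobals {
public:
    static AppGlobals& instance() {
        static AppGlobals g;   // single lazily-constructed instance
        return g;
    }

    // Unrelated mutable state accretes here over time, so every class that
    // touches AppGlobals is silently coupled to every other one.
    std::string currentUser;
    int         frameCount = 0;
    bool        debugMode  = false;

private:
    AppGlobals() = default;
    AppGlobals(const AppGlobals&) = delete;
    AppGlobals& operator=(const AppGlobals&) = delete;
};

int main() {
    AppGlobals::instance().currentUser = "alice";
    AppGlobals::instance().frameCount++;
    std::cout << AppGlobals::instance().currentUser << "\n";
}
```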

You may find yourself scratching your head about why such issues aren't discovered before these techniques gain mass acceptance. Why, when things are in the "R&D" phase, do they perform so well? Even when these same things are tested in limited circles on "real world problems", they continue to seem like the next silver bullet.

The answer, unfortunately, is as hard to swallow as it is simple. Early on, the people developing and working with these new technologies are quite often some of the best and smartest in the field. By the time a concept migrates to the other 90% of the coding world, it finds itself slammed against the middle and the ugly trailing end of the bell curve. Those portions of the curve will oftentimes misunderstand and misuse the new tools and techniques presented to them. It's during these phases that the backlash usually kicks in. Guidelines and rules must be created to keep the average programmer from hanging himself with too much rope.

Oftentimes new paradigms go through a few separate stages of development and acceptance. In stage 1 the really smart R&D types hit it. These people are actually great at creating new solutions that are both groundbreaking and elegant. Once these "conceptual entrepreneurs" have firmed things up, it is ready for stage 2. In stage 2 the brightest of the "real world group" are brave enough to tackle integrating these concepts into production code. They become evangelists of the techniques they have had so much success with. Stage 3 involves more risk takers embracing the techniques advocated by stage 2. They pore over the concepts involved, grok them and integrate them into their code bases. It's at stage 4 that the tide starts to turn. At stage 4 the middle of the bell curve is beginning to be pierced. The new mantra is gaining traction. By stage 5 you have everyone accepting the new paradigm as "the way to go", based more on appeal to authority and peer pressure. At this stage a significant number may not understand the new techniques at all. They will advocate them, but only graft them on top of their existing ways of doing things.

I like to think of myself as a stage 3 integrator. At that point things have hit my comfort zone and they are worth a try. I will do everything I can to understand a new technique, but I have by no means created it, or been the true daredevil integrating it into production code at stage 2. Later down the pipe, when I see these concepts misused and abused, I avidly follow the "shoring up" techniques that keep people from blowing body parts off by misusing a new technique.

No matter what the latest silver bullet appears to be, it is merely getting its justly deserved 15 minutes of fame during its integration period. These techniques will live on, but the focus will move on. It's important not to throw the baby out with the bath water. New techniques may disappoint after several years of growth – but in the end they often stay part of our process. Object orientation, design patterns, components, agile programming – the list goes on. Don't forget that silver bullets are still bullets, even when they tarnish. Just because a concept hasn't completely lived up to its hype doesn't mean it can't be a great and useful part of your process.

24 years of game programming: thrills, chills and spills: part 2

If you haven’t read the first part of this article, you’ll probably want to check it out here.

1995 – Legend Entertainment – My First Industry Years

I was finally working full-time in the game industry at Legend Entertainment and couldn't have been more thrilled! No more sandwiching game coding in between contracting gigs. I was with a company that had produced a large number of award-winning titles, gotten them published and distributed – all with a staff of under 20 people!

I started out coding in Watcom C on a daily basis, working in crunch time on a game called Star Control 3. I was thrilled to be working with industry veterans from the Zork era! We even had an in-house testing team. I can honestly say I've never worked at a company that ran as smoothly and with as little contention as Legend did during my time there. There were blips on the radar, but overall development fell into place extremely well during my years there. This is a testament to team dynamics: get a good team that works well together and keep them together.

I learned so much while at Legend. These guys were experts at what they did. It was amazing to sit in on the entire development process – creative, technical and otherwise. A lot of my best practices were learned at Legend, and over the years that never left me. We're talking about game development done the way everyone dreams of it being done. Projects would oftentimes have just a few coders on them for most of their development cycle. I never saw a game canceled while I was at Legend – and had never heard of one being canceled before I joined them. They ran lean and mean and couldn't afford that kind of wasteful slack. Needless to say, this had an extremely positive effect on morale.

At the time we were writing DOS-based games that used SVGA libraries to build 256-color 640×480 games. The WATCOM compiler let us overcome DOS's one-megabyte memory barrier and access all of extended memory. We had to support a dizzying array of low-tech video cards and a collection of VESA video modes. But boy was it fun.

From a philosophy standpoint, I finally started to get behind the idea that old rules change. Optimization was still king, but design came first. Things I had previously considered wasteful were trivial on 486 systems with 4 megs of available memory. I began to realize that when working with a group of programmers, design and communication were of paramount importance. These things had to be worked at; they didn't just fall out naturally.

By 1996 I was coding in C++ in Microsoft Visual C++, and we were working on a game library using DirectX 5 for Windows 95. DirectX was all the rage. The access to hardware sprites and features was a major boon. OpenGL was at the time still too high-level (read: slow) and lacked ubiquitous support. Of course we weren't using DirectX for Direct3D at the time – Direct3D was a few revs away from being usable. But uniform driver support for 2D sprites and pages was a major plus in itself. However, 3D was on the horizon and I was very curious.

While I was at Legend I worked on 3 games and saw 5 games ship – all with a group of under 20 people, including the testing department, customer support and marketing! I owe them a huge debt of gratitude for everything they taught me and for breaking me into the game industry.

By 1997 I would find myself somewhere new. I had a hunger for cutting-edge 3D, and a company in NJ was doing that and more. They were actually producing their own OpenGL-compliant hardware for the military, using an off-the-shelf Glint chip (from 3D Labs) and a proprietary in-house ASIC they had built for voxelized 3D terrain rendering. I was now at ASPI!

1997 – ASPI

ASPI was another opportunity to work with a small group of brilliant people. The engineers who worked on digital logic design considered machine language high-level. They used custom software to produce their own ASICs, soldering them to the boards in-house. I had already become very sold on object oriented development at the time – I had definitely drunk the Kool-Aid. I picked up a copy of Design Patterns by the Gang of Four and fell in love with it. Years later I would spend plenty of time examining the ways that novice coders abused OOP and design patterns, but in 1997 I was still loving learning all the ins and outs. I picked the few patterns that fit the API we were developing and got to work. It took a bit, but I got everyone on board with tightly optimized C++ (which at the time many saw as an oxymoron). I had realized that C++ could give you the low-level control to maintain C-like speed. And I liked it – a lot.
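To illustrate the kind of "tightly optimized C++" I mean – this is just a sketch in the spirit of the argument, not ASPI's actual code – the point was that small inline value types compile down to the same instructions the equivalent hand-written C would:

```cpp
// A sketch of "C++ abstraction at C speed": a small value type with inline
// member functions, no virtuals and no heap allocation. The compiler can
// flatten calls like dot() into the same code a hand-written C version gets.
// This is an illustrative example only.
#include <cstdio>

struct Vec3 {
    float x, y, z;

    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s)       const { return {x * s, y * s, z * s}; }
    float dot(const Vec3& o)      const { return x * o.x + y * o.y + z * o.z; }
};

int main() {
    Vec3 a{1.0f, 2.0f, 3.0f};
    Vec3 b{4.0f, 5.0f, 6.0f};
    Vec3 c = a + b * 2.0f;                 // all inlined, no hidden overhead
    printf("%.1f %.1f %.1f  dot=%.1f\n", c.x, c.y, c.z, a.dot(b));
}
```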

I was now using OpenGL and MESA (an open-source OpenGL implementation), and accepting the fact that on modern machines OpenGL was definitely worth the minor performance loss. Back in those days there were still camps that wanted much lower-level access to graphics hardware to eke out every last bit of power. We even got to write our own custom drivers for MESA under SCO Unix.

The cards were awesome and we ended up calling them the "True Terrain 3D". The military had a contract to buy them up and deploy them. They were able to ingest DTED data and use LODed voxel planes to create amazing-looking terrain. We interleaved access to the frame and depth buffers with the Glint chip and OpenGL/MESA to add polygonal features. This was in 1997, and at the time polygonal 3D cards couldn't come close to generating the terrain that the custom card set could – not in the under-$20,000 price range, at least.
I loved everyone I was working with, but in the end even cutting-edge high tech couldn't keep me from wanting to go back into the games industry. Somehow high tech and "serious games" were exciting, but games were still in my blood.
By 1999 I was back in the industry working on a submarine sim at Aeon Entertainment in Baltimore.

But we’ll cover that in part 3….