How SEO will kill Google – or the problem with backlinks

When the web was young, backlinks were the perfect way to measure a site’s popularity. Sites with more backlinks were more popular, and sites with backlinks from more popular sites were more popular still, just as you would expect. And all was well.

But in today’s world, no matter how much bleach you apply to your white hat, you can’t get around that little bit of knowledge you’ve acquired: backlinks = success on Google. By knowing this and “exploiting” it, you will knock out the websites that don’t attempt any “SEO”. Your white hat may look white, but really it’s gray. You will do all the right things to generate backlinks. As much as Google would hate to admit it, this breeds a competitive landscape: “You can’t win if you don’t play.” It has become essential to “ethically” build backlinks in order to rank in Google’s organic search results. And that is very bad for the quality of those results.

No matter how innocent you try to be, the cat is out of the bag. It is far too easy to get a large number of “encouraged” backlinks to a given site. When you search on Google, the really big players come up first. And this is good: they have such a huge number of backlinks that they can pretty much be seen as genuine. It’s the middle ground, the “long tail”, where the problems live. If a smart developer/designer releases a WordPress theme with a backlink in it, and that theme is used by several hundred people, they get lots of PageRank; those themes may run on popular blogs with plenty of PageRank of their own. This is “white hat” activity, but it’s really intelligently gaming the system. The SEO gurus know plenty of tricks like this for building backlinks and PageRank.
Because of this, many of Google’s more specific “long tail” searches are filled with SEO’d sites, white hat and black hat spam alike. I know quite a few people who are switching to other search engines because of this backlink spam. “Quality backlinks” are just too easy to get, especially if you are willing to pay money for them. And that goes on every day.

Backlinks were a good idea at some point. Now they mostly show that you are trying really hard. The people who talk the loudest and most often aren’t always the best, and proliferating backlinks to your site with all the right keywords doesn’t mean your content is the best either. It just means you are good at generating backlinks.

Until spiders have a clue what they are actually parsing, it will be hard, if not impossible, to solve this problem while still relying on backlinks.
Social networks like digg.com, stumbleupon.com and reddit.com are “the next new thing”. By having people vote on pages, the crap filters to the bottom. This of course will have its own set of problems, but it will invariably be far better and more accurate than backlinks. At the end of the day some content will be brilliant and still go undiscovered. However, more data will yield better results.

How do you really judge the worth of a website? Is it how many people like it? The demographics of the people linking to it, weighed per topic? How long people spend on the site itself? These are fascinating questions that search engines will be forced to answer in the upcoming years, or they will become inconsequential.

Why Silver Bullets Tarnish Over Time – New Paradigms Getting Thrashed in the Field

This concept really applies to much of life, but to computer science in particular.

Why is it that new ways of doing things can have remarkable success at “the discovery stage” and end up being useful but disappointing in the field? Object-oriented programming turns into spaghetti class interdependencies and deep hierarchies in day-to-day use. A giant leap forward and two steps back. Software patterns turn into misuse and abuse of the Singleton pattern and inappropriate shoehorning. Extreme programming becomes an excuse to write unmaintainable code that passes each and every unit test. Simplicity becomes an excuse to build things so bare-bones as to be unusable. It seems like each new “promising technique” becomes troublesome once it goes mainstream. Still worth the trouble, but troublesome all the same.

You may find yourself scratching your head about why such issues aren’t discovered before these ideas gain mass acceptance. Why do they perform so well while in the R&D phase? Even when tested in limited circles on “real world problems”, they continue to seem like the next silver bullet.

The answer, unfortunately, is as hard to swallow as it is simple. Early on, the people developing and working with these new technologies are quite often some of the best and smartest in the field. By the time a concept migrates to the other 90% of the coding world, it finds itself slammed against the middle and trailing end of the bell curve. Those portions of the curve will often misunderstand and misuse the new tools and techniques presented to them. It’s during these phases that the backlash usually kicks in. Guidelines and rules must be created to keep the average programmer from hanging himself with too much rope.

Often, new paradigms go through a few separate stages of development and acceptance. In stage 1 the really smart R&D types hit it. These people are great at creating new solutions that are both groundbreaking and elegant. Once these “conceptual entrepreneurs” have firmed things up, it is ready for stage 2. In stage 2, the brightest of the “real world group” are brave enough to tackle integrating these concepts into production code. They become evangelists of the techniques they have had so much success with. Stage 3 involves more risk takers embracing the techniques advocated by stage 2. They pore over the concepts involved, grok them and integrate them into their code bases. It’s at stage 4 that the tide starts to turn: the middle of the bell curve is beginning to be pierced, and the new mantra is gaining acceptance. By stage 5 everyone accepts the new paradigm as “the way to go”, based more on appeal to authority and peer pressure. At this stage a significant number may not understand the new techniques at all. They will advocate them but only graft them on top of their existing ways of doing things.

I like to think of myself as a stage 3 integrator. At that point things have hit my comfort zone and are worth a try. I will do everything I can to understand a new technique, but I have by no means created it, nor been the true daredevil integrating it into production code at stage 2. Later down the pipe, when I see these concepts misused and abused, I avidly follow the “shoring up” techniques that keep people from blowing body parts off by misusing a new technique.

No matter what the latest silver bullet appears to be, it is merely getting its justly deserved 15 minutes of fame during its integration period. The technique will live on, but the focus will move on. It’s important not to throw the baby out with the bathwater. New techniques may disappoint after several years of growth, but in the end they often stay part of our process: object orientation, design patterns, components, agile programming, the list goes on. Don’t forget that silver bullets are still bullets even when they tarnish. Just because a concept hasn’t completely lived up to its hype doesn’t mean it can’t be a great and useful part of your process.

24 years of game programming: thrills, chills and spills: part 2

If you haven’t read the first part of this article, you’ll probably want to check it out here.

1995 – Legend Entertainment – My First Industry Years

I was finally working full-time in the game industry at Legend Entertainment and couldn’t have been more thrilled! No more sandwiching game coding in between contracting gigs. I was with a company that had produced a large number of award-winning titles, gotten them published and distributed, all with under 20 people!

I started out coding in Watcom C on a daily basis, working crunch time on a game called Star Control 3. I was thrilled to be working with industry veterans from the Zork era! We even had an in-house testing team. I can honestly say I’ve never worked at a company that ran as smoothly and with as little contention as Legend did. There were blips on the radar, but overall, development fell into place extremely well during my years there. That is a testament to team dynamics: get a good team that works well together and keep them together.

I learned so much while at Legend. These guys were experts at what they did. It was amazing to sit in on the entire development process: creative, technical and otherwise. A lot of my best practices were learned at Legend, and over the years that has stayed with me. We’re talking about game development done the way everyone dreams of it being done. Projects would often have just a few coders on them for most of their development cycle. I never saw a game canceled while I was at Legend, and had never heard of one being canceled before I joined them. They ran lean and mean and couldn’t afford that kind of wasteful slack. Needless to say, this had an extremely positive effect on morale.

At the time we were writing DOS-based games that used SVGA libraries to build 256-color 640×480 games. The Watcom compiler let us overcome DOS’s one-meg memory barrier and access all of extended memory. We had to support a dizzying array of low-tech video cards and a collection of VESA video modes. But boy was it fun.

From a philosophy standpoint, I finally started to accept that old rules change. Optimization was still king, but design came first. Things I had previously considered wasteful were trivial on 486 systems with 4 megs of available memory. I began to realize that when working with a group of programmers, design and communication were of paramount importance. These things had to be worked at; they didn’t just fall out naturally.

By 1996 I was coding in C++ in Microsoft Visual C++ and we were working on a game library using DirectX 5 for Windows 95. DirectX was all the rage. The access to hardware sprites and features was a major boon. OpenGL was at the time still too high-level (read: slow) and lacked ubiquitous support. Of course we weren’t using DirectX for Direct3D at the time; Direct3D was a few revs away from being usable. But uniform driver support for 2D sprites and pages was a major plus in itself. 3D was on the horizon, however, and I was very curious.

While I was at Legend I worked on 3 games and saw 5 games ship, all with a group of under 20 people, including the testing department, customer support and marketing! I owe them a huge debt of gratitude for everything they taught me and for breaking me into the game industry.

By 1997 I would find myself somewhere new. I had a hunger for cutting-edge 3D, and a company in NJ was doing that and more. They were producing their own OpenGL-compliant hardware for the military using an off-the-shelf Glint chip (from 3D Labs) and a proprietary in-house ASIC they had built for voxelized 3D terrain rendering. I was now at ASPI!

1997 – ASPI

ASPI was another opportunity to work with a small group of brilliant people. The engineers who worked on digital logic design considered machine language high level. They used custom software to produce their own ASICs, soldering them to the boards in-house. I had already become very sold on object-oriented development by then; I had definitely drunk the Kool-Aid. I picked up a copy of Design Patterns by the Gang of Four and fell in love with it. Years later I would spend plenty of time examining the ways that novice coders abused OOP and design patterns, but in 1997 I was still loving learning all the ins and outs. I picked the few patterns that fit the API we were developing and got to work. It took a bit, but I got everyone on board with tightly optimized C++ (which at the time many saw as an oxymoron). I had realized that C++ could give you the low-level control to maintain C-like speed. And I liked it – a lot.

I was now using OpenGL and Mesa (an open-source OpenGL implementation) and accepting the fact that on modern machines, OpenGL was definitely worth the minor performance loss. Back in those days there were still camps that wanted much lower-level access to graphics hardware to eke out every last bit of power. We even got to write our own custom drivers for Mesa under SCO Unix.

The cards were awesome, and we ended up calling them “True Terrain 3D”. The military had a contract to buy them up and deploy them. They could ingest DTED data and use LODed voxel planes to create amazing-looking terrain. We interleaved access to the frame and depth buffers between the Glint chip and OpenGL/Mesa to add polygonal features. This was in 1997, and at the time polygonal 3D cards couldn’t come close to generating the terrain that the custom card set could. Not in the under-$20,000 price range at least.
I loved everyone I was working with, but even cutting-edge high tech couldn’t keep me from wanting to go back into the games industry. High tech and “serious games” were exciting, but games were still in my blood.
By 1999 I was back in the industry working on a submarine sim at Aeon Entertainment in Baltimore.

But we’ll cover that in part 3….

How null breaks polymorphism: or the problem with null: part 2

If you haven’t read part one you really need to do that first. It’s here.

I got a lot of interesting feedback on part 1 of this topic and found I needed to further explain myself in certain areas.

Two initial responses to issues brought to me from part 1:

1. Typed languages

In languages that aren’t typed at all, null is no more of a problem than any other reassignment, since they never care about types. They can all result in similar issues, which in my mind is itself a problem. I am definitely a huge proponent of compile-time type checking, which is perhaps why I seem so hard on null in part 1. Compile-time checks should be able to help out here.

2. When functions might not return a value

There are plenty of times when a function may or may not return a value. If that is the case, the return type should reflect it. Using null as a “magic number” is not the ideal solution in my mind. I’d much rather see a return type that forces the issue to be very clear. It may seem non-standard, but a type masquerading as a collection that potentially holds one item seems ideal. Almost any programmer will realize he needs to check the size of a collection before trying to access its first element. This is no more cumbersome than checking for null, but it is logically intuitive. In a language like C++ this can easily be optimized with templates so there is no performance hit. A container is intuitive precisely because it can be empty; that is very straightforward.

Making the problem even more clear:

I often find it easier to express programming concepts in real-world terms. This helps reduce them to the absurd when appropriate. It doesn’t always work, but it can often help you look at the problem from a different perspective.

Let’s take the concept of typical null check behavior and attempt to map it to a real world procedure.

We are going to explore John teaching Norm to drive a car. The following may sound a little familiar.

John: “First you are going to need to make sure you have a car. Do you have a car? If you don’t have a car, just return to what you were doing. I won’t be able to teach you to drive a car.”

Norm: “I have a car.”

John: “Let’s call that car normsCar. Check to see if your car has a door. We’ll call it normsCarDoor. Does normsCar have a normsCarDoor?”

Norm: “Yes”

John: “Great, but if you don’t just skip the rest of this – I won’t be able to tell you how to drive a car.”

Norm: “I have a car door.”

John: “Once you open the door, check and see if it has a seat; we’ll call it normsComfySeat. If you don’t have a seat, skip the rest of this – I won’t be able to tell you how to drive a car.”

Norm: “I have a seat.”

John: “Things have changed scope a bit – can you check whether you have a door again for me? We called it normsCarDoor…”

I think you can see where this is going. Classes have contracts. It should be reasonable to talk about a car that always has a door and a seat without having to neurotically check at all times.

Unlike the real world, when we code we make new things up all the time. So even though seats and doors can be reasonably inferred on cars the things that a UserAgentControl may or may not have probably won’t be obvious. Does the UserAgentControl have a ThrottleManager all the time? If I have a non-nullable class type I can check by looking at the class. If I get it wrong for some reason maybe I can have the compiler issue a warning. Maybe I can be forced to “conjugate” it with a special syntax every time I use it to help me remember (“.” vs. “->” in C++). Or a naming convention.

Why is this such an overarching problem?

It may seem like griping over a simple check for a thing’s existence. Except this check can conceivably apply to each and every object I have (or don’t have). That makes the issue enormous. A lot of the time, programmers put in checks for null when it seems appropriate or when unit testing throws an exception at run-time. That sounds a whole lot like something statically typed languages are supposed to protect us against. The unchecked null is by far the most common runtime error, and the “overchecked” null is one of the most prevalent unintentional code obfuscation techniques.

Solutions that don’t involve changing existing languages:

  1. If your language supports the concept out of the box (C++, Spec#, F#, OCaml, Nice, etc.), or it can be built with templates or other mechanisms, use not-nullable types. Use these types whenever possible. If your language doesn’t support them, rely on number 2 alone.
  2. Create a simple naming convention (don’t go Hungarian on me) to discern what is nullable and what isn’t. This is a fundamental concept, and it should be obvious every time a value, instance or object is accessed. Use it to compensate for your language’s deficiency, much as people prefix private variables in languages that have no mechanism for hiding access to variables and member functions.
  3. Check for null as early as possible in the food chain, and prefer to call methods that are granular and don’t have to check for themselves. That means making every parameter passed to a routine get used. If a parameter is optional, write a new routine. Make the types passed to these routines not-nullable if the language supports it.
  4. When it makes sense, prefer empty collections to collections containing nothing but null, for obvious reasons.

Wrap-up:

There is no perfect solution. But so many times in code I see null being checked, or not checked, and am left wondering: is the check gratuitous? Is this a runtime error waiting in the wings? Unless it’s commented, or I check its usage, I get that typical uneasy feeling. “Bound to catch it at runtime with unit testing” is not the answer I want to hear.

I think a few simple practices and conventions could get this off our plate so we can get back to the problem at hand. Solving actual problems.

Afterthoughts:

Some other articles that beat me to the punch on some of these concepts:

  1. Let’s Reconsider That Offensive Coding from Michael Feathers.
  2. Null: The Runtime Error Generator by Blaine Buxton

Empty containers solve this problem nicely for return types. Also, as many readers have mentioned, monads are extremely useful for solving this problem as well. But once again: stop propagating unneeded nulls and similar “safer null-like” structures as soon as possible. As Michael Feathers states, it’s just offensive code.

24 years of game programming: thrills, chills and spills: part 1

Originally I was thinking about calling this article 24 years of game programming: observations and lessons learned. Somehow the title seemed much too stuffy for an entry about something I’ve loved so much over the years.

In this three-part series I trace my 24 years of game programming: from machine language in 1982, Pascal in 1987, C in 1992 and C++ in 1995, to using “high level” APIs like OpenGL in ’97, catching the pattern craze in ’98 and the component mantra in ’99, coding for 3 consoles at once in 2000, and moving into Java, C# and JavaScript on top of C++ in the 21st century. And more!

Let’s go back to 1982:

This is where it all began for me. I had received my first computer, a VIC-20, and dreamed of everything I could do with it. William Shatner had advertised it as a gaming system and a computer too (and we all know he has integrity from the whole Priceline fiasco). I remember writing my first BASIC program from the thin manual that came with the machine: an animated bird that used character graphics to flap its wings and fly around the screen. I was mesmerized!

I quickly learned the machine inside and out, bought the VIC-20 Programmer’s Reference Guide, hung the schematic of the machine on my wall and went to work. Very quickly I realized that creating games for a 1 MHz machine with 5K of RAM was going to take some heavy tricks. So I learned 6502 machine language to really pull out all the stops. An 8-bit processor with an X, a Y and an accumulator that could add and subtract any 8-bit value! X and Y could only increment and decrement. And of course, when the numbers were larger than 255, a little juggling was required.

Very soon the Commodore 64 came out, and I immediately upgraded. Programmable sprites, a SID chip for sound, 64K of RAM (with 16K bank-switched under the ROM that held Bill’s BASIC and the “OS”). Now we were playing with power. I wrote more than a few games for the Commodore 64 as a hobbyist in straight machine code: an Olympic-games-style clone, a chess game that worked over the 300-baud modem, a utility to dump on-screen graphics to the printer, some arcade variations, a sprite editor.
These were the truly brass-tacks years. 6502 machine language mixed with little bits of bootstrap BASIC was hardcore by any standard. Jumping to subroutines was something you couldn’t take lightly, and the hardware stack was only 256 bytes long, strictly for keeping the return addresses from subroutine calls. I was firmly convinced games required maximum power and could only be written in machine language. I was pretty convinced that would never change.

I was of course wrong.

Fast forward to 1987:

The IBM PC/XT was starting to show promise, and a friend introduced me to Turbo Pascal on it. 4.77 MHz! 640K of RAM. Sure, my machine was an XT, only capable of CGA graphics (and character-mode graphics, not even programmable at that). But Turbo Pascal was my first true compiler. It was actually possible to write a significant chunk of many games in high-level code; the low-level interrupt stuff was often still in assembly. We spent many, many all-nighters coding games and often ignoring our girlfriends. But they couldn’t code. :) I slowly got used to the idea that passing variables on the stack was a useful enough concept to be worth the performance hit it sometimes had on games. We are talking about functions passing parameters on the stack! But this was “non-trivial” and took cycles. Things were still so tight that every cycle mattered.

Fast forward to 1992:

I got my first 386, a 20 MHz clone with 4 megs of memory and an 80-meg hard drive, and quickly bought a Sound Blaster card for it. Before long I was coding games in MCGA 320×200 256-color graphics mode. I learned everything I could about palette manipulation and palette color theory. I was coding in C by this time, as well as 8086 assembly of course. You could actually inline your assembly right there with the C code. It was awesome. I wrote a sprite compiler that would literally turn all the sprite data into straight machine code so it didn’t have to loop and check for key colors. It used a triple-buffer, dirty-rect system to update the screen as efficiently as possible.

I learned Autodesk Animator inside and out and spent hundreds of hours working on a sprite editor with bucket fills, circles, lines and palette-manipulation functions, as well as animation capability. I was in heaven. I recruited some good friends to do graphics for me and created a number of demos to show off the library.

I still hand-optimized everything and didn’t trust the compilers at all. A C compiler might not use a bit shift for an integer multiplication. The horror!

I made sure to learn all I could about fixed-point numbers. At this point floats were still done in software and slower than hell. Since C didn’t support operator overloading, the code looked pretty clunky. But it didn’t matter. 320×200 256-color mode was fast – really fast. And I was learning all I could from Abrash’s books about Mode X, the tweaked mode 13 hex that allowed the full 256K of graphics memory to be utilized. At the time the PC was still bank-switching 64K into its memory map. And don’t even get me started on 80×86 segment:offset addressing!

I ended up releasing all my code as shareware and learning the ins and outs of shareware distribution. I remember printing labels for 5¼″ and 3½″ disks and mailing them off to shareware distribution hubs. This was before the internet took off, of course, and everything was BBS-based.

1995 was right around the corner and soon I would find myself working at Legend Entertainment on Star Control 3 – my first commercial game product.

Part 2 cracks the lid open on my game industry years.