We Undersell Human Computing Power

First off, I really suck at being consistent about kicking out blog posts.  I blame my scatterbrained ADHD nature.  Then again, I get these bugs in my brain that I just must put to paper, and therefore you find yourself before another one of my articles.

I think a lot of the tech bros and AI enthusiasts are misunderstanding something that is going to set them up for some serious disappointment.  And I feel it all comes from the science fiction portrayal of computerized thinking systems, which are always depicted as far superior computing machines to the human mind.

I remember how Star Trek portrayed Data as a highly analytical being who could compute information so much faster than humans.  There's a scene in one of the movies where Data says he seriously considered a proposition for a second, and for an android that's an eternity.  That's the general portrayal of computerized beings: they always come up with the right answer from the data instantly, without any pondering, completely and utterly unlike how long it takes you to compile a copy of a Second Life viewer (without good hardware you're going to be waiting a half hour).

Consider then the fact that your brain actually out-computes any computer on a measure of raw computational power.  It just isn't designed around the concept of numbers and such.  It runs on some biological computation whose workings we cannot even begin to map out.

Imagine catching a baseball.  In the first few moments after that baseball is released you know how far it is from you and how quickly it is approaching.  You know this because your eyes gave your brain two points of reference on the ball's position, which it could use to triangulate the ball's distance.  Your brain then looked at how that position changed, combined that with its familiarity with the rate of fall under gravity, and projected a trajectory for the ball in the blink of an eye.  Based on this knowledge it adjusted the tension of hundreds of muscles, using its model of your body, how it moves, and how gravity acts on it, to shift your hands into the path of the ball.  That was an insane amount of calculation.  So much so that we struggle to convey it to a computer to replicate the feat.
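
To give a feel for the kind of math the brain is doing implicitly, here's a toy sketch of projecting a ball's path from just two sightings.  All the numbers are made up, and it's flattened to 2D with no air drag; this is an illustration of the idea, not a physics engine:

```python
G = 9.81  # gravitational acceleration, m/s^2

def predict_catch_point(p0, p1, dt, hand_height):
    """p0, p1: (x, y) positions in metres observed dt seconds apart.
    Returns the x position where a hand at hand_height meets the ball."""
    vx = (p1[0] - p0[0]) / dt  # two sightings give you velocity
    vy = (p1[1] - p0[1]) / dt
    # Solve hand_height = y + vy*t - 0.5*G*t^2 for t, taking the later
    # root (the ball on its way back down).
    a, b, c = -0.5 * G, vy, p1[1] - hand_height
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # the ball never reaches that height
    t = (-b - disc ** 0.5) / (2 * a)  # later root, since a < 0
    return p1[0] + vx * t

# Two sightings 0.1 s apart: the ball rose 0.2 m while closing 1.5 m.
print(predict_catch_point((0.0, 1.8), (1.5, 2.0), 0.1, hand_height=1.5))
```

And your brain does that, plus the depth perception and the control of hundreds of muscles, in a blink, without ever seeing a number.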

Sure, we've gotten some bots playing soccer, moving in a rather slow fashion.  I notice a lot of the bipedal robots tend to fudge things to get to bipedal walking easily rather than deal with the inherent difficulties of how the human form moves.  Consider ASIMO or those soccer-playing robots.  Their feet are always broad, and they always take real small steps, keeping one foot firmly planted and never getting far from a stable balance.  Having a broad foot like that makes it far easier to keep your footing without complex calculations.  It also deeply limits your gait, since you aren't driving off your foot.  A human soccer player would run circles around any current robot.  We solved the problems of locomotion and balance not by making the computers smarter so much as by making the problem easier, simplifying the physics around it.

Neural networks are the way we train things to do tasks where we don't know how to describe the task, but we do know how to describe the goal.  I could go into detail on how it works, but CGP Grey has an excellent video on the topic already.  In short, you tell the trainer what the desired outcome is, give it inputs paired with that outcome, and through the magic of survival of the fittest you eventually get that output.  Though it may take a lot of time to train, this has been used since the 90s by scientists to produce purpose-built programs that do rather particular tasks, like picking very specific signals out of a heap of data.
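
Here's a minimal sketch of that training idea.  The "network" is just two weights and the hidden rule is my own toy example; real networks have vastly more knobs, but the shape is the same: describe the goal, score the attempts, keep the fittest:

```python
import random

# Desired behaviour is given as input/output examples (the goal),
# never as a procedure (the method). The hidden rule here is y = 2a + 3b.
examples = [((1.0, 2.0), 8.0), ((3.0, 1.0), 9.0), ((0.0, 4.0), 12.0)]

def fitness(weights, examples):
    # Lower is better: total squared error against the desired outputs.
    return sum((sum(w * x for w, x in zip(weights, xs)) - y) ** 2
               for xs, y in examples)

population = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(50)]
for generation in range(200):
    population.sort(key=lambda w: fitness(w, examples))
    survivors = population[:10]  # survival of the fittest
    # Refill the population with mutated copies of the survivors.
    population = survivors + [
        [w + random.gauss(0, 0.1) for w in random.choice(survivors)]
        for _ in range(40)
    ]

population.sort(key=lambda w: fitness(w, examples))
print(population[0], fitness(population[0], examples))  # converges near [2, 3]
```

Nobody ever told the program the rule; it just got scored until the right weights survived.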

It can be real good at a very specific singular task.  Much like how a dog trained to detect bombs is good at detecting bombs.  But you really can't make it good at everything in general.  You'd have to individually train it on an innumerable number of tasks and then create a task-chooser algorithm to switch between whichever task needs to be done when.
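
A sketch of what that task chooser would look like, with every function a hypothetical stand-in for an individually trained model:

```python
# A pile of individually trained, single-purpose models, with a
# front-end that picks which one to run. All stand-ins, not real models.

def detect_bombs(data):    return "bomb-sniffing result"
def fold_protein(data):    return "protein-folding result"
def suggest_video(data):   return "video-recommendation result"

SPECIALISTS = {
    "security": detect_bombs,
    "biology":  fold_protein,
    "media":    suggest_video,
}

def choose_task(request):
    # In reality this would itself be a trained classifier; a dumb
    # keyword check stands in for one here.
    if "bomb" in request:
        return "security"
    if "protein" in request:
        return "biology"
    return "media"

def handle(request):
    return SPECIALISTS[choose_task(request)](request)

print(handle("is there a bomb in this bag?"))  # -> bomb-sniffing result
```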

A good example of this being done, and a clear sign that the chatbots are inherently flawed, is their inability to do math.  When given a mathematical problem they can't figure out the answer; their math is worse than a first grader's.  The solution was to create a math routine that is separate from the chatbot, and when it hits a math problem it invokes the math routine.  This means the chatbot still doesn't know the answer; they handed it a calculator.
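
Something like this, where a wrapper spots plain arithmetic and routes around the model entirely.  The detection regex and the ask_chatbot stub are my own illustration, not any particular vendor's implementation:

```python
import ast
import operator
import re

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculate(expr):
    """The separate math routine: evaluates plain arithmetic safely."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("not simple arithmetic")
    return walk(ast.parse(expr, mode="eval"))

def ask_chatbot(prompt):
    return "a fluent, confident, possibly wrong answer"  # model stand-in

def answer(prompt):
    if re.fullmatch(r"[\d\s.+\-*/()]+", prompt.strip()):
        return calculate(prompt)  # invoke the math routine, not the model
    return ask_chatbot(prompt)

print(answer("12 * (3 + 4)"))  # 84, from the calculator, not the chatbot
```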

The chatbot is fed the sum of everything the internet has ever output, along with the inquiries that led to those outputs, and told to match the patterns.  This is something you've all known and been told over and over.  It's pattern-matching search results to answers.  And you probably also know those answers are wrong almost half the time.  All that immeasurable amount of training went into making it good at the task of producing an answer that looks convincing.  And it is good at that task, but it isn't good at anything else, like the accuracy of that answer or handling citations.

Now you can run this chatbot stuff on your PC.  It doesn't guzzle as much energy to get a simple answer as some people would like you to believe (it's the TRAINING that requires massive data centers guzzling immense amounts of power).  And when you run this chatbot and ask it a question, it takes a moment before it kicks out any answer.  So your desktop computer, thinking about as long as you would, comes up with an answer that only might satisfy you and might be wrong.  And to do so it had to be trained off a vast data center consuming insane amounts of energy.  And in the end you get a mindless talking machine.  Immense amounts of energy and resources thrown into creating something that can mimic just a fraction of our mind's ability to answer questions.  And our brains have the ability to admit they are wrong, or to go research the topic and come up with an answer later.

And take a moment to appreciate that they haven't even come CLOSE to our mental power here.  And now there are studies suggesting our brains might be quantum computers, which is bad news for computer AI, because quantum computing blows anything a conventional computer can do out of the water.

Ever do a task so many times that you start to be able to intuit facts about it via gut feeling?  Like just knowing how much yarn your current knitting job is going to consume, with surprising accuracy?  Or knowing whether you can make that jump shot?  Or diagnosing a car engine from a few noises and some quick observations?  That's when you've done a task and gotten so deep into it that your brain has collected enough data to become computationally very good at that specific task.  Since your brain doesn't natively handle the concept of math, it gives you answers in the form of vibes and feels.  You just know the answers.  Except in the cases where your brain is wrong, in which case you just know all the wrong answers.  It's funny that in trying to make a thing more human, they created a thing that learns and believes all the wrong answers like a human.

The best way to use AI is the same way it has been used for a long time: train it to a single task.  Like an algorithm that forwards Twitter posts, or suggests your next video to watch, or figures out how proteins fold.  These are all tasks AI has been used for to great effect.  John Carmack of id Software fame got interested in AI when it was trending and started to tinker with it.  He came to the conclusion (which he posted on Twitter, and I don't have the time to dig it back out) that the people vibe-coding are going about it ass backwards.  He said the ideal is to have human programmers write the code, use an AI to flag problem code, and then have humans look over those flags and either fix the code or mark the flag a false positive.
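
As a sketch of that workflow, where the ai_flagger is a hypothetical stand-in for whatever model does the flagging:

```python
# Humans write the code, an AI pass flags suspect lines, and humans
# triage the flags. Only the plumbing here is real; the flagger is fake.

def ai_flagger(source):
    """Hypothetical flagger: yields (line_number, reason) pairs."""
    for number, line in enumerate(source.splitlines(), start=1):
        if "eval(" in line:
            yield number, "eval() on possibly untrusted input"

def triage(source):
    # The human step: each flag gets looked over and either fixed or
    # marked a false positive; here we just print the review queue.
    for number, reason in ai_flagger(source):
        print(f"review line {number}: {reason}")

triage("x = input()\ny = eval(x)\n")
```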

Now if you look, there's a variety of articles saying it's taking just as much work, if not more, to fix the screwups of poorly designed vibe-coded work.  Not only that, one of my programmer friends mentioned she has a new tool at work called Qodana.  It's an AI-assisted tool that works in the fashion John Carmack described.

Time and again the reality becomes clear.  Trying to get a generalized computer program that reacts to requests like a human and works on them does not generate good results.  Using neural networks to make computer programs that do very specific, focused tasks real well, like a trained dog, tends to be the winning approach.  Computers are nowhere near your level of computational power.