On Hallucinations, Junk Food & Alignment

How to think of Generative AI

Generative AI is really complex. 

Like every hour there seems to be a new breakthrough, with supercomputers and matrix multiplications and more.

Except it isn't really: the code is actually quite condensed and the operations not too bad (the best course, if you'd like one, is fast.ai's for existing developers to whet their beaks).

When considering the impact of this technology, how it applies to you and your world, there is a really easy comparator I have found.

These models are like really talented graduates that occasionally go off their meds.

What would you do if you had an army of really talented grads available at the spin up of a GPU?

Dealing with uncertainty

Going off your meds is tough. Right now I am trying to adjust my dosage of Elvanse for ADHD as it's all a bit much, and I have had the weird side effect of not being able to move because my body says sit down and type, as everything is so important and the list is endless.

Occasionally it allows me to have a bite to eat or go to the toilet. Very kind.

Building out my executive office and team over the last year, from what was basically a family-run business operating partly out of an office and partly out of our home, has allowed me to handle more and more, but it is still quite a crazy time.

To navigate all of this I have done what we all do: set in place heuristics, patterns and processes to try to untangle it.

But these are all made up, the best approximation I have of how to deal with an incredibly uncertain time.

When we deal with stable environments and risk regimes, we usually pull some probability numbers out of our butt (20% chance of recession) and then do an expected utility calculation, combining them into a probabilistic outcome.

When we deal with uncertainty we usually minimise the maximum regret: how will we feel if things reasonably go against us? For example, an AI that can do anything can do something bad too.
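
To make the contrast concrete, here is a minimal sketch in Python with made-up payoffs and probabilities (the strategy names, scenarios and numbers are purely illustrative): under risk we trust our odds and maximise expected utility, under uncertainty we only compare regrets.

```python
# Toy payoffs for two strategies across two scenarios (illustrative numbers only)
payoffs = {
    "expand": {"recession": -100, "growth": 100},
    "hold":   {"recession":   -5, "growth":  40},
}

# Risk: we claim to know the odds ("20% chance of recession") and maximise expected utility
probs = {"recession": 0.2, "growth": 0.8}
expected_utility = {
    strategy: sum(probs[world] * value for world, value in outcomes.items())
    for strategy, outcomes in payoffs.items()
}
print(expected_utility, max(expected_utility, key=expected_utility.get))
# {'expand': 60.0, 'hold': 31.0} expand

# Uncertainty: no trusted odds, so minimise the maximum regret instead
best_in_world = {world: max(p[world] for p in payoffs.values()) for world in probs}
max_regret = {
    strategy: max(best_in_world[world] - outcomes[world] for world in probs)
    for strategy, outcomes in payoffs.items()
}
print(max_regret, min(max_regret, key=max_regret.get))
# {'expand': 95, 'hold': 60} hold
```

Note that the two rules can disagree: the expected-utility grad expands, while the minimax-regret grad holds.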

This has to be heuristic-based, as we often do things without any context or knowledge.

It is the same for our smol generative AI models: they come out of the pretraining university with trillions of tokens of knowledge about just about everything (know-it-alls are often insufferable tbh) as really creative liberal arts grads, and then get RLHF'd into being accountants and customer service reps.

Take the whole internet, warts and all, force the AI to watch it, and it is no wonder they come out of their training runs a bit weird.

But what they do have is a condensed neural net of weights encoding a principle-based analysis of just about everything they have seen, allowing them to deal with unstructured and uncertain things, in contrast to big-data-based AI that can only extrapolate.

On hallucinations

Hallucination is a word we hear often with these models, but I think it's a misnomer. 

They are actually reasoning machines, not calculating machines, and the fact that GPT-4 can pass just about every exam except English Lit (know-it-all ptth) is a crazy outcome for something that is probably only a few hundred gigabytes in size (judging by NVIDIA saying the dual H100 NVLink setup was designed especially for ChatGPT, presumably GPT-4 at around 200b parameters).

The fact that Stable Diffusion can generate just about anything from 2 gigabytes of weights in total is insane. A customised version is just a few kilobytes more.

These models were never meant to be fact stores or compression engines - it's impossible to compress information that much (if it were, we'd be worth a trillion dollars, our Weissman Score being off the charts).

There is also the interesting idea that the outputs are views into another world: just as we compress our knowledge of the world around us and fill in the gaps (eg around our optic nerve's blind spot), the world is a dynamic simulation to us.

Why did you do that?

When thinking about interpretability, explainability etc, this also puts things in an interesting light.

If you ask me why I do stuff, I can make a very logical-seeming case, but really it is just post-hoc rationalisation of me operating by the heuristics and principles that I have developed over the years.

It is rarely 100% fact based, especially when dealing with decisions in the face of uncertainty.

For these models it is the same given the amount of data: if they are talented grads, there is no necessarily deterministic output (indeed, try reducing the temperature of a language model to its minimum and seeing if the outputs are always the same) - post-hoc rationalisations are about as good as you'll get.
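
If you want to try that parenthetical experiment yourself, here is a rough sketch using the small open gpt2 model via the Hugging Face transformers library (the model choice, prompt and exact temperature value are just illustrative assumptions): run it a few times and compare the completions.

```python
# Sketch: turn the temperature nearly all the way down and see whether the
# completions line up. gpt2 and the prompt below are arbitrary choices.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Why did you recommend that course of action? Because"
inputs = tokenizer(prompt, return_tensors="pt")

for _ in range(3):
    output = model.generate(
        **inputs,
        do_sample=True,
        temperature=0.05,  # near-minimum: sampling is almost, but not quite, greedy
        max_new_tokens=40,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

At very low temperature the runs usually agree, but sampling and floating-point quirks mean they don't have to.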

Junk food for the brain

In the light of the above, what do we have now?

We have really talented grads forced to watch the whole internet, then we spend months doing RLHF to make them more boring and in line with what we need.

The more we feed them, the more capable we have seen them become, but they are also turning weirder and weirder.

We are feeding them junk food and no wonder they are so unhealthy.

So what is the solution?

Organic, free-range AI

As the saying goes, you are what you eat, so we should feed these models better things.

No more web crawls, no more gigantic datasets with all sorts in them (some interesting discussions on this shortly with respect to the StableLM alpha and why it was the way it was); instead let's build good quality datasets.

We funded the compute for the smart folk behind the DataComp project as one of our grantees; the 12 billion image-text pairs is another huge dataset, but more interesting is the 1.5b-pair subset that broke previous records on CLIP image-to-text quality through better data quality.

Aligning things more capable than us is really difficult.

My buddy JJ made a great point when he noted that alignment is orthogonal to freedom:

The only way to be fully sure that something more capable than you is fully aligned to what you want (which may be different to what Bob wants...) is to remove its freedom.

That is output-focused, but perhaps a better way to improve the chances of success is to fix the inputs as fast as we can, even as folk race ahead (I'll discuss the FLI letter another time).

Let's all get together to build good quality, open, permissively licensed datasets so folk don't need to clog the arteries of these GPUs with junk, which often causes dreaded loss spikes.

I will share more of my thoughts on how to do this properly in a longer post on alignment; the design patterns around building these systems are changing and are likely to continue to change going forward.

However, for now, if you're thinking about the impact of this tech: what would you do if you had at your fingertips an army of really talented grads that were kinda unhealthy and sometimes off their meds?

How would you make sure the next generation of grads was a bit more stable and balanced?

Let's start from there and move on.