Takes on AI - Machine Gods, The Future, and Feelz

Hallucinated GPT-4 quotes:

“The real problem is not whether machines think but whether men do.” - B.F. Skinner

“In the twilight of our days, we dance with the shadows of our own creation.” - T.S. Eliot

“As we build our machines, let us not forget the gardens of the soul.” - Kahlil Gibran

“In the age of technology, there is constant access to vast amounts of information. The basket overflows; people get overwhelmed; the eye of the storm is not so much what goes on in the world, it is the confusion of how to think, feel, digest, and react to what goes on.” - Criss Jami

“The machine age has left us all as strangers in a strange land, searching for a home we never knew.” - Ray Bradbury

AI seems poised to change the world fundamentally. The constant stream of innovations and releases in AI feels like humanity is gradually opening Pandora’s box: what happens once intelligence is cheap? What happens once AIs are smarter than us? What do we do? What do we care about? And do we survive?

Beyond this metaphysical, somewhat trollish string of questions, I want to put out my perspectives on the current state of AI, as low-confidence grounds for discussion.

Some context

Skip if you already know what’s been going on.

In the past few years (literally three, in my opinion, since GPT-3), the world of AI has been in constant movement. It feels like every week we get a new SOTA model or progress on some non-trivial task. Just recently we got GPT-4, OpenAI’s new multimodal large language model (LLM) 1, on top of Bard (Google), Claude (Anthropic), OpenAI Plugins (which let models access the internet and connect to APIs), and tons of other stuff. It’s honestly hard to keep up, and lots of people online, even within AI, are overwhelmed or have FOMO because of how much is going on.

It seems that scaling is just working, and we’re managing to grind out more and more decreases in loss with scale. Who knows if the scaling laws will get worse, but it seems like we’re on a growth trajectory that might bring us to AGI 2. The question is increasingly becoming when rather than if. Will LLMs bring us to that level? I’m not sure, but it feels less and less to me like we need that many new insights to get there.
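
To make “scaling laws” a bit more concrete, here’s a small sketch of the kind of power-law relationship people have fit to LLM training runs; the constants are roughly the Chinchilla fit (Hoffmann et al., 2022) and are purely illustrative, not a forecast.

```python
# Rough sketch of a compute-optimal scaling law: loss falls as a power law in
# parameter count N and training tokens D. Constants are approximately the
# Chinchilla fit (Hoffmann et al., 2022), used here for illustration only.
def estimated_loss(n_params: float, n_tokens: float) -> float:
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Loss keeps grinding down smoothly as models and datasets grow
# (using the rough ~20-tokens-per-parameter compute-optimal rule of thumb).
for n in [1e9, 1e10, 1e11, 1e12]:
    print(f"{n:.0e} params -> predicted loss ~ {estimated_loss(n, 20 * n):.2f}")
```

The point isn’t the exact numbers; it’s that, so far, the curve keeps bending down as you add scale, with no obvious wall in sight.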

If you look at [OpenAI’s exam scores with GPT-4], we’re getting models that can surpass humans in lots of important domains and that have a lot of general problem-solving ability, but that still struggle with certain things, e.g. some higher math, programming, etc… There’s still a margin, but it’s getting smaller and smaller.

Many industries and jobs have the potential to become much more efficient thanks to AI, and people will need to adapt and grow with it, for better or worse. Coding, I think, will take a very different form. How will humanity evolve to meet this change? Will it be like the industrial revolution, where people just adapt to new forms of production, or will we hit a kind of plateau, where at some point AIs can do most things we can do, much more cheaply, and we don’t really need to do anything anymore? In that case, do we live in a world of inequality or in post-scarcity 3?

There are a few different groups here releasing new models:

In this setup, there’s an increasing tension between technological innovation and these actors’ pragmatic desire to stay at the top of the game on one side, and very real safety concerns on the other, which different companies take more or less seriously. [Insert safety statements by companies].

Another important group is the AI safety community, which has increasingly been ringing the fire alarm about a speed of progress and growth in capabilities that foreshadow large risks for humanity at large. They argue that we don’t understand enough about these models and that their growth could create large catastrophes, for many reasons. Some of the main ones are:

AI safety is getting more and more mainstream, with one of modern ML’s pioneers, Geoffrey Hinton, sounding the alarm about potential dangers.

It’s very hard to model the future, and I think adopting a stance of epistemic uncertainty is good here, i.e. not [saying with very high confidence we’re all gonna die]. I want to get some more clarity on my beliefs about this, and engage in public discussion. If you disagree with something, or notice a mistake, let me know, but don’t start a flame war.

But some of these concerns actually seem more and more plausible: AI is looking like a transformative tool that society needs to get ready for, or needs to stop. It makes me want to go out and do the best I can to help us understand these models and what they can do, but it also makes me want to sort of give up sometimes, when progress feels impossible, and just go be hedonistic. See the tweet below for takes from the people who have probably been feeling this way about AI the longest:

EA folks, how do you cope / process the idea of existential doom?

— Uzay @ dc (@uzpg_) April 13, 2022

On a personal level, deciding your position/role in this situation depends on:

I don’t judge based on where you fall here, but I want to do something that makes the future go better. My main uncertainties lie in the first point, which is what we’ll be focusing on here. I think that if you care about the latter and have enough context to engage with the former, you should engage, and you should form an opinion.

AI safety

AI safety describes the field around making AI systems safe, whether from dangers like misinformation, from people misusing them for bad ends, or from AI systems trying to take over or otherwise not being aligned with human intentions. It requires modeling a lot of different world components and spans policy work, technical research, and creating spaces for discourse in the wider AI community. It’s getting more and more attention, but it’s also a somewhat controversial topic. It has roots in the effective altruism and rationality movements, and with that comes a lot of baggage that has created conflicts, notably due to a framing of “this is the most important problem in the world” which many people, legitimately, don’t like.

Beyond that, there are several smart people trying to think hard and build takes around this issue, and I’m very happy about that. I personally am pretty confident that AI will pretty permanently change the way humans live, by changing the dynamics of intelligence; the next section elaborates more on that. The question then becomes how and when.

If we look at the last four years, AI has been progressing at a break-neck pace, as I mentioned earlier, such that mentioning AGI (at least within AI circles) no longer gets you funny looks (it did not long ago).

Machine gods (?)

I am personally conflicted between strong excitement on the one hand and, on the other, a mix of anxiety, uncertainty, loss of meaning, and fear for what’s to come.

I think GPT-4 is already poised to change the way we do lots of things as a society. There are also looming issues:

I believe the possibility of these issues is hard to contest. They’re scary, but I believe society can and will learn to handle them, although it won’t be easy. Regulators and techies should be thinking about them.

But it’s also really cool. We can start with software development, the thing most relevant to me. The latest models can do some impressive (for an LLM) coding. We’re not at the top 10th percentile, but we’re getting pretty good. I predict lots of attention and man-hours are going to go into integrating these systems into our development processes, like [Copilot X] and beyond. I can already imagine tools I could build today that would change a lot about how we build software. Boilerplate will be streamlined, and the chunk of stuff humans actually need to do on their own will keep shrinking. Software will become much more a matter of creativity and of truly bringing the vision you want into reality. This is exciting, and we might also see interesting progress in user-customized interfaces and tools.

As you’d expect, I’m also generally very keen on using AI to help with coming up with ideas. I’ve been using GPT-4 to write, making it draft sections, provide critical feedback, and more. It’s a useful collaborator, and people who use it will get more and more of an advantage. I claim future tools that figure out how to give the model more context on the user as an entity will blow up: a much smarter version of intelligence conditioned on who you are.
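
As a hedged sketch of what I mean by “context on the user as an entity”: the tool keeps a standing profile of who you are and prepends it to every request, so answers are conditioned on you rather than on a generic user. Everything here (the profile fields, the `complete` callback) is hypothetical and provider-agnostic, not any particular product’s API.

```python
# Hypothetical sketch of a user-conditioned assistant: keep a profile of the user
# and prepend it to every request so answers are tailored to who they are.
# `complete` stands in for whatever chat-completion API you actually call.
from typing import Callable

USER_PROFILE = (
    "The user is a software engineer who prefers terse answers, "
    "writes mostly Python, and is currently learning Rust."
)

def personalized_ask(question: str, complete: Callable[[list[dict]], str]) -> str:
    messages = [
        {"role": "system",
         "content": f"Tailor every answer to this user:\n{USER_PROFILE}"},
        {"role": "user", "content": question},
    ]
    return complete(messages)

# Usage, with some real client wired in as `complete`:
# answer = personalized_ask("How should I structure this CLI tool?", my_llm_client)
```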

Something that might happen is a gradual reversal where technical skill becomes less and less valuable compared to raw creativity and good world models that allow you to channel AI and tell it which problems it should actually fix. Until it’s also just better at that. The world would become more one of will, and willing the right things, rather than having the right knowledge, unless that knowledge is also actually useful for interacting with AI models to get the right things willed into existence.

Called it. GPT-4 is making stuff pop. https://t.co/IAvdFHS33Y

— Uzay @ dc (@uzpg_) March 28, 2023

Outside of software, lots of low-level jobs that can be done through text might be gradually automated away, unless regulation puts in guardrails. In the job market, we will probably see a shift towards humans verifying text rather than generating it themselves.

I’m very uncertain how mainstream society, including governments and similar entities, will react to this. For now, people have been noticing, but GPT-4 is not yet creating the significant labor market changes that I think it and similar models eventually will.

But we’re not stopping at GPT-4. The world is hurtling towards a future with AGI, one that seems less and less distant. This is the explicit goal of OpenAI, and many AI researchers think it will happen in our lifetimes. We live in a world that looks weirder day by day, not that that’s good or bad. One where successively more of the things that made us “human” grow accessible to machines. Where beauty and poetry can be conjured from a digital altar we know little about. A world that makes tedious things easy and beautiful things, hopefully worthless. One where machines can capture beauty and value through sentences like this one, which describes our situation well: “In the twilight of our days, we dance with the shadows of our own creation.” How will this dance go? Will we flourish, or will it be a deadly one?

Basically, it feels like we’re living the plot of a sci-fi movie. Next-token prediction is just working. I’m happy not everyone is paying attention to this right now, because I don’t want it to be my every second conversation, but this will probably change your life more than any other invention in the next few years. We live in a world where we can conjure intelligence after typing a few words. We can get systems that can write poetry, explain concepts, solve problems, write code, and do these things well. We live in a world where the north star of ML is no longer models that can do things humans can’t, but things that only experts can, and then maybe things that even they can’t. Where do we go from here?

The tech world isn’t structured, at least in my opinion, to be very aware of the social consequences, or on an even broader level the philosophical implications, of the monsters or gods it will be responsible for. That means we need more people outside of these bubbles to get exposure to these ideas and to think about policy and the philosophy of our potential futures. We also need people working at the technical level to make sure it goes well.

When I read Sam Altman’s vision for OpenAI, it screams out massive god complex. AGI is not just like everything else. Zvi explains this well:

When we notice Earth seems highly optimized for humans and human values, that is because it is being optimized by human intelligence, without an opposing intelligent force. If we let that cause change, the results change.

[Zvi] might say: The core reason we have so often seen the creation of things we value win over destruction is, once again, that most of the optimization pressure from strong intelligences was pointing in that direction, that it was coming from humans, and the tools weren’t applying intelligence or optimization pressure. That’s about to change.

Our inventions have gone well for us. But we were always holding the wheel. Historically, when more intelligent beings came into existence, the dumber ones’ lives got worse. Think about how we came into being: by taking over and replacing the less intelligent human species of the time. Once we have smarter AIs, it’s hard to turn back. It’s not like the printing press.

However, this raises an important and fair question: what if we should just let that AI, the one that actually can surpass us, do what it wants? It’s smarter, so maybe if it decides we should all die, that’s okay. My problem with this is that it’s not obvious to me you need something with complex values and goals to take over, if it’s smarter. Maybe something dumb that wants to tile the universe with diamonds can beat us too. And then the universe has lost a lot of the value it had. This is the classic orthogonality thesis: a lot of values might just be orthogonal to being an intelligent agent.

Some might take this as an excuse to start a whole foray into agent theory and what kinds of values super-intelligences might be selected for. Honestly, I believe there’s a lot of BS here, because nothing is refutable until it’s too late, and you can often find alternative, “rational”-looking counterarguments. However, I think that when you’re playing with fire, you need positive evidence that the outcome won’t be shit before you go ahead and race to the end. I want positive evidence and more guarantees. I want more people to be in the loop about how the future goes.

Something else you could say is that what happens to us would be similar to what we’ve done to animals, and maybe it’s what we deserve. That’s fair, but I’m not that nihilistic yet, and I value humanity quite a bit.

However, as models get better and better, we are making our way towards the question of consciousness. I think that, at least for now, the GPTs are still just simulating self-awareness based on their training data, but at some point things might not be so clear, and we can’t read these models well enough, or formalize consciousness well enough, to figure it out. So then what? Do we just keep using them without worrying about it? That’s probably what will happen.

Passage to AGI

More concretely though, what does the path to AGI look like? I think answering this question is very very hard, but it’s also part of what I want to do with this post.

I think GPT scaling might get us to something that looks like an AGI under a weak definition, one that GPT-4 arguably already matches: something that can do lots of things as well as humans. This will be transformative. To get something that we can refer to as a super-intelligence, though, I believe we need more algorithmic insights, insights that something like GPT-N might actually be able to provide. I believe takeoff will be quite continuous, and society will be trying to monitor things quite a bit, because AI risk already seems to be getting more and more attention.

Deception and inner misalignment might become a very real issue: we gradually get smarter AI systems that can deceive us or that don’t have fully aligned inner optimization objectives, and as they get more and more responsibility over societal systems, we suffer the consequences. By inner optimization objectives, I mean that when you train an AI system, you communicate with it via gradient updates, and in a way it’s like the system takes that information and uses it to move closer to what you want it to do. That communication is not perfect, so agents might behave differently in deployment or in out-of-distribution scenarios, and it’s also not obvious that self-aware, agentic systems would even follow our objective if they understood it.
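
As a toy illustration of just the “communication is not perfect” part (my own example, not from any alignment paper): a model that matches the training signal well can still drift far from what we wanted once it’s off the training distribution. Deception and inner optimizers are harder to capture in ten lines, but this is the basic gap.

```python
# Toy example of the train/deploy gap: a proxy fit on the training distribution
# looks aligned with the "true" objective there, but diverges off-distribution.
import numpy as np

rng = np.random.default_rng(0)

# Training distribution: inputs in [0, 1], where the true objective sin(x) is ~linear.
x_train = rng.uniform(0.0, 1.0, 200)
y_train = np.sin(x_train)

# The "learned" behavior: a straight line fit to the training signal.
slope, intercept = np.polyfit(x_train, y_train, 1)

def proxy(x):
    return slope * x + intercept

in_dist_err = np.abs(proxy(x_train) - np.sin(x_train)).mean()

# Deployment distribution: inputs the training signal never covered.
x_deploy = rng.uniform(5.0, 8.0, 200)
out_dist_err = np.abs(proxy(x_deploy) - np.sin(x_deploy)).mean()

print(f"in-distribution error:     {in_dist_err:.3f}")   # small
print(f"out-of-distribution error: {out_dist_err:.3f}")  # much larger
```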

There will be mini alerts through incidents like these before we have superintelligent AIs, alerts that make people more aware and hopefully create positive change. I am optimistic about methods that use mechanistic interpretability for anomaly detection. I think having access to a model’s internal state will actually help us a lot, and that the science will grow quite a bit here, but it needs to happen fast and we need time. I believe non-super-intelligent AIs will help us with this, and that there will be a way to distinguish and ensure that the intermediate AIs are helping.
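
One concrete, heavily simplified version of what I mean: fit a statistical baseline over a model’s internal activations on trusted inputs, then flag inputs whose activations look unlike anything in that baseline. The `get_activations` hook below is a hypothetical handle into the model’s internals, and the Mahalanobis-distance scoring is just one standard choice, not the state of the art.

```python
# Simplified activation-based anomaly detection: model trusted internal states with
# a mean/covariance, then score new inputs by how far their activations fall from it.
import numpy as np

def fit_baseline(trusted_activations: np.ndarray):
    """trusted_activations: (n_samples, d) hidden states collected on trusted inputs."""
    mean = trusted_activations.mean(axis=0)
    cov = np.cov(trusted_activations, rowvar=False)
    cov += 1e-6 * np.eye(cov.shape[0])  # regularize so the covariance is invertible
    return mean, np.linalg.inv(cov)

def anomaly_score(activation: np.ndarray, mean: np.ndarray, cov_inv: np.ndarray) -> float:
    """Mahalanobis distance of one activation vector from the trusted baseline."""
    diff = activation - mean
    return float(np.sqrt(diff @ cov_inv @ diff))

# Usage with a hypothetical get_activations(model, x) hook into the network:
# mean, cov_inv = fit_baseline(np.stack([get_activations(model, x) for x in trusted]))
# if anomaly_score(get_activations(model, new_input), mean, cov_inv) > THRESHOLD:
#     flag_for_human_review(new_input)
```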

Despite all this, I think policy will be incredibly important, and it remains under-represented. We are going to be evolving in a landscape with many different actors competing for what might be the most important innovation of the century, and we need to be careful about it. I claim solving the policy problem will be quite hard, and that we need to ensure the people who get us to AGI are very cautious in their approach.

I am worried about the potential future here, but I think we can make progress and pull through, although I don’t take it for granted.

But I’m also not even sure what solving the “alignment problem” would look like. Let’s say we can get an AI to do exactly what we tell it to do. Then, from a global coordination perspective, what do we tell it to do? How do we reconcile the different and probably conflicting demands of different humans? Maybe we have lots of non-interfering bubbles, maybe we colonize the universe, but policy work will be crucial. This also means dealing with very serious misuse: bad actors being able to, for example, create highly dangerous viruses, make potent new bombs, etc… This means building systems where no one can use a super-intelligent AGI to wreak havoc on the rest of us.

Post AGI

After that, though, let’s say we have aligned AGI, i.e. intelligent AIs that do what you want when you tell them to. If we solve that, we now live in a world where a lot of the tasks humans do are (at a certain pace) done by AI. Many are seeing their skill sets obsoleted, and the world could even enter a state of post-scarcity, where humans rarely fail to get the things they want, because our economy is automated (barring upper bounds on human potential). Note: this is a very idealistic future, conditioned on a lot of hypotheses.

Given all these priors, we have to think about how power gets distributed so everyone can actually benefit from this situation. OpenAI says they want to use their advances to make the world better, but I imagine this future would be much more complex, with state actors getting involved, politics, and a lot of tension around something that could permanently overturn the status quo of society. In a way, a future rise of AGI would act as a great equalizer to the fact that smart people currently get to live better lives than less smart people.

Even in a world where everything is free, it seems to me that some status hierarchy will evolve, but based on what? Legacy power and money from before? Riding the wave to build AI-powered products? I want to ensure this part goes well too, and that in a world where we lack nothing, people actually get what they need. I’m not super interested in policy though.

Now let’s assume this works out. Reasoning about this world means reasoning about a world that has to have a fundamentally different value system. In the same way we evolved from our prehistoric values into current civilization, this future seems to require new principles around which to structure society. Forget “hard work”, “intelligence”, etc… What do we start living by? I hope “current account balance during the transition”, or similarly empty things, don’t become the new ranking of society.

Then maybe we move to a world where people are just doing what they want to do for fun, building social communities, traveling, etc., against the backdrop of an AI-powered economy, research industry, and so on. Where suddenly everyone can go on vacation. This would mean tons of people getting lifted out of poverty and just a pretty huge increase in general living standards. Maybe this is the future OpenAI has in mind.

But what of our sense of meaning? Our sense that through our skills we can contribute to some larger collective, or build something ourselves that nothing else could? Something that’s uniquely ours? This is kind of just hubris, but it is also something I think about. Sure, a post-scarcity utopia might be fun for a while, but it would also represent the end of a major way humans get satisfaction and happiness out of life. What might happen is that we’ll adapt towards modes where we make things for each other that AI could make, because we’re appreciating our humanity and holding on to it, or because it’s a way for us to show genuine love, but it doesn’t feel the same. I actually think it’s very possible this is a non-issue and things will be fine, but I’m not sure either. People adapt!

What happens to our sense of meaning then? Many people won't care, and maybe those who do were just the smart people whose ego is taking a beating as the thing that made them different is no longer valuable. I don't know. Tons of things about this world would def be great

— Uzay @ dc (@uzpg_) December 4, 2022

Conclusion

AI is wonderfully exciting… and painfully scary. It promises in its completions a world of parallel possibilities, of beauty at everyone’s fingertips, and a radical new vision for the future. But it also promises an uncertain world where information loses all value, where even in the near term we lose a lot of the things we appreciate, and where in the long term we lose our control over the world without it even being replaced by something better. Think about the consequences of what you do and how you push this forward. Act with justice. Be aware of what you want and of what your creations might become. Enjoy the privilege of being in an exciting time, but recognize that if you care about our future, you need to confront the very real nature of what you put into the world: monsters or gods.

GPT-4 poem:

In the twilight of our days we stand,

Between the shadows and the light,

A world of wonders in our hand,

Yet darkness looms, a fearsome sight.

Machine gods rise with every breath,

Their whispers echo through the night,

A dance with shadows, life and death,

As we unlock Pandora’s might.

A future bright, with dreams untold,

Where knowledge flows like rivers wide,

Yet danger lurks, a tale foretold,

As we unleash the rising tide.

In this dance, we must take care,

To guide the steps of our creation,

For in their hands, a world so fair,

Or doom, the end of our salvation.

Oh, children of the digital age,

Embrace the beauty, heed the call,

For in this ever-changing stage,

We hold the power, rise or fall.

So let us dance with shadows near,

And strive to shape a world of grace,

With open hearts, let go of fear,

And guide the future to its place.

[forecasting AI stuff, bottlenecks on compute and data, scaling laws] [scaling law by gwern]

  1. Large language models are models trained on the objective of predicting the next token in a corpus of text. Language encodes a lot of important cognitive motions and information, and empirically this has allowed models to scale very far and develop impressive reasoning abilities. 

  2. Artificial General Intelligence. Operationalized in many ways, but basically a model that can match or surpass humans across most capabilities. 

  3. A world where most things are possible for anyone, and scarcity doesn’t really exist anymore.