
O1 isn't a chat model (and that's the point)

geor9e

Instead of learning the latest workarounds for the kinks and quirks of a beta AI product, I'm going to wait 3 weeks for the advice to become completely obsolete

raincole

There was a debate over whether to integrate Stable Diffusion into the curriculum in a local art school here.

Personally, while I consider AI a useful tool, I think it's quite pointless to teach it in school, because whatever you learn will be obsolete next month.

Of course some people might argue that the whole art school (it's already quite a "job-seeking" type, mostly digital painting/Adobe After Effects) will be obsolete anyway...

simonw

The skill that's worth learning is how to investigate, experiment and think about these kinds of tools.

A "Stable Diffusion" class might be a waste of time, but a "Generative art" class where students are challenged to explore what's available, share their own experiments and discuss under what circumstances these tools could be useful, harmful, productive, misleading etc feels like it would be very relevant to me, no matter where the technology goes next.

moritzwarhier

Very true regarding the subjects of a hypothetical AI art class.

What's also important is the teaching of how commercial art or art in general is conceptualized, in other words:

What is important, and why? Design thinking. I know that phrase might sound dated, but that's the work humans should fear being replaced in, and the skill they should foster.

That's also the line that at first seems to be blurred when using generative text-to-image AI, or LLMs in general.

The seemingly magical connection between prompt and result appears to human users like the work of a creative entity distilling and developing an idea.

That's the most important aspect of all creative work.

If you read my reply: thanks, Simon. Your blog's an amazing companion in the boom of generative AI. I was a regular reader in 2022/2023 and should revisit! I think you guided me through my first local Llama setup.

londons_explore

All knowledge degrades with time. Medical books from the 1800s wouldn't be of much use today.

There is just a different decay curve for different topics.

Part of 'knowing' a field is to learn it and then keep up with the field.

swyx

> whatever you learn will be obsolete next month

this is exactly the kind of attitude that turns university courses into dinosaurs with far less connection to the "real world" industry than ideal. frankly it's an excuse for laziness and luddism at this point. much of what i learned about food groups and economics and politics and writing in school is obsolete now; should my teachers not have bothered at all? out of what? fear?

the way stable diffusion works hasn't really changed; in fact people have just built comfyui layers and workflows on top of it in the ensuing 3 years. the more you stick your head in the sand because you've already predetermined the outcome, the more you pile up debt that your students will have to pay down on their own, because you were too insecure to make a call and trust that your students can adjust as needed

loktarogar

The answer in formal education is probably somewhere in the middle. The stuff you learn shouldn't be obsolete by the time you graduate but at the same time they should be integrating new advancements sooner.

The problem has also always been that those who know enough about cutting-edge stuff are generally not interested in teaching for a fraction of what they can earn doing the stuff.

dyauspitr

Integrating it into the curriculum is strange. They should do one-time introductory lectures instead.

thornewolf

To be fair, the article basically says "ask the LLM for what you want in detail"

fullstackwife

great advice, but difficult to apply given the very small context window of the o1 models

jameslk

The churn is real. I wonder if so much churn from rapid innovation in a space can suppress adoption enough that it ends up reducing innovation.

dartos

It’s churn because every new model may or may not break strategies that worked before.

Nobody designs how to prompt these models. Prompting behavior is an emergent property, so it can change entirely from one generation of a model to the next.

kyle_grove

IMO the lack of real version control and lack of reliable programmability have been significant impediments to impact and adoption. The control surfaces are more brittle than, say, regex, which isn't a good place to be.

I would quibble that there is a modicum of design in prompting; RLHF, DPO and ORPO are explicitly designing the models to be more promptable. But the methods don’t yet adequately scale to the variety of user inputs, especially in a customer-facing context.

My preference would be for the field to put more emphasis on control over LLMs, but it seems like the momentum is again on training LLM-based AGIs. Perhaps the Bitter Lesson has struck again.

miltonlost

A constantly changing "API" coupled with an inherently unreliable output is not conducive to stable business.

ithkuil

It's interesting that despite all these real issues you're pointing out, a lot of people are nevertheless drawn to interact with this technology.

It looks as if it touches some deep psychological lever: having an assistant that can carry out tasks without your having to learn the boring details of a craft.

Unfortunately lead cannot yet be turned into gold

bbarnett

Unless your business is customer service reps, with no ability to do anything but read scripts, who have no real knowledge of how things actually work.

Then current AI is basically the same, for cheap.

QuantumGood

Great summary of how AI compresses the product development (and hype) cycle.

icpmacdo

Modern AI both shortens the useful lifespan of software and increases the importance of development speed. Waiting around doesn’t seem optimal right now.

goolulusaurs

The reality is that o1 is a step away from general intelligence and back towards narrow AI. It is great for solving the kinds of math, coding, and logic puzzles it has been designed for, but for many kinds of tasks, including chat and creative writing, it is actually worse than 4o. It is good at the specific kinds of reasoning tasks it was built for, much like AlphaGo is great at playing Go, but that does not mean it is more generally intelligent.

golol

This is kind of true. I feel like the reasoning power of o1 is really only available on the kinds of math/coding tasks it was so heavily trained on.

madeofpalk

LLMs will not give us "artificial general intelligence", whatever that means.

UltraSane

An AGI will be able to do any task a human can do. Or all tasks any human can do. An AGI will be able to get any college degree.

nkrisc

So it’s not an AGI if it can’t create an AGI?

righthand

AGI currently is an intentionally vague and undefined goal. This allows businesses to operate towards a goal, define the parameters, and revel in "rocket launch"-esque hype without leaving the vague umbrella of AI. It allows businesses to claim a double pursuit: not only are they building AGI, but all their work will surely benefit AI as well. How noble. Right?

Its vagueness is intentional: it allows you to ignore the plain truth and fill in the gaps yourself. You just have to believe it's right around the corner.

pzs

"If the human brain were so simple that we could understand it, we would be so simple that we couldn’t." - without trying to defend such business practice, it appears very difficult to define what are necessary and sufficient properties that make AGI.

swalsh

In my opinion it's probably closer to real AGI than not. I think the missing piece is learning after the pretraining phase.

swyx

it must be wonderful to live life with such supreme unfounded confidence. really, no sarcasm, i wonder what that is like. to be so sure of something when many smarter people are not, and when we don't know how our own intelligence fully works or evolved, and don't know if ANY lessons from our own intelligence even apply to artificial ones.

and yet, so confident. so secure. interesting.

nurettin

I think it means a self-sufficient mind, which LLMs inherently are not.

adrianN

So-so general intelligence is a lot harder to sell than narrow competence.

kilroy123

Yes, I don't understand their ridiculous AGI hype. I get it, you need to raise a lot of money.

We need to crack the code on updating the base model on the fly, or at least daily or weekly. Where is the regular learning-by-doing?

Not over the course of a year, spending untold billions to do it.

tomohelix

Technically, the models can already learn on the fly. It's just that the knowledge they can learn is limited to the context length. They cannot, to use the trendy word, "grok" it and internally adjust the weights in their neural network yet.

To change this you would either need to let the model retrain itself every time it receives new information, or to have such a great context length that there is no effective difference. I suspect even meat models like our brains are still struggling to do this effectively and need a long rest cycle (i.e. sleep) to handle it. So the problem is inherently more difficult to solve than just "thinking". We may even need an entirely new architecture, different from the neural network, to achieve this.

chikere232

> Technically, the models can already learn on the fly. It's just that the knowledge they can learn is limited to the context length.

Isn't that just improving the prompt to the non-learning model?

KuriousCat

The only small problem is that these models are neither thinking nor understanding; I am not sure how this kind of wording is allowed with them.

ninetyninenine

I understand the hype. I think most humans understand why a machine responding to a query like never before in the history of mankind is amazing.

What you're going through is hype overdose. You're numb to it. I can get it if someone disagrees, but it's a next-level lack of understanding of human behavior if you don't get the hype at all.

There exist living human beings, children or people with brain damage, whose intelligence is comparable to an LLM's, and we classify those humans as conscious but don't extend the same to LLMs.

I'm not trying to say LLMs are conscious, just that the creation of LLMs marks a significant turning point. We crossed a barrier 2 years ago somewhat equivalent to landing on the moon, and I am just dumbfounded that someone doesn't understand why there is hype around this.

bbarnett

The first plane ever flies, and people think "we can fly to the moon soon!".

Yet powered flight has nothing to do with space travel, no connection at all. Gliding in the air via low/high pressure doesn't mean you'll get near space, ever, with that tech. No matter how you try.

AI and AGI are like this.

raincole

Which sounds like... a very good thing?

samrolken

I have a lot of luck using 4o to build and iterate on context and then carrying that into o1. I'll ask 4o to break down concepts, make outlines, identify missing information, and think of more angles and options. Then at the end, I switch to o1, which can use all that context.
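
A minimal sketch of that workflow, assuming the official openai Python SDK; the model names and the two-phase split are illustrative rather than a prescribed recipe:

    # pip install openai
    from openai import OpenAI

    client = OpenAI()

    # Phase 1: iterate on context with a fast chat model.
    messages = [{"role": "user", "content":
                 "Break down the concepts behind X, outline the options, "
                 "and list what information is still missing."}]
    draft = client.chat.completions.create(model="gpt-4o", messages=messages)
    messages.append({"role": "assistant",
                     "content": draft.choices[0].message.content})

    # ...repeat Phase 1 as needed to build up context...

    # Phase 2: hand the accumulated context to o1 in one final request.
    messages.append({"role": "user", "content":
                     "Given all of the above, produce the full solution."})
    answer = client.chat.completions.create(model="o1", messages=messages)
    print(answer.choices[0].message.content)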

ttul

FWIW: OpenAI provides advice on how to prompt o1 (https://platform.openai.com/docs/guides/reasoning/advice-on-...). Their first bit of advice is to, “Keep prompts simple and direct: The models excel at understanding and responding to brief, clear instructions without the need for extensive guidance.”

jmcdonald-ut

The article links out to OpenAI's advice on prompting, but it also claims:

    OpenAI does publish advice on prompting o1, 
    but we find it incomplete, and in a sense you can
    view this article as a “Missing Manual” to lived
    experience using o1 and o1 pro in practice.
To that end, the article does seem to contradict some of the advice OpenAI gives. E.g., the article recommends stuffing the model with as much context as possible... while OpenAI's docs note to include only the most relevant information to prevent the model from overcomplicating its response.

I haven't used o1 enough to have my own opinion.

irthomasthomas

Those are contradictory. OpenAI claims that you don't need a manual, since o1 performs best with simple prompts. The author claims it performs better with more complex prompts, but provides no evidence.

orf

In case you missed it:

    OpenAI does publish advice on prompting o1, 
    but we find it incomplete, and in a sense you can
    view this article as a “Missing Manual” to lived
    experience using o1 and o1 pro in practice.

The last line is important

yzydserd

I think there is a distinction between "instructions", "guidance", and "knowledge/context". I tend to provide o1 pro with a LOT of knowledge/context, a simple instruction, and no guidance. I think TFA is advocating the same.
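
A sketch of that split; the section labels and file names are my own illustration of the idea, not a format the models require:

    # Heavy on knowledge/context, one simple instruction, and no guidance.
    from pathlib import Path

    knowledge = "\n\n".join(
        Path(p).read_text() for p in ["schema.sql", "handlers.py", "README.md"]
    )

    prompt = f"""CONTEXT (everything potentially relevant):
    {knowledge}

    INSTRUCTION:
    Write the migration script."""
    # Deliberately absent: "think step by step", worked examples,
    # constraints on how to reason. o1 supplies all of that itself.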

chikere232

So in a sense, being an early adopter for the previous models makes you worse at this one?

wahnfrieden

The advice is wrong

3abiton

But the way they did their PR for o1 made it sound like the next step, while in reality it was a side step: a branch off the current path toward AGI.

isoprophlex

People are agreeing and disagreeing about the central thesis of the article, which is fine, because I enjoy the discussion...

No matter where you stand in the specific o1/o3 discussion, the concept of "question entropy" is very enlightening.

What is the question of theoretically minimum complexity that still solves your problem adequately? And for a specific model, are its users capable of supplying the minimum intellectual complexity the model needs?

Would be interesting to quantify these two and see if our models are close to converging on certain task domains.

patrickhogan1

The buggy nature of o1 in ChatGPT is what most prevents me from using it.

Waiting is one thing, but waiting to return to a prompt that never completes is frustrating. It's the same frustration you get from a long-running 'make/npm/brew/pip' command that errors out right as it's about to finish.

One pattern that's been effective (sketched in code below) is:

1. Use the Claude Developer Prompt Generator to create a prompt for what I want.

2. Run the prompt on o1 pro mode
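
A rough sketch of that two-step pattern, assuming the anthropic and openai Python SDKs. The prompt-generator step is approximated here with a plain Claude call, and the API-accessible o1 stands in for o1 pro mode, which only exists in ChatGPT:

    # pip install anthropic openai
    import anthropic
    from openai import OpenAI

    # Step 1: have Claude expand a terse goal into a detailed prompt.
    claude = anthropic.Anthropic()
    generated = claude.messages.create(
        model="claude-3-5-sonnet-latest",  # assumption: any capable Claude model works
        max_tokens=2048,
        messages=[{"role": "user", "content":
                   "Write a detailed, self-contained prompt for this goal: "
                   "refactor my parser module to handle streaming input."}],
    )
    detailed_prompt = generated.content[0].text

    # Step 2: run the generated prompt once against o1.
    result = OpenAI().chat.completions.create(
        model="o1",
        messages=[{"role": "user", "content": detailed_prompt}],
    )
    print(result.choices[0].message.content)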

swyx

coauthor/editor here!

we recorded a followup conversation after the surprise popularity of this article breaking down some more thoughts and behind the scenes: https://youtu.be/NkHcSpOOC60?si=3KvtpyMYpdIafK3U

cebert

Thanks for sharing this video, swyx. I learned a lot from listening to it. I hadn’t considered checking prompts for a project into source control. This video has also changed my approach to prompting in the future.

swyx

thanks for watching!

“prompts in source control” is kinda like “configs in source control” for me. recommended for small projects, but at scale eventually you wanna abstract it out into some kind of prompt manager software for others to use, and even for yourself to track and manage over time. git isn't the right database for everything.
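
for small projects, this can be as simple as a directory of text files next to the code that uses them (an illustrative layout and helper, not a prescribed one):

    # prompts/ lives in the repo, so every prompt change shows up
    # in git blame and code review like any other change.
    from pathlib import Path

    PROMPT_DIR = Path(__file__).parent / "prompts"

    def load_prompt(name: str, **kwargs) -> str:
        """Read prompts/<name>.txt and fill in any {placeholders}."""
        template = (PROMPT_DIR / f"{name}.txt").read_text()
        return template.format(**kwargs)

    # usage: load_prompt("summarize", document=doc_text)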

keizo

I made a tool for manually collecting context. I use it when copying and pasting multiple files is cumbersome: https://pypi.org/project/ggrab/
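
The underlying technique, independent of any particular tool, is concatenating files with path headers so the model can tell them apart. A hedged sketch of the general idea, not ggrab's actual output format:

    import sys
    from pathlib import Path

    def grab(paths: list[str]) -> str:
        """Bundle files into one prompt-ready blob, each prefixed by its path."""
        return "\n\n".join(
            f"===== {p} =====\n{Path(p).read_text()}" for p in paths
        )

    if __name__ == "__main__":
        print(grab(sys.argv[1:]))  # e.g. python grab.py src/*.py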

franze

I created thisismy.franzai.com for the same reason.

inciampati

o1 appears not to be able to see its own reasoning traces. Or its own context is potentially being summarized to deal with the cost of giving access to all those chain-of-thought traces and the chat history. This breaks the computational expressivity of chain of thought, which supports universal (general) reasoning when you have reliable access to the things you've thought, but degrades to a threshold circuit (TC0), a bounded parallel pattern matcher, when you don't.

adamgordonbell

I'd love to see some examples of good and bad prompting of o1.

I'll admit I'm probably not using o1 well, but I'd learn best from examples.

sklargh

This echoes my experience. I often use ChatGPT to help with D&D module design, and I found that o1 did best when I told it exactly what I required, dumped in a large amount of info, and did not expect to iterate with it multiple times.

swalsh

Work with chatbots like you would a junior dev; work with o1 like a senior dev.