
OSI readies controversial open-source AI definition

didibus

> Maybe the supporter of the definition could demonstrate practically modifying a ML model without using the original training data, and show that it is just as easy as with the original data and it does not limit what you can do with it (e.g. demonstrate it can unlearn any parts of the original data as if they were not used).

I quite like that comment that was left on the article. I know that with some models you can tweak the weights without the source data, but it does seem like you are more restricted without the actual dataset.
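For concreteness, a minimal sketch of what "tweaking the weights without the data" looks like: fine-tuning a downloaded model on new data only. This assumes the Hugging Face transformers API, and "gpt2" is just a placeholder for any open-weights model.

```python
# A sketch of modifying weights without the original training data:
# fine-tune a downloaded model on *your own* corpus.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")  # weights only, no dataset

batch = tokenizer("A sentence from my own new corpus.", return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for _ in range(3):  # a few gradient steps on the new data
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# What this can't do: selectively "unlearn" an example from the original
# training set, because that data was never shipped with the weights.
```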

Personally, the data seems to be part of the source to me, in this case. I mean, the code is derived from the data itself, the weights are the artifact of training. If anything, they should provide the data, the training methodology, the model architecture, the code to train and infer, and the weights could be optional. I mean, the weights basically are equivalent to a built artifact, like the compiled software.

And that means commercially, people would pay for the cost of training. I might not have the resources to "compile" it myself, aka, run the training, so maybe I pay a subscription to a service that did.

lolinder

A lot of people get hung up on `weights = compiled-artifact` because both are binary representations, but there are major limitations to this comparison.

When we're dealing with source code, the cost of getting from source -> binary is minimal. The entire Linux kernel builds in two hours on one modest machine. Since it's cheap to compile and the source code is itself legible, the source code is the preferred form for making modifications.

This doesn't work when we try to apply the same reasoning to `training data -> weights`. "Compilation" in this world costs hundreds of millions of dollars per run. The cost of "compilation" alone means that the preferred form for making modifications can't possibly be the training data, even for the company that built the thing in the first place. As for the data itself, it's a far cry from source code—we're talking tens of terabytes of data at a minimum, which is likewise infeasible to work with on a regular basis. The weights must be the preferred form for making modifications for simple logistical reasons.

Importantly, the weights are the preferred form for modifications even for the companies that built them.

I think a far more reasonable analogy, to the extent that any are reasonable, is that the training data is all the stuff that the developers of the FOSS software ever learned, and the thousands of computer-hours spent on training are the thousands of man-hours spent coding. The entire point of FOSS is for a few experts to do all that work once and then we all can share and modify the output of those years of work and millions of dollars invested as we see fit, without having to waste all that time and money doing it over again.

We don't expect the authors of the Linux kernel to document their every waking thought so we could recreate the process they used to produce the kernel code... we just thank them for the kernel code and contribute to it as best we can.

dragonwriter

> A lot of people get hung up on `weights = compiled-artifact` because both are binary representations,

No, that's not why weights are object code. Binary vs. text is irrelevant.

Weights are object code because training data is declarative source code defining the desired behavior of the system and training code is a compiler which takes that source code and produces a system with the desired behavior.

Now, the behavior produced is less exactly known from the source code than is the case with traditional programming, but the function is the same.

You could have a system where the training and inference code was open source and the model specified by the weights itself was not — that would be like having a system where the software was not open source, but the compiler used to build it and the runtime library it relies on were. But one shouldn't confuse that with an open source model.
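To make the mapping concrete, here is a toy sketch (my own illustration, not part of the original comment) in which a handful of (x, y) examples act as the declarative "source" and a least-squares fit plays the role of the training code:

```python
# Toy rendering of "data as declarative source, training as compiler":
# the examples declare the desired behavior; fitting emits the "object code".
import numpy as np

# "Source code": examples declaring y = 2x + 1, without saying how.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])

# "Compiler": least squares produces the weights.
A = np.hstack([X, np.ones_like(X)])
weights, *_ = np.linalg.lstsq(A, y, rcond=None)

print(weights)  # ~[2.0, 1.0] -- the "object code" implementing the spec
```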

lolinder

This is a much more compelling explanation than any I've seen so far.

What do you do with the fact that no one (including the companies who do the initial training) modifies the training data when they want to modify the work? Are the weights not the preferred form for modifying a model?

didibus

The source is not open, so not open source. It's as simple as that to me.

That doesn't mean you can't allow the modification of your weights, but a model is not open source because it lets you modify its weights.

Take the JDK for Java: it is open source, but the actual JDK builds of it are not all free to use or modify. It's quite annoying to build those, to patch source from newer ones into builds of older ones, to cherry-pick, and all that.

So it enables an economy of vendors that do just that, and people willing to pay for the simple act of building the JDK from its open source.

I'm not even sure it makes sense to license weights; they're not the product of any creative process, and I don't think weights should even be copyrightable. Weights are part of the product you sell, so maybe a EULA or terms of use apply. It's like with video games: you're not always allowed to modify the binary (cheating), and if you do, you break the EULA and forfeit the right to play the game.

smolder

If you can't bootstrap a set of weights from public material, the weights aren't open source, because they're derivative content based on something non-open.

Trying to draw an equivalency between code and weights is [edited for temperament, I guess] not right. They are built from the source material supplied to an algorithm. Weights are data, not code.

Otherwise, everyone on the internet would be an author, and would have a say in the licensing of the weights.

lolinder

> Trying to draw an equivalency between code and weights is ridiculous, because the weights are not written in the same way as source code.

By the same logic, the comparison between a compiled artifact and weights fails because the weights are not "compiled" in any meaningful sense. Analogies will always fail, which is why "preferred form for making modifications" is the rod we use, not vague attempts at drawing analogies between completely different development processes.

> They are built from the source material supplied to an algorithm. Weights are data, not code.

As Lispers know well, code is data and data is code. You can't draw a line in the sand and definitively say that on this side of the line is just code and on that side is just data.

In terms of how they behave, weights function as code that is executed by an interpreter that we call an inference engine.
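A minimal sketch of that framing: the same few lines of "interpreter" code execute whatever program a given set of weights encodes (all names and shapes here are illustrative):

```python
# Weights as code, inference engine as interpreter: one interpreter,
# different weight "programs", different behavior.
import numpy as np

def interpret(weights, x):
    """A tiny two-layer MLP 'interpreter' executing the program in weights."""
    W1, b1, W2, b2 = weights
    h = np.maximum(0, x @ W1 + b1)  # ReLU layer
    return h @ W2 + b2

rng = np.random.default_rng(0)
program_a = (rng.normal(size=(4, 8)), np.zeros(8),
             rng.normal(size=(8, 2)), np.zeros(2))
program_b = (rng.normal(size=(4, 8)), np.zeros(8),
             rng.normal(size=(8, 2)), np.zeros(2))

x = rng.normal(size=(1, 4))
print(interpret(program_a, x))  # same interpreter...
print(interpret(program_b, x))  # ...different program, different output
```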

advael

Lots of people like that analogy because it's extremely self-congratulatory. It also makes no fucking sense

Despite the fact that people keep insisting on the buzzword "AI" to describe these large neural networks, they are more succinctly defined as approximate computer programs. The means by which we create them is through a relatively standardized family of statistical modeling algorithms paired with a dataset they are meant to emulate in their output

A computer program that's specified in logic is already a usable representation that lets you understand every aspect of the code's functioning in its entirety, even if some of it is hard to follow. You don't need to consult the original programmer at all, let alone read their mind

In contrast, a function approximated in the manner described needs the training data to be replicated or understood, and the data is in fact necessary even to assess whether the model is cheating at the benchmarks its creators measure it against. The weights themselves are a functional approximation, not a functional description

For the purposes of the ethos of free and open source software, it is obvious that training data must be included. However, this argument is also deployed in various other places, like intellectual property disputes, and is equally stupid there. Just because we use the term "learning" to describe these systems doesn't mean it makes sense for the law to treat them as people. It is both nonsensical and harmful to say that no human can be held responsible for what an "AI" model does, but that somehow they are "just like people learning from experience" when it benefits tech companies to believe that

SOLAR_FIELDS

Is it sufficient to say something is open if it can be reproduced with no external dependencies? If it costs X gazillion dollars to reproduce it, that feels irrelevant to some extent. If it is not reproducible, then it is not open. If it is reproducible, then it is open. Probably there’s some argument to be made here that it’s not actually open if some random dev can’t reproduce it on their own machine over a weekend, but I honestly don’t buy that argument in this realm.

lolinder

> If it is not reproducible, then it is not open. If it is reproducible, then it is open.

You're applying reproducibility unevenly, though.

The Linux kernel source code cannot feasibly be reproduced, but it can be copied and modified. The Mistral weights cannot feasibly be reproduced, but they can be copied and modified. Why is the kernel code open source while the Mistral weights are not?

Reproducibility is clearly not the deciding factor.

seba_dos1

> The entire point of FOSS is for a few experts to do all that work once and then we all can share and modify the output of those years of work and millions of dollars invested as we see fit, without having to waste all that time and money doing it over again.

The entire point of FOSS is to preserve user freedom. Avoiding pointless waste of repeated work is a side effect of applying that freedom.

It would feel entirely on point for something that requires ungodly amounts of money and resources before you can even begin to exercise your freedoms not to be considered FOSS, even if that aspect isn't covered by the currently accepted definitions.

lolinder

Nit: what you're describing is Free Software as put forward by the FSF, not Open Source as put forward by the OSI.

I realize I'm the one who used the combo acronym first, but this is a discussion about the OSI, which exists to champion the cynical company-centric version of the movement, and for that version my description is accurate.

evoke4908

> I think a far more reasonable analogy, to the extent that any are reasonable, is that the training data is all the stuff that the developers of the FOSS software ever learned, and the thousands of computer-hours spent on training are the thousands of man-hours spent coding.

I think this is a decent point. Is your FOSS project actually open source if your 3D assets were made in Fusion or Adobe?

Similarly, how open is a hardware project if you post only finalized STLs? What about with and without Fusion source files?

You can still replicate the project. You can even do relatively minor edits to the STL. Is that open or not?

dahart

> The entire point of FOSS is for a few experts to do all that work once and then we all can share and modify the output of those years of work and millions of dollars invested as we see fit, without having to waste all that time and money doing it over again.

Really? Hmm, yeah, maybe you're right, but said that way it somehow starts to seem a little disappointing and depressing. Maybe I'm reading it differently than you intended. I always considered the point of FOSS to be about the freedoms to use, study, customize, and share software, like to become an expert, not to avoid becoming one. But if the ultimate goal of all that is just a big global application of DRY, so that most people rely on the corpus without having to learn as much, then I feel that is in a way antithetical to open source, and it could have a big downside or might end up being a net negative. But I dunno…

nextaccountic

The source is really the training data plus all the code required to train the model. I might not have the resources to "compile" it, and "compilation" is not deterministic, but those are technical details.

You could have a programming language whose compiler is a superoptimizer that's very slow and is also stochastic, and it would amount to the same thing in practice.

a2128

The usefulness of the data here is that you can retrain the model after making changes to its architecture, e.g. to see if it works better with a different activation function. Of course this is most useful for models small enough that you could train them within a few days on a consumer GPU. When it comes to LLMs, only the richest companies have adequate resources to retrain.
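For instance, a change as small as the following already invalidates the shipped weights (a sketch, assuming PyTorch):

```python
# An architecture tweak that forces retraining: swapping the activation.
# Without the dataset, the experiment ends here.
import torch.nn as nn

def make_model(act):
    return nn.Sequential(nn.Linear(64, 128), act(), nn.Linear(128, 10))

relu_model = make_model(nn.ReLU)  # shape-compatible with released weights
gelu_model = make_model(nn.GELU)  # new variant: must be trained from scratch
# You could copy the released weights into gelu_model's layers, but they were
# never trained with GELU, so comparing the variants requires the data.
```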

samj

The OSI apparently doesn't have the mandate from its members to even work on this, let alone approve it.

The community is starting to regroup at https://discuss.opensourcedefinition.org because the OSI's own forums are now heavily censored.

I encourage you to join the discussion about the future of Open Source, the first option being to keep everything as is.

justinclift

For reference, this is the OSI Forum mentioned: https://discuss.opensource.org

Didn't personally know they even had one. ;)

scrollaway

Heh... HN has always been full of massive proponents of the OSI, with people staunchly claiming any software under a license that isn't OSI-approved isn't 'real open source'.

Now we're seeing that maybe putting all that trust and responsibility in one entity wasn't such a great idea.

opan

We still have the FSF and free software, both predating "open source" and the OSI.

seba_dos1

OSD is widely accepted in the community and I don't expect that to change regardless of what happens with AI definitions.

Plus we still have FSF's definition and DFSG.

jart

OSI must defend the open source trademark. Otherwise the community loses everything.

The legal system in the US doesn't provide them any other options but to act.

tzs

They don’t have a US trademark on “open source”. Their trademarks are on “open source initiative” and “open source initiative approved license”.

andrewmcwatters

Hahaha… very open. Yeah, no one saw this coming.

blogmxc

OSI sponsors include Meta, Microsoft, Salesforce and many others. It would seem unlikely that they'd demand the training data to be free and available.

Well, another org is getting directors' salaries while open source writers get nothing.

dokyun

This is why I'd wait for the FSF to deliver their statement before taking anything OSI comes out with seriously.

JoshTriplett

The FSF delivering a statement on AI will have zero effect, no matter what position they take.

dokyun

As programs that utilize AI continue to become more prevalent, the concern for their freedom is going to become very important. It might require a new license, like a new version or variant of the GPL. In any case I believe the FSF is going to continue to campaign for the ethical freedom of these new classes of software, even if it requires new insight into what it means for them to be free, as they have done before. The FSF is also a much larger and more vocal organization than OSI is, even without the latter's corporate monarc--I mean, monetizers.


whitehexagon

>It would seem unlikely that they'd demand the training data to be free and available.

I wonder who has legal liability for the closed-data generated weights and some of the rubbish they spew out, since users will be unable to change the source-data inputs and will only be able to tweak these compiled-model outputs.

Is such tweaking analogous to having a car resprayed, after which the manufacturer washes its hands of any liability over design safety?

looneysquash

The trained model is object code. Think of it as Java byte code.

You have some sort of engine that runs the model. That's like the JVM, and the JIT.

And you have the program that takes the training data and trains the model. That's your compiler, your javac, your Makefile and your make.

And you have the training data itself, that's your source code.

Each of the above pieces has its own source code. And the training set is also source code.

All those pieces have to be open to have a fully open system.

If only the training data is open, that's like having the source, but the compiler is proprietary.

If everything but the training set is open, well, that's like giving me gcc and calling it Microsoft Word.

AlienRobot

If I remember correctly, Stallman's whole point about FLOSS was that consumers were beholden to developers who monopolized the means to produce binaries.

If I can't reproduce the model, I'm beholden to whoever trained it.

>"If you're explaining, you're losing."

That is an interesting point, but isn't this the same organization that makes "open source" vs. "source available" a topic? e.g. why Winamp wouldn't be open source?

I don't think you can even call a trained AI model "source available." To me the "source" is the training data. The model is as much of a binary as machine code. It doesn't even feel right to license it like code under the GPL. I think it should get the same license you would give to fractal art released to the public, e.g. CC.

alwayslikethis

It's not clear that copyright applies to model weights at all, given that they are generated by a computer and aren't really a creative work. They are closer to a quantitative description of the underlying data, like a dictionary or a word-frequency list.

AlienRobot

That's interesting. I wonder what will protect these models then, if anything? NDAs? Or maybe the model can be a trade secret or patented?

I think dictionaries are copyrightable, however?

klabb3

I think this makes the most sense. The only meaningful part of the term is whether or not you can hack on it, without permission from (or even coordination with) owners, founders or creators.

Heck, a regular old binary is much less opaque than “open” weights. You can at least run it through a disassembler and slowly, dreadfully, figure out how it works. Just look at the game emulator community.

For open weight AI models, is there anything close to that?

AlienRobot

It's a bit impressive that AI managed to produce something blobbier than a binary blob. AI is the blobbiest blob, so blobby that it's a black box to even its own authors.

I wonder how anyone could be an open source enthusiast, distrusting source code they can't verify, and yet an LLM enthusiast, trusting a huge configuration file that can't be debugged.

Granted, I don't have a lot of knowledge about LLMs. From what I know, there are some tools that can tell you the confidence/stickiness of certain parts of the generated output, e.g. "for a prompt like this, this word WILL appear almost every time, while this other word will almost never appear." I think there was something similar for image generation that could tell which areas of an image stemmed from which terms in the prompt. I have no idea how this information is derived, but it doesn't feel like there are many end-user tools for this. Maybe the AI researchers have access to more powerful tooling.
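For what it's worth, the raw per-token numbers are accessible if you have the weights; a sketch of reading next-token "confidence" from a model's logits (assuming the Hugging Face transformers API, with "gpt2" as a placeholder):

```python
# Per-token "confidence": the probability the model assigns to each candidate
# next token is just a softmax over its output logits.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits        # (1, seq_len, vocab_size)

probs = torch.softmax(logits[0, -1], dim=-1)  # next-token distribution
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r}: {p.item():.3f}")
```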

For source code I can just open a file in notepad.exe to inspect it. I think that's the standard.

If, for example, a computer program were written in an esoteric language that used image files instead of text files as source code, I don't think you could consider that program "open source" unless the image format it used was also open, e.g. PNG. If it were some proprietary format, people couldn't create tools for it, so they couldn't actually do anything with the image blob, which restricts their freedoms.

wmf

On one hand if you require people to provide data they just won't. People will never provide the data because it's full of smoking guns.

On the other hand if the data isn't open you should probably use the term open weights not open source. They're so close.

samj

Yes, and Open Source started out with a much smaller set of software that has since grown exponentially thanks to the meaningful Open Source Definition.

We risk denying AI the same opportunity to grow in an open direction, and by our own hand. Massive own goal.

bjornsing

> Yes, and Open Source started out with a much smaller set of software that has since grown exponentially thanks to the meaningful Open Source Definition.

I thought it was thanks to a lot of software developers’ uncompensated labor. Silly me.

mistrial9

> ... require people to provide data they just won't. People will never provide the data ...

the word "people" is so striking here... teams and companies, corporations and governments.. how can the cast of characters be so completely missed. An extreme opposite to a far previous era where one person could only be their group member. Vocabulary has to evolve in deliberations.

skissane

> On one hand if you require people to provide data they just won't. People will never provide the data because it's full of smoking guns.

Tangential, but I wonder how well an AI performs when trained on genuine human data, versus a synthetic data set of AI-generated texts.

If performance when trained on the synthetic data set is close to that when trained on the original human dataset – this could be a good way to "launder" the original training data and reduce any potential legal issues with it.
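The mechanics are simple enough to sketch (a hypothetical setup using the Hugging Face API; the point is the data flow, not a working recipe):

```python
# The "laundering" pipeline: sample synthetic text from a teacher model,
# keep only the samples, and train a student on those alone.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")            # placeholder teacher
teacher = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = tok("Write a short encyclopedia entry:", return_tensors="pt")
out = teacher.generate(**prompt, max_new_tokens=64, do_sample=True)
synthetic_corpus = [tok.decode(out[0], skip_special_tokens=True)]

# Repeat at scale, then pretrain/fine-tune a student on synthetic_corpus:
# the original human data never touches the student directly.
```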

jart

That's basically what companies like Mistral do. Many open source models are trained on OpenAI API request output. That's how a couple guys in Europe are able to build something nearly as good as GPT4 almost overnight and license it Apache 2.0. If you want the training data to be both human and open source, then there aren't many good options besides things like https://en.wikipedia.org/wiki/The_Pile_%28dataset%29 which has Hacker News, Wikipedia, the Enron Emails, GitHub, arXiv, etc.

dartos

I believe there are several papers which show that synthetic data isn’t as good as real data.

It makes sense, as any bias in the model-generated synthetic data will just get magnified as models are continuously trained on that biased data.

abecedarius

The side note on hidden backdoors links to a paper that apparently goes beyond the usual point that reverse engineering is harder without source:

> We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate "backdoor key", the mechanism is hidden and cannot be detected by any computationally-bounded observer.

(I didn't read the paper. The ordinary version of this point is already compelling imo, given the current state of the art of reverse-engineering large models.)

Terr_

Reminds me of a saying usually about "bugs" but adapted from this bit from Tony Hoare:

> I conclude that there are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies.

My impression is that LLMs are very much the latter-case, with respect to unwanted behaviors. You can't audit them, you can't secure them against malicious inputs, and whatever limited steering we have over the LSD-trip-generator involves a lot of arbitrary trial and error and hoping our luck holds.

JumpCrisscross

> After long deliberation and co-design sessions we have concluded that defining training data as a benefit, not a requirement, is the best way to go

Huh, then this will be a useful definition.

The FSF position is untenable. Sure, it’s philosophically pure. But given a choice between a practical definition and a pedantically-correct but useless one, people will use the former. Irrespective of what some organisation claims.

> would have been better, he said, if the OSI had not tried to "bend and reshape a decades old definition" and instead had tried to craft something from a clean slate

Not how language works.

blackeyeblitzar

I don’t understand why the “practical” reality requires using the phrase “open source” then. It’s not open source. That label is false and fraudulent if you can’t produce the same artifact or approximately the same artifact. The data is part of the source for models.

JumpCrisscross

> don’t understand why the “practical” reality requires using the phrase “open source” then. It’s not open source. That label is false and fraudulent

Natural languages are parsimonious; they reuse related words. In this case, the closest practical analogy to open-source software has the lower barrier to entry. Hence, it will win.

There is no room for a definition of open source that requires the data to be available. In software, too, this problem is solved by reserving "free software" for the stricter definition. The practical competition is between Facebook's model-available-with-restrictions definition and this one.

SrslyJosh

Indeed it will be a useful definition, as this comment noted above: https://news.ycombinator.com/item?id=41951573

JumpCrisscross

Sure. Again, there is a pedantic argument with zero broad merit. And there is a practical one. No group owns words; even trademarks fight an uphill battle. If you want to convince people to use your definition, you have to compromise and make it useful. Precisely-defined useless terminology is, by definition, useless; it’s efficient to replace that word, especially if in common use, with something practical.

tourmalinetaco

It is in no way useful for the advancement of MLMs. Training data is literally the closest thing to source code MLMs have, and to say it's a "benefit" rather than a requirement only allows the moat to be maintained. The OSI doesn't care about the creation of truly free models, only about what benefits companies like Facebook or IBM that release model weights but don't open up the training data.

swyx

I like this style of article, with extensive citing of original sources.

previously on: https://news.ycombinator.com/item?id=41791426

It's really interesting to contrast this "outsider" definition of open AI with people with real money at stake: https://news.ycombinator.com/item?id=41046773

didibus

> It's really interesting to contrast this "outsider" definition of open AI with people with real money at stake

I guess this is a question of what we want out of "open source". Companies want to make money. Their asset is data, access to customers, hardware and integration. They want to "open source" models, so that other people improve their models for free, and then they can take them back, and sell them, or build something profitable using them.

The idea is that, like with other software, eventually, the open source version becomes the best, or just as good as the commercial ones, and companies that build on top no longer have to pay for those, and can use the open source ones.

But if what you want out of "open source" is open knowledge, peeking at how something is built, and being able to take that and fork it for your own. Well, you kind of need the data. And your goal in this case is more freedom, using things that you have full access to inspect, alter, repair, modify, etc.

To me, both are valid, we just need a name for one and a name for the other, and then we can clearly filter for what we are looking for.

andrewmcwatters

I’m sure this will be controversial for some reason, but I think we should mostly reject the OSI’s definitions of “open” anything and leave that to the engineering public.

I don’t need a board to tell me what’s open.

And in the case of AI, if I can’t train the model from source materials with public source code and end up with the same weights, then it’s not open.

I don’t need people to tell me that.

"OSI approved" this and that has turned into a Ministry-of-Magic-approved-thinking situation that feels gross to me.

didibus

I agree. If it's open source, surely I can at least "compile" it myself. If the data is missing, I can't do that.

We'll end up with like 5 versions of the same "open source" model, all performing differently because they're all built with their own dataset. And yet, none of those will be considered a fork lol?

I don't know what the obsession is either. If you don't want to give others permission to use and modify everything that was used to build the program, why do you want to trick me into thinking you do, and still call it open source?

rettichschnidi

> If you don't want to give others permission to use and modify everything that was used to build the program, why are you wanting to trick me in thinking you are, and still calling it open source?

Because there is an exemption clause in the EU AI Act for free and open source AI.

seba_dos1

...which doesn't rely on any OSI decisions.

strangecasts

> And in the case of AI, if I can’t train the model from source materials with public source code and end up with the same weights, then it’s not open.

Making training exactly reproducible locks off a lot of optimizations; you are practically never going to get bit-for-bit reproducibility for nontrivial models.
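To give a sense of what exact reproducibility asks for, here is a sketch of the knobs involved in PyTorch; even with all of them set, a different GPU, driver, or cluster size can still change the bits:

```python
# What "deterministic training" demands in PyTorch; each line trades away
# performance, and none of it survives a hardware or driver change.
import os, random
import numpy as np
import torch

os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # required by some CUDA ops
random.seed(0)
np.random.seed(0)
torch.manual_seed(0)
torch.use_deterministic_algorithms(True)  # raise on nondeterministic kernels
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False    # no run-dependent autotuning
```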

samj

Nobody's asking for exact reproducibility — if the source code produces the software and it's appropriately licensed then it's Open Source.

Similarly, if you run the scripts and it produces the model then it's Open Source that happens to be AI.

To quote Bruce Perens (definition author): the training data IS the source code. Not a perfect analogy but better than a recipe calling for unicorn horns (e.g., FB/IG social graphs) and other toxic candy (e.g., NYT articles that will get users sued).

didibus

That's kind of true for normal programs as well, depending on the compiler you use and whether it has non-deterministic processes in its compilation. But still, it's about being able to reproduce the same build process and get a true realization of the program; even if it's not bit-for-bit identical, it's the same intended program.

rockskon

To be fair, OSI approval also deters marketing teams from watering down the definition of open source into worthless feelgood slop.

tourmalinetaco

That’s already what’s happened though, even with MLMs. Without training data we’re back to modifying a binary file without the original source.

JumpCrisscross

> if I can’t train the model from source materials with public source code and end up with the same weights, then it’s not open

This is the new cracker/hacker, GIF pronunciation, crypto(currency)/crypto(graphy) molehill. Like sure, nobody forces you to recognise any word. But the common usage already precludes open training data—that will only get more ensconced as more contracts and jurisdictions embrace it.

mistrial9

In historical warfare, Roman soldiers easily and brutally defeated brave, individualist, social opponents on the battlefield, and arguably in markets afterwards. It is a sad and essential lesson that applies to modern situations.

In marketing terms, a simple market communication, consistently and diligently applied, in varied contexts and over time, can and usually will take hold despite the untold number of individuals who shake their fists at the sky or cut with clever and cruel words that few hear, IMHO.

OSI branding and market communications seem very likely to me to be effective in the future, even if the content is exactly what is being objected to here so vehemently.

aithrowawaycomm

What I find frustrating is that this isn't just about pedantry - you can't meaningfully audit an "open-source" model for security or reliability problems if you don't know what's in the training data. I believe that should be the "know it when I see it" test for open source: has enough information been released for a competent programmer (or team) to understand how the software actually works?

I understand the analogy to other types of critical data often not included in open-source distros (e.g. Quake III's source is GPL but its resources like textures are not, as mentioned in the article). The distinction is that in these cases the data does not clarify anything about the functioning of the engine, nor does its absence obscure anything. So by my earlier smell test it makes sense to say Quake III is open source.

But open-sourcing a transformer ANN without the training data tells us almost nothing about the internal functioning of the software. The exact same source code might be a medical diagnosis machine, or a simple translator. It does not pass my smell test to say this counts as "open source." It makes more sense to say that ANNs are data-as-code programming paradigms, glued together by a bit of Python. An analogy would be if id released its build scripts and announced Quake III was open-source, but claimed the .cpp and .h files were proprietary data. The batch scripts tell you a lot of useful info - maybe even that Q3 has a client-server architecture - but they don't tell you that the game is an FPS, let alone the tricks and foibles in its renderer.

lolinder

> I believe that should be the "know it when I see it" test for open-source: has enough information been released for a competent programmer (or team) to understand the how the software actually works?

Training data simply does not help you here. Our existing architectures are not explainable or auditable in any meaningful way, training data or no training data.

samj

That's why the open source analysts at RedMonk now "do not believe the term open source can or should be extended into the AI world." https://redmonk.com/sogrady/2024/10/22/from-open-source-to-a...

I don't necessarily agree and suggest the Open Source Definition could be extended to cover data in general (media, databases, and yes, models) with a single sentence, but the lowest risk option is to not touch something that has worked well for a quarter century.

The community is starting to regroup and discuss possible next steps over at https://discuss.opensourcedefinition.org

aithrowawaycomm

I don't think your comment is really true; LLM providers and researchers have been a bit too eager to claim their software is mystically complex. Anthropic's research is shedding light on interpretability, there has been good work done on the computational complexity side, and I am quite confident that the issue is LLMs' newness and complexity, not that the problem is actually intractable (or specifically "more intractable" than other hopelessly complex software like Facebook or Windows).

To the extent the problem is intractable, I think it mostly reflects that LLMs have an enormous amount of training data and do an enormous amount of things. But for a given specific problem the training data can tell you a lot (a naive contamination check is sketched after this list):

- whether there is test contamination with respect to LLM benchmarks or other assessments of performance

- whether there's any CSAM, racist rants, or other things you don't want

- whether LLM weakness in a certain domain is due to an absence of data or if there's a more serious issue

- whether LLM strength in a domain is due to unusually large amounts of synthetic training data and hence might not generalize very reliably in production (this is distinct from test contamination - it is issues like "the LLM is great at multiplication until you get to 8 digits, and after 12 digits it's useless")

- investigating oddness like that LeetMagikarp (or whatever) glitch in ChatGPT
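Several of these checks are mechanically simple once you have the corpus. For example, a naive contamination scan (my sketch; real pipelines use n-gram overlap or fuzzy matching):

```python
# Naive benchmark-contamination scan: look for test items verbatim in the
# training corpus. Crude, but impossible to run at all without the data.
def contaminated(benchmark_items, corpus_docs):
    hits = []
    for item in benchmark_items:
        for i, doc in enumerate(corpus_docs):
            if item in doc:
                hits.append((item, i))
    return hits

corpus = ["...a training document containing 12 x 12 = 144...", "another doc"]
benchmark = ["12 x 12 = 144"]
print(contaminated(benchmark, corpus))  # [('12 x 12 = 144', 0)]
```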

blackeyeblitzar

But training data can itself be examined for biases, and the curation of data also brings in biases. Auditing the software this way doesn't require explainability in the sense you're talking about.

Legend2440

Does "open-source" even make sense as a category for AI models? There isn't really a source code in the traditional sense.

Barrin92

I had the same thought. "Source code" is a human-readable and modifiable set of instructions that describe the execution of a program. There are obviously parts of an AI system that include literal code, usually a bunch of Python scripts or whatever, to interact with and build the thing, but most of it is on the one hand data, and on the other an artifact, the AI model, and neither is really source code.

If you want to talk about the openness and accessibility of these systems I'd just ditch the "source" part and create some new criteria for what makes an AI model open.

atq2119

There's code for training and inference that could be open-source. For the weights, I agree that open-source doesn't make sense as a category.

They're really a kind of database. Perhaps a better way to think about it is in terms of "commons". Consider how creative commons licenses are explicit about requirements like attribution, noncommercial, share-alike, etc.; that feels like a useful model for talking about weights.

mistrial9

I have heard government people say "the data is open-source," meaning there are public, no-cost locations to copy the data files, e.g. CSV or other formats.

paulddraper

Yeah, it's like an open-source jacket.

I don't really know what you're referring to....

echoangle

An Open Source jacket actually makes more sense to me than an open source LLM. I generally understand hardware to be open source when all design files are available (for example CAD models of a case and KiCad files for a PCB). If the patterns of a jacket were available in an editable standard-format file, you could argue that’s an open source jacket.