
Look, it’s a reasonable approximation of Theseus’s Ship, and that’s all we have budget for, okay?

Via Baldur Bjarnason, Gary Marcus notices that generative AI systems like DALL-E have a plagiarism problem:

Systems like DALL-E and ChatGPT are essentially black boxes. GenAI systems don’t give attribution to source materials because at least as constituted now, they can’t. (Some companies are researching how to do this sort of thing, but I am aware of no compelling solution thus far.)

Unless and until somebody can invent a new architecture that can reliably track provenance of generative text and/or generative images, infringement – often not at the request of the user – will continue.

A good system should give the user a manifest of sources; current systems don’t.

I will admit that my initial reaction to the examples Marcus cites in his post was that if someone is entering the text prompt “animated sponge” into DALL-E, they are probably thinking of SpongeBob SquarePants in the first place, and it is not super-surprising that the images that come back are all SpongeBob SquarePants.

That is, of course, utterly beside the point.

Marcus’s beef here (and I think he is right) is not necessarily that these systems are returning results based on copyrighted work, but that this stuff is being represented as original and the original creators (or rights holders, as the case may be) are being given no credit. In the case of SpongeBob SquarePants or “golden droid from classic sci-fi movie,” it is obvious what is happening because those are widely recognizable cultural icons; for the countless other writers and artists whose work is being ripped off by these companies and the LLM-based products they are building, it is not obvious.

Furthermore, I would suggest that these results are not even that impressive, and that they are actually pretty troubling.

Since Star Trek analogies seem to be the only way anyone can understand this stuff, I’ll give that a try.

We are to believe that the ship’s transporter disassembles you into your constituent atoms, sends the information about all of that over to the destination, and then reassembles you there. Or possibly builds you from scratch there—it’s never made entirely clear, as the entire idea of transporters was originally a means of avoiding the cost of having to show the Enterprise landing on and taking off from different planets every week.

The whole concept is already pretty metaphysically and ethically fraught, but let’s put all that aside for the moment. Suppose instead that the Federation found that recreating exact copies of people was costly and impractical. Instead, they make a list of all the people in Starfleet, feed that list into the computer, and run a bunch of pattern analysis on it. Then, when you step into the transporter and get broken down into your constituent atoms, what steps out on the other end is not you or even a copy of you, but rather a reasonably close representation of you based on the patterns the computer found in the list it has of all members of Starfleet.

Does that seem cool and okay? Because that is what the large language model systems are doing.

And if we want to stick with the Star Trek analogy, it wouldn’t even be the Federation that has come up with this scheme. It would be private companies to whom Starfleet has outsourced the contract for developing transporters for its starships, and these companies would all have their own interests and motives for developing transporter technology. None of them actually cares whether it’s you who steps out the other end of the transporter—the system is not capable of that, and it’s not even the goal of the whole thing.

