When we say, “fetch me a tennis racquet”, and the machine comes back with something more like a lacrosse stick, we are far more impressed than we would be had a human done the same thing. We would think the human a bit dim. But with [[generative AI]] we don’t, at first, even notice we are not getting what we asked for. We might think, “oh, that will do,” or perhaps, “ok, computer: try again, but make the basket bigger, the handle shorter, and tighten up the net.” We can iterate this way until we have what we want — or we could just use a conventional photo of a tennis racquet.

AI image generation famously struggles with hands, eyes and logical three-dimensional architecture. First impressions can be stunning, but a second look reveals an absurdist symphony. It is just as true of text prompts: on close inspection we can see the countless minute logical ''cul-de-sacs'' and two-footed hacks from which the clever story miraculously [[Emergence|emerge]]s. (To be sure, many human authors write in logical ''cul-de-sacs'' and two-footed hacks, but that is another story.)

Either way, the novelty soon palls and, as we persevere, we begin to see the magician’s wires. We get a sense of how the model goes about what it does: its familiar tropes and tics and persistent ways of doing things which are never quite what you have in mind. The piquant surprise at what it produces dampens at each go-round, eventually settling into an [[Entropy|entropic]] and vaguely dissatisfying quotidian.

In this way, the appeal of iterating a targeted work product with a random pattern-matcher soon loses its lustre. The first couple of passes are great: they get you from zero to 0.5. But the marginal improvement in each following round diminishes, as the machine approaches, asymptotically, the upper limit of its capability to produce what you had in mind, which we estimate, unscientifically, at about 75% of it.

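To put a number on that intuition, here is a toy sketch (illustrative only: the 0.75 ceiling is our unscientific estimate above, and the per-pass rate is invented for the example) in which each pass closes a fixed fraction of the remaining gap between the current draft and the best the machine can do:

<syntaxhighlight lang="python">
# Toy model of diminishing returns: each pass closes a fixed fraction of the
# remaining gap between the current draft and the machine's ceiling.
# Both numbers are assumptions, not measurements.

CEILING = 0.75  # the essay's unscientific estimate of the machine's upper capability
RATE = 0.65     # invented for illustration: share of the remaining gap closed per pass

quality = 0.0
for round_no in range(1, 9):
    gain = RATE * (CEILING - quality)  # marginal improvement this round
    quality += gain
    print(f"round {round_no}: quality {quality:.3f} (gain {gain:.3f})")
</syntaxhighlight>

On these assumptions the first pass gets you from zero to roughly 0.5; by the fifth, the gains are down in the third decimal place while the ceiling stays out of reach. That is the asymptote described above.
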
Now, as [[generative AI]] improves towards 100% (assuming it does improve: there are some indications it may not; see below), that threshold may move, but it will never get to 100%. In the meantime, as each successive round takes more time and bears less fruit, mortal enthusiasm and patience with the LLM will have long since waned: well before the [[Singularity]] arrives.

And many of the improvements we see will be in the [[meatware]]: we refine and elaborate our queries, we learn better how to frame them, and “prompt-engineering” becomes the skill, rather than the dumb, parallel pattern-matching process that responds to it. Ours is the skill going in, and ours is the skill construing the output. What the machine does is the boring bit.