Either way, the novelty soon palls: as we persevere we begin to see the magician’s wires. We get a sense of how the model goes about what it does. It has its familiar tropes, tics and persistent ways of doing things which aren’t quite what you have in mind. The piquant surprise at what it produces dampens with each go-round, eventually settling into an [[Entropy|entropic]] and vaguely dissatisfying quotidian.

In this way the appeal of iterating a targeted work product with a random pattern-matcher soon loses its lustre. The first couple of passes are great: they get you from zero to 0.5. But the marginal improvement in each following round diminishes, as the machine tends asymptotically towards its upper capability in producing what you had in mind, which we estimate, unscientifically, as about 75% of it.

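To put that unscientific intuition in toy form, here is a minimal sketch, assuming, purely for illustration, that the first pass gets you to 0.5, the model’s ceiling is 0.75, and each further round closes half of the remaining gap. None of these numbers is measured; they just show the shape of the curve.

<syntaxhighlight lang="python">
# Toy model of diminishing returns when iterating with an LLM.
# Assumptions (illustrative only): the first pass reaches 0.5 of the
# target, the model's ceiling is 0.75, and each further round closes
# half of the remaining gap to that ceiling.

CEILING = 0.75   # the text's unscientific estimate of the upper capability
RATE = 0.5       # fraction of the remaining gap closed per round

quality = 0.5    # the first couple of passes "get you from zero to 0.5"
for round_no in range(1, 9):
    quality += RATE * (CEILING - quality)
    print(f"round {round_no}: quality ~ {quality:.3f}")

# quality creeps towards 0.75 but never reaches it, and each round
# buys less improvement than the one before.
</syntaxhighlight>
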
Now, as [[generative AI]] improves towards 100 — assuming it does improve: there are some indications it may not; see below — that threshold may move, but it will never get to 100. In the meantime, as each successive round takes more time and bears less fruit, mortal enthusiasm and patience with the LLM will have long since waned: well before the [[Singularity]] arrives.

Legal language is, in [[James Carse]]’s sense, ''finite''. Literature is ''[[Finite and Infinite Games|infinite]]''.

Now: the punchline. Given how integral the reader and her cultural baggage are to the creative act in ''normal'' literature, we can see how, in that domain, a [[large learning model]], which spits out text ripe with possibilities, begging for someone to “construct” it, is a feasible model: to move from a model where ''most'' of the creative work is done by the reader to one where ''all'' of it is, is no great step.

There is enough bad human literature like that out there now that it is no great stretch to imagine doing without the human altogether. In that case, what does it matter what the text says, as long as it is coherent enough for an enterprising reader to make something out of it?

''But that does not work at all at all for legal language''. Legal language is code: it must say exactly what the parties require, nothing more and nothing less, and it must do so in a way that leaves nothing open to a later creative act of interpretation. We should regard legal drafting as closer to computer code than literature: a form of symbol processing where the meaning resides wholly within, and is fully limited by, the text.

But unlike computer code, legal language runs on an operating system that is not a closed logical system, and even the best-laid drafting can run amok. You can’t run it in a sandbox to see if it works.
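
For what “run it in a sandbox” means on the code side of the analogy, here is a hedged sketch: the function, its day-count convention and the clause are all invented for illustration. The point is only that the code can be executed against known inputs before anyone relies on it, while the clause is inert text until a dispute “runs” it.

<syntaxhighlight lang="python">
# Computer code can be exercised in a sandbox before anyone relies on it.
def simple_interest(principal: float, annual_rate: float, days: int) -> float:
    """Interest accrued over `days` on a 360-day-year basis (illustrative)."""
    return principal * annual_rate * days / 360

# A sandbox check: known input, known expected output.
assert abs(simple_interest(1_000_000, 0.05, 36) - 5_000.0) < 1e-9

# The corresponding legal drafting is just a string. There is no
# interpreter to run it against edge cases; it is only ever "executed"
# when a dispute arises, long after signing.
clause = (
    "Interest shall accrue on the outstanding principal at the Agreed Rate, "
    "calculated on the basis of a 360-day year."
)
</syntaxhighlight>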


====Meet the new boss —====