Legal language is ''finite''. Literature is ''[[Finite and Infinite Games|infinite]]''.


If law is literature, then it is unique in that it is the only genre to favour certainty over possibility. Unique in that it is the one in which we must still grant sole jurisdiction to authorial intent. The problem is, a large language model has no authorial intent.
Now: the punchline. Given how important the reader and her cultural baggage are to the creative act in normal literature, we can see how a large learning model is a feasible model in that domain: to move from a model where ''most'' of the creative work is done by the reader to one where ''all'' of it is, is no great step. Indeed, there is enough mediocre literature out there that meets this description already, only written by humans: so formulaic, so slavishly aping hackneyed archetypes, that it is no great stretch to do without the writer altogether. In that case, what does it matter what the text says, as long as it is coherent enough for an enterprising reader to make something out of it?
''But that does not work at all for legal language''. The language must say exactly what the parties require: nothing more, nothing less, and it must do so in a way that leaves nothing open to a later creative act of interpretation. We should regard legal drafting as closer to computer code than literature: a form of symbol processing where the meaning resides wholly within, and is fully limited by, the text. But unlike computer code, you can’t run it in a sandbox to see if it works.


====Meet the new boss —====