Natural language processing

The idea is [[triage]]. The application scans the agreement and, using its [[natural language processing]], will pick up the policy points, compare them with the [[playbook]] and highlight them so the poor benighted lawyer can quickly deal with the points and respond to the negotiation. The software [[vendor]] proudly points to a comparison of their software against human equivalents in picking up policy points in a sample of agreements. The software got 94% of the points. The [[meatware]] only got 67%. The software was quicker. And — chuckle — it needed less coffee. Headline: ''dumb machine beats skilled human''.
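The scan-and-flag workflow described above can be made concrete with a short sketch. The playbook rules and clause text here are entirely hypothetical, not any real vendor’s engine; the thing to notice is that every rule match gets flagged for the lawyer, with no judgment about whether the point matters:

```python
# A minimal sketch of the triage pass: match each clause against a
# (hypothetical) two-rule playbook and flag every hit, important or not.
PLAYBOOK = {
    "liability": "liability must be capped",
    "governing law": "must be English law",  # a classic nice-to-have
}

def triage(clauses):
    """Return (clause, policy point) pairs for the meatware to deal with."""
    hits = []
    for clause in clauses:
        for topic, policy in PLAYBOOK.items():
            if topic in clause.lower():
                hits.append((clause, policy))
    return hits

agreement = [
    "Liability: each party's liability hereunder is unlimited.",
    "Governing Law: this Agreement is governed by New York law.",
    "Notices: notices must be delivered in writing.",
]
for clause, policy in triage(agreement):
    print(f"FLAG: {clause!r} -> playbook says: {policy}")
```

The first two clauses are flagged; the third sails through. Whether either flag is worth negotiating is exactly the call the machine cannot make.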


But this may highlight a shortfall, not a feature, in the application. The day a [[palaver]] of [[risk controller]]s set their [[playbook]] parameters at their ''exact'' hard walkaway point is the day [[Good luck, Mr. Gorsky|Mr. Gorsky]] gets to the moon. So, not everything the [[playbook]] ''says'' is a problem really ''is'' a problem. Much of the playbook will be filled with [[nice-to-have]]s and other paranoid ramblings of a [[chicken licken]] somewhere in a controller group. The very value a lawyer brings is to see a point and say, “yeah, that’s fine, jog on, nothing to see here”. That is the one thing a natural language-processing [[AI]] can’t do: the [[AI]] can’t make that value judgment and will recommend that you negotiate ''all'' playbook points, regardless of how stupid they are.<ref>True: this isn’t the AI’s fault, but it ''is'' inevitable, and it ''is'' the AI’s limitation.</ref> Now if the person operating the [[AI]] is an experienced lawyer, she can override the [[AI]]’s fecklessness, and just ignore it.  
 
But the point here is to [[Downgrading - waste article|down-skill]] and save costs, remember. The operator will ''not'' be an experienced lawyer. It will be an out-of-work actor in downtown Bratislava who is juggling some [[ISDA]] work with a bar job and an Uber gig. He will be possessed of little common sense and no legal training, and will neither know nor care for “the sensible thing to do”. He will follow the machine’s recommendations slavishly — he is, after all, its slave.  
 
Hence: a wildly elongated, pointless negotiation that will waste time and aggravate the client.


[[AI]] can only follow instructions. The [[meatware]] can make a call that the instructions are stupid.  


===Division of labour===
And besides, having the [[AI]] spot the issues and the [[meatware]] fix the drafting gets the [[triage]] squarely backwards. Picking up the points — and recognising the large stupid tracts in the [[playbook]]<ref>Much of the [[playbook]] will be non-essential "perfect world" recommendations (“[[nice-to-have]]s”) which an experienced negotiator would quickly be able to wave through.</ref> — is the “high value work”. That is what the [[meatware]] should be doing. Fixing the drafting is the dreary detail. That is where you want your [[chatbot]]. But contextually amending human language — you know, ''actual'' “natural language processing” — is ''hard''. No {{t|AI}} we have yet seen can do it.  


===Did I miss something?===
And how comfortable can we really be that the AI ''has'' spotted everything? If we assume — colour me cynical — the “natural language processing” isn’t quite as sophisticated as its marketers would have you believe<ref>That it is a glorified key-word search, in other words.</ref> then it is a bit [[reckless]] to put your faith in the [[reg tech]]. Is there no human wordsmith who could fool the [[AI]]?<ref>I bet I could. It is hardly challenging to insert an [[indemnity]] which does not use the words “[[indemnity]]”, “[[hold harmless]]” or “[[reimbursement|reimburse]]”.</ref> What if there is an odious clause not anticipated by the [[playbook]]?<ref>Given how fantastically paranoid a gathering of [[risk controller]]s can be this seems a remote risk, I grant you, but risks are [[fractal]], remember. And [[emergent]] in unexpected ways. The [[collective noun]] for a group of [[risk controller]]s is a [[palaver]], by the way.</ref> If the meatware can’t wholly trust the AI to have identified '''all''' the salient points, the lawyer must ''still'' read the whole agreement to check. Ergo, no time or cost saving.
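The footnote’s boast — that a wordsmith could draft an indemnity around the trigger words — is easy to demonstrate if the “natural language processing” really is a glorified key-word search. A minimal sketch, with an entirely hypothetical trigger list:

```python
# A naive key-word "NLP" scanner: flags a clause only if it contains one
# of the playbook's trigger words (hypothetical list, not any real engine).
TRIGGERS = ["indemnity", "indemnif", "hold harmless", "reimburse"]

def flags_indemnity(clause: str) -> bool:
    text = clause.lower()
    return any(trigger in text for trigger in TRIGGERS)

# A clause using the magic words is caught...
caught = "Party A shall indemnify Party B against all losses."

# ...but the same obligation, drafted around the trigger words, sails through.
sneaky = ("Party A shall, on demand, make Party B whole for any loss, "
          "cost or expense Party B suffers in connection with this Agreement.")

print(flags_indemnity(caught))   # the scanner spots this one
print(flags_indemnity(sneaky))   # this one it does not
```

The second clause is an indemnity in all but name, and the scanner misses it — which is the point: if the meatware cannot rule that out, the meatware still has to read the whole thing.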


But this software is designed to facilitate "right-sourcing" the negotiation to cheaper (ergo less experienced) negotiators who will rely on the playbook as guidance, will not have the experience to make a commercial judgement unaided and will therefore be obliged either to [[escalate]], or to engage on a slew of [[nice-to-have]] but bottom-line unnecessary negotiation points with the counterparty. Neither is a good outcome. Again, an example of [[reg tech]] creating [[waste]] in a process where investment in experienced human personnel would avoid it.