Natural language processing

The idea is [[triage]]. The application scans the agreement and, using its [[natural language processing]], will pick up the policy points, compare them with the [[playbook]] and highlight them so the poor benighted lawyer can quickly deal with the points and respond to the negotiation. The software [[vendor]] proudly points to a comparison of their software against human equivalents in picking up policy points in a sample of agreements. The software got 94% of the points. The [[meatware]] only got 67%. The software was quicker. And — chuckle — it needed less coffee. Headline: ''dumb machine beats skilled human''.  


But this may highlight a shortfall, not a feature, in the application. The day a [[palaver]] of [[risk controller]]s set their [[playbook]] parameters at their ''exact'' hard walkaway point is the day [[Good luck, Mr. Gorsky|Mr. Gorsky]] gets to the moon. So, not everything the [[playbook]] ''says'' is a problem really ''is'' a problem. Much of a playbook will be filled with [[nice-to-have]]s and other paranoid ramblings of a [[chicken licken]] somewhere in a controller group. The very value a lawyer brings is to see a point and say, “yeah, that’s fine, jog on, nothing to see here”. That is the one thing a natural language-processing [[AI]] can’t do: the [[AI]] can’t make that value judgment and will recommend that you negotiate ''all'' playbook points, regardless of how stupid they are.<ref>True: this isn’t the AI’s fault, but it ''is'' inevitable, and it ''is'' the AI’s limitation.</ref> Now if the person operating the [[AI]] is an experienced lawyer, she can override the [[AI]]’s fecklessness, and just ignore it. But the point here is to down-skill and save costs, remember, so the operator will not be an experienced lawyer. It will be an out-of-work actor in downtown Bratislava who is juggling some ISDA work with a bar job and an Uber gig. He will neither know nor care for the sensible thing to do, and will follow the machine’s recommendations by rote. Hence: a wildly elongated, pointless negotiation that will waste time and aggravate the client.
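For the sceptical technologist, a minimal sketch of the point, in Python. Everything here (the <code>PlaybookRule</code> class, the <code>triage</code> function, the toy playbook itself) is hypothetical, not any real vendor’s API: the sketch only illustrates that, for the machine, a rule either matches or it doesn’t, and severity never enters the decision.

<syntaxhighlight lang="python">
# Hypothetical sketch of the triage logic described above: scan each
# clause, compare it with the playbook, and flag every deviation. Note
# what is missing: any judgment about whether a flagged point matters.

from dataclasses import dataclass


@dataclass
class PlaybookRule:
    clause_type: str     # e.g. "governing_law"
    required_text: str   # what the playbook demands
    severity: str        # "walkaway" or "nice-to-have": recorded, never consulted


PLAYBOOK = [
    PlaybookRule("governing_law", "English law", "walkaway"),
    PlaybookRule("notice_period", "30 days", "nice-to-have"),
]


def triage(clauses: dict[str, str]) -> list[str]:
    """Flag every clause that deviates from the playbook.

    Severity never enters the decision, so the nice-to-have notice
    period is escalated exactly as firmly as the governing-law clause.
    """
    flags = []
    for rule in PLAYBOOK:
        if rule.required_text not in clauses.get(rule.clause_type, ""):
            flags.append(f"NEGOTIATE: {rule.clause_type} "
                         f"(playbook wants {rule.required_text!r})")
    return flags


# Both deviations are flagged identically: no "yeah, that's fine, jog on".
print(triage({"governing_law": "New York law", "notice_period": "10 days"}))
</syntaxhighlight>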


[[AI]] can only follow instructions. The [[meatware]] can make a call that the instructions are stupid.  
