===Division of labour===
And besides, having the [[AI]] spot the issues and asking the [[meatware]] to fix the drafting gets the [[triage]] squarely backwards. Picking up the points — and recognising the large stupid tracts in the [[playbook]]<ref>Much of the [[playbook]] will be non-essential “perfect world” recommendations (“[[nice-to-have]]s”) which an experienced negotiator would quickly be able to wave through.</ref> — is the “high value work”. That is what the [[meatware]] should be doing. Fixing the drafting is the dreary detail. That is where you want your [[chatbot]]. But contextually amending human language — you know, ''actual'' “natural language processing” — is ''hard''. No {{t|AI}} that we have seen just yet can do it.
===Did I miss something?===
And how comfortable can we really be that the AI ''has'' spotted everything? If we assume — colour me cynical — the “natural language processing” isn’t quite as sophisticated as its marketers would have you believe<ref>That it is a glorified key-word search, in other words.</ref> then it is a bit [[reckless]] to put your faith in the [[reg tech]]. Is there no human wordsmith who could fool the [[AI]]?<ref>I bet I could. It is hardly challenging to insert an [[indemnity]] which does not use the words “[[indemnity]]”, “[[hold harmless]]” or “[[reimbursement|reimburse]]”.</ref> What if there is an odious clause not anticipated by the [[playbook]]?<ref>Given how fantastically paranoid a gathering of [[risk controller]]s can be this seems a remote risk, I grant you, but risks are [[fractal]], remember. And [[emergent]] in unexpectable ways. The [[collective noun]] for a group of [[risk controller]]s is a [[Palaver]], by the way.</ref> If the [[meatware]] can’t wholly trust the AI to have identified '''all''' salient points, the lawyer must ''still'' read the whole agreement to check. Ergo, no time or cost saving.
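To see quite how easily a key-word reviewer is gulled, here is a toy sketch. (The keyword list and the clauses are invented for illustration; this is not any actual vendor’s code.)

```python
# Toy illustration: a "glorified key-word search" masquerading as NLP.
# Keyword list and sample clauses are made up for this example.
KEYWORDS = ["indemnity", "indemnif", "hold harmless", "reimburse"]

def flags_indemnity(clause: str) -> bool:
    """Flag a clause if it contains any of the trigger words."""
    text = clause.lower()
    return any(keyword in text for keyword in KEYWORDS)

obvious = "The Seller shall indemnify and hold harmless the Buyer."
sneaky = ("The Seller shall make the Buyer whole for any loss, cost or "
          "expense arising out of the Seller's performance.")

print(flags_indemnity(obvious))  # True: trigger words present
print(flags_indemnity(sneaky))   # False: same economic effect, no triggers
```

The second clause is an indemnity in all but name, and the matcher sails straight past it — which is exactly the wordsmith’s trick the footnote describes.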
But this software is designed to facilitate “right-sourcing” the negotiation to cheaper (ergo less experienced) negotiators who will rely on the playbook as guidance, will not have the experience to make a commercial judgement unaided and will therefore be obliged either to [[escalate]], or to engage on a slew of [[nice-to-have]] but bottom-line unnecessary negotiation points with the counterparty. Neither is a good outcome. Again, an example of [[reg tech]] creating [[waste]] in a process where investment in experienced human personnel would avoid it.
The basic insight here is that if a process is sufficiently low in value that experienced personnel are not justified, it should be fully automated, not partially automated and populated by inexperienced personnel.