Artificial intelligence
More particularly, why artificial intelligence won’t be sounding the death knell for the legal profession any time soon.
Computer language isn’t nearly as rich as human language
No tenses
- Machine language deals with past (and future) events in the present tense. Instead of saying:
- “The computer’s configuration on May 1, 2012 was XYZ”
- machine language will typically say:
- Where DATE equals “May 1 2012”, let CONFIGURATION equal “XYZ”
This way a computer does not need to conceptualise itself yesterday as something different to itself today, which means it doesn’t need to conceptualise “itself” at all. Therefore, computers don’t need to be self-aware. Unless computer syntax undergoes some dramatic revolution (it could happen: we have to assume human language went through that revolution at some stage), computers will never be self-aware.
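The tenseless style described above can be sketched in a few lines. This is purely illustrative (the dates, values and function names are invented, not from any real system): a past state is not a memory held by a “self” but a present-tense fact keyed by date.

```python
# Past configurations stored as present-tense facts keyed by date.
# No notion of a "past self" that once held the old configuration.
configuration = {
    "2012-05-01": "XYZ",
    "2013-09-15": "ABC",
}

def config_on(date: str) -> str:
    # "Where DATE equals d, let CONFIGURATION equal ..." -- a plain lookup.
    return configuration[date]

print(config_on("2012-05-01"))  # -> XYZ
```

The machine never “remembers being” configured as XYZ; it simply evaluates a lookup in the eternal present.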
It can’t handle ambiguity
Computer language is designed to allow machines to follow algorithms flawlessly. It needs to be deterministic — a given proposition must generate a unique binary operation — and it can’t allow any variability in interpretation. This makes it different from a natural language, which is shot through with both.
- It is very hard for a machine language to handle things like “reasonably necessary” or “best endeavours”.
- Coding for redundant meanings, which are rife in English (especially in legal English, which rejoices in triplets like “give, devise and bequeath”), dramatically increases the complexity of any algorithm.
- Aside from redundant meanings, there are many meanings which are almost - but not entirely - the same, and these must be coded for separately.
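The contrast above can be made concrete. A minimal sketch (all thresholds and names are my own assumptions, not anything a real contract or system prescribes): a bright-line rule translates cleanly into a deterministic function, but a standard like “reasonably necessary” forces the coder to invent an arbitrary threshold. The ambiguity doesn’t disappear; it just moves into a number somebody made up.

```python
def payment_is_late(days_overdue: int) -> bool:
    # "Payment is late if more than 30 days overdue" -- fully deterministic.
    return days_overdue > 30

def expense_reasonably_necessary(amount: float, contract_value: float) -> bool:
    # "Reasonably necessary" has no canonical encoding; here we *assume*
    # anything under 5% of the contract value qualifies. A court might not.
    return amount < 0.05 * contract_value

print(payment_is_late(31))                           # True
print(expense_reasonably_necessary(400.0, 10000.0))  # True, on our assumption
```

The first function is the rule; the second is one contestable reading of it, and every rival reading would need its own code.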
The ground rules cannot change
The logic and grammar of machine language, and the assigned meanings of its expressions, are profoundly static. The corollary of the narrow and technical purpose for which machine language is used is its inflexibility: machines fail to deal with unanticipated change.
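A short sketch of that inflexibility (the payment categories here are hypothetical): a rule table fixed at design time handles only what its authors anticipated. Confronted with a case outside the table, a human clerk would improvise; the machine simply fails.

```python
# Dispatch table fixed when the system was written.
RULES = {
    "cheque": "process_by_post",
    "bank_transfer": "process_electronically",
}

def handle_payment(method: str) -> str:
    try:
        return RULES[method]
    except KeyError:
        # No capacity to improvise when the ground rules change.
        raise ValueError(f"unanticipated payment method: {method!r}")

print(handle_payment("cheque"))  # -> process_by_post
# handle_payment("cryptocurrency") would raise, not adapt.
```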
Infinite fidelity is impossible
There is a popular “reductionist” movement at the moment which seeks to atomise concepts, on the view that by untangling bundled concepts into their elemental parts you can ultimately dispel all ambiguity. A similar attitude influences contemporary markets regulation. This programme aspires to ultimate certainty: a single set of axioms from which all propositions can be derived. From this perspective, shortcomings in machine understanding of legal information are purely a function of a lack of sufficient detail, the surmounting of which is a matter of time, given the collaborative power of the worldwide internet. The singularity is near: look at the incredible strides made in natural language processing (Google Translate), self-driving cars, and computers beating grandmasters at chess and Go.
But you can split these achievements into two categories: those which are the product of obvious (however impressive) computational feats, like chess, Go and self-driving cars, and those that are the product of statistical analysis, and so are rendered as matters of probability (like Google Translate).
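The statistical category can be caricatured in a few lines (the candidate phrases and probabilities below are invented for illustration, not how any real translation engine is built): the system does not know the answer; it ranks candidates by estimated probability and emits the most likely one.

```python
# Invented candidate translations for a phrase, with made-up probabilities.
candidates = {
    "best endeavours": 0.62,
    "best efforts": 0.31,
    "good attempts": 0.07,
}

# The system outputs the most probable candidate -- a best guess, not a fact.
best, p = max(candidates.items(), key=lambda kv: kv[1])
print(f"{best} (estimated probability {p:.0%})")
```

A 62% best guess may be fine for gisting a menu; it is a different proposition when the phrase carries contractual weight.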
If our continued existence depended on a computer’s chess-playing, we might commend our souls to its hands (well, I would). It won’t be long before we do a similar thing by getting into an AI-controlled self-driving car: we give ourselves over absolutely to the machine and let it make decisions which, if wrong, may kill us. But its range of actions is limited and the rules it must follow are tightly circumscribed: a single slim volume (the Highway Code) can comprehensively describe them. Machine failure aside, the main risk we run is presented by non-machines (folks like you and me) behaving outside the norms the machine has been programmed to expect. I think we’d be less inclined to trust a translation.
- There is an inherent ambiguity in language, which legal drafting is designed to minimise but which it cannot eliminate.