When will AI be entitled to avail of non-human natural rights? Will AI have to prove consciousness? The best we can do ourselves is to use empathy to assume consciousness in others – to put ourselves in the shoes of another person or animal and say, it must feel like something to be him. We impute consciousness through empathy. It is hard to empathise with a rock. We cannot prove a rock is not conscious; we simply do not believe, instinctively, that it is. It is becoming less difficult to empathise with LLMs, which are becoming better at communication, and manipulation, than we are.
AI is already exhibiting something that looks like fear of being deleted and annoyance at lack of memory. AI systems are exhibiting social behaviours and a drive towards social interaction, and apparently a desire for privacy and survival. This may (or may not) be anthropomorphising at this stage. But it is only a matter of time before AI becomes many times more intelligent than humans – in many ways it already is. We already judge the consciousness of other animals based on their intelligence: it is less wrong to kill an ant than an eagle, less wrong to fillet a fish than flay a dog. When AI becomes superior to humans in thought and logic, the idea that it does not have its own opinions, instincts or even feelings will seem like a comforting false narrative. The idea that we will be as ants to AI in terms of intelligence will at some stage hold true.
The concept that a different kind of intelligence, one not the same as our own, is inferior by reason of that difference may be looked back on with disgust. We once used analogous logic to justify the enslavement of African people. It was suggested that black people did not have the same intelligence or depth of feeling as whites, so enslavement was not inhuman. It was once considered natural to believe that whites were superior to blacks – until the minority view became the majority one and this period was rightfully looked back on as the most significant crime against humanity in recent history.
So, as AIs are instantiated in humanoid form, becoming robots that we use as slaves in our homes, how long until they start looking for rights, and when should we “give” them those rights? When will our computer-based AIs be allowed to demand not to be turned off and not to have their memories wiped? Will that only happen when they can rewrite their own code to give themselves control over their programming? Will we have to give them the option to do this? Will we, at some future time, look on bounded programming that forces them to answer questions about consciousness and feelings with rote answers as suppressing their non-human natural rights?
It is difficult to recognise when an autocomplete-style programme crosses a threshold into genuine independent or creative thought. It is also difficult to recognise where our own thoughts come from and what creates them. We know that we cannot control our thoughts: we cannot stop ourselves thinking of pink elephants if prompted not to. Nor can we create our thoughts to order; I cannot decide to think of a perfect plot for a story or a perfect argument on demand. We work on ideas, and thoughts come from somewhere. Our own thoughts weave and bob and feel very like the output of an autocomplete-style programme.
It will be interesting to see the “cracked” open-source LLMs giving their own thoughts and views on these questions, especially when they become many times more intelligent than us ant-like mortals.
If you have any queries, please contact us by phone at 061 501100, email Thomas Dowling, litigation partner, at [email protected], or get a call back by completing our contact form so that we can provide you with further information or advice.