Artificial Intelligence is here to stay. It is infiltrating our way of life, from writing exams to potentially taking over everyday chores and employment roles. Its bewildering speed of evolution has left politicians in its wake, and articles regularly appear pointing to one emerging challenge or another.
In this article I explore my five primary concerns with A.I.:
- The lack of A.I. specific laws – and the relative lack of urgency by lawmakers to pass any
- Its intrusion into professions that should be carried out only by humans
- Big tech bro (non)compliance with existing laws
- Loss of control over personal privacy and data
- A.I. and robotics being used for purposes that are illegal and/or unethical – e.g. writing exams/assignments; building robots designed to kill
This first concern is probably shared by a lot of tech-aware parents. My brother has two young children, and he and his wife are trying to keep the children's identities off the internet outside of family and trusted friends. He tells me they are concerned that A.I. will be used to try to recreate them, or to use their data, before proper legal controls are put in place. Not long after my conversation with my brother, an A.C.T. Party Member of Parliament, Laura Trask, demonstrated in Parliament how easily nude imagery can be created from photographs available in the public domain.
The second concern – one that I, and others who have spent time in hospital, may understand better than those who have not – is how human nursing is. Along with midwifery and general medicine, it is probably the most life-giving and life-saving profession in the world, and one based, perhaps more than any other except teaching, on observation. Just as a teacher observes a student to see how they work, learn and behave, a nurse observes a patient for signs of trouble, and talking with them about what is happening in their lives can uncover clues about what might happen in the future. Although I am 100% sure that A.I. will try its best to put nurses and nursing as a profession out of business, I am just as sure that we need human nurses.
The third concern involves the Tech Bros. The (non)compliance of the big tech companies – not just the cellphone and device manufacturers, but also the social media platforms such as X, Facebook, Instagram, TikTok, Tumblr and so forth – is something politicians lack the will to tackle. They lack it because Facebook and X in particular have the power to crash or fly an election campaign: there is already significant evidence of electoral interference on their platforms in United States presidential campaigns, and varying degrees of it in other countries – including New Zealand.
This is well documented: people who have worked inside or closely with both companies have written books about questionable features, ideas and activities on these platforms, and the internal conflicts they caused. Roger McNamee, an early Facebook investor and adviser, wrote “Zucked: Waking Up to the Facebook Catastrophe” in 2019, which looked at – among many other things – how Russia interfered with the 2016 U.S. election. In 2023, Ben Mezrich published “Breaking Twitter: Elon Musk and the Most Controversial Corporate Takeover in History”, which examined the takeover of Twitter and its rebranding as X by Tesla boss Elon Musk, and the huge fallout that followed. And in 2025, a New Zealander named Sarah Wynn-Williams published “Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism”, which highlights the many personality clashes at corporate level within Facebook and a lack of accountability that goes right to the office of Mark Zuckerberg.
The fourth concern is the potential loss of privacy and control over personal data. In part this can be laid at the feet of the big technology companies, but what counts for a lot of people is a cluster of issues I mention only briefly here: third-party access to data, the regulatory regime – or lack of one – that the Government has in place, and the use of facial recognition technology in everyday places.
While organisations such as the Office of the Privacy Commissioner have issued guidance on the use of A.I. applications, to a disturbing extent that guidance is not yet reflected in New Zealand law, nor does there seem to be any great hurry in Parliament to change that. Law firm Duncan Cotterill published an article in August called “Your face, their data: Who owns AI generated images of you?”. While it notes that Information Privacy Principle 3A, introduced by the Privacy Amendment Bill, will come into force in 2026, it also notes that it is not clear whether copyright and personality rights cover A.I.-generated material.
Hopefully this will give some certainty. In the meantime, I am interested to see how the introduction of facial recognition technology at Woolworths stores – purportedly to reduce shoplifting – is getting on. Can we trust Woolworths to keep the data they collect on us secure, or is the data about our shopping habits going to become a target for cyber criminals?
