Technology is a great enabler. The countless advancements ranging from self-driving cars to virtual reality and a million others are astounding testaments to human ingenuity.
Ok, that’s the positive preamble.
Maybe it’s just the alarmist in me, but where we seem to be lagging in our meteoric rise as a species is in coupling the question “Can we do this?” with “Should we do this?” Without that pairing, we may go down a near-Frankensteinian route towards disaster.
And with that segue, I’d like to talk about Microsoft’s recent patent for “creating a conversational chatbot of a specific person”[1]. First off, it's brilliant as a thought. If we can train chatbots from the responses of thousands of different people, it makes sense that we ought to do it from one person’s inputs as well. And with us lemmingly proffering pieces of ourselves to the digital gods, there is probably no dearth of information with which Big Tech could rebirth us, if not now then in the near future.
But what this piece of genius may not be considering is whether we should be doing this at all. The ‘why’. A few quick reasons come to mind:
- Pounds and dollars – If you build it, they will come… and pay you for it. The bottom line is that this could probably be monetized in a fairly lucrative way.
- Incidental advances – A lot of research has incidental benefits: it can point people towards new ways of using a technology, or towards new directions in which to conduct further research.
- Morbid reasons – As this Forbes article states[2], with your online presence and input from your living days, your chatbot could outlive you and provide an ongoing presence after your … departure.
- Because we can – Enough said.
The most reasonable of the motives above is arguably the second. The research related to this patent could be hugely beneficial in ways we haven’t even conceived of yet. However, the unforeseen has no moral compass, and such research could also be put to nefarious purposes well outside the original intent of its creators. You might think I’ve seen one too many Hollywood movies, but given the already burgeoning list of morally ambiguous ways in which technology is used, it’s not so much science fiction as it is science fact.
I’ll just pick one example: bias in Artificial Intelligence (AI) algorithms. While AI can help doctors interpret test results more accurately in certain instances, these algorithms are trained on historical data with our inherent human biases built in. So when studies such as this one[3] show that Black patients would have received better life-saving medical care “if their kidney function had been estimated using the same formula as for white patients[4]”, we should probably take notice and do something about it.
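To make that concrete, here is a minimal sketch (purely illustrative, not medical advice) of the kind of estimating equation at issue: the MDRD-style eGFR formula applied a fixed multiplier based on race, so two patients with identical lab results and age received different kidney-function estimates. The coefficients below follow the commonly published MDRD study equation; treat the exact numbers as an illustration of the mechanism rather than a clinical reference.

```python
def egfr_mdrd(serum_creatinine_mg_dl: float, age_years: int,
              female: bool, black: bool) -> float:
    """Estimated GFR (mL/min/1.73 m^2), MDRD-style equation (illustrative)."""
    egfr = 175.0 * (serum_creatinine_mg_dl ** -1.154) * (age_years ** -0.203)
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212  # the race coefficient at the heart of the critique
    return egfr

# Same labs, same age and sex -- yet the formula reports a higher
# (healthier-looking) kidney function for the Black patient, which can
# delay care that is triggered by low-eGFR thresholds.
print(egfr_mdrd(1.8, 55, female=False, black=False))
print(egfr_mdrd(1.8, 55, female=False, black=True))
```

The point is structural: the bias isn’t hidden in some inscrutable neural network, it’s a single multiplier sitting in plain sight, and it still took years of scrutiny to challenge it.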
Beyond simply creating your digital zombie doppelganger, does this technology open a Pandora’s box for identity theft? The Federal Trade Commission in the US reports $3.3 billion in fraud losses in 2020 alone[5]. If hackers can additionally capture one’s tone of voice, word choice, and so on, the sky is the limit to their “success” in impersonating someone. We could keep going on this hyperbolic ride by layering on ‘deepfakes’[6] (AI-generated fake videos): now we can reproduce someone’s voice in a video of “them” saying it. While we’re at it, we might as well invite Boston Dynamics to the party with their walking, talking and (yes) dancing robot[7][8], but by now I’m sure you catch my doomsday drift.
The practice of asking “why” is not a novel concept; the tension between science and ethics has long persisted and is in fact necessary. But it seems as if this time it’s being played out in a new arena, one in which regulation is struggling to keep up. The rate at which we willingly hand our private data to Facebook and its peers far outpaces anything that legislation such as GDPR and its counterparts can rein in. We just accept the delicious cookies and move on. And then we’re shocked by outcomes like (allegedly) swayed election results[9]. Those breaches were enabled by tools available to anyone who knows how to manipulate them. The aforementioned Microsoft patent could be another weapon in that arsenal.
Privacy pertaining to our data is the great debate of our time. Or at least it should be.
Let me be clear: I’m not advocating abandoning all scientific breakthroughs and returning to the dark ages (those people had their issues too). What we should do, however, is foster discussion and debate, and that debate needs to come from a wide swathe of society, not just those in the know about technological advancement who “speak the language”. More voices mean fewer myopic points of view and a consideration of all relevant aspects of an issue. Microsoft (the patent applicant) has said as much themselves when, in partnership with IBM and (yes) the Pope and the Catholic Church[10], they called for a larger debate on the ethics of AI. Whether that makes you more or less inclined to heed the call is another matter…
Let’s talk about the ethical implications. Let’s talk about what else this technology could be used for. Let’s talk about whether we can do this another way. Let’s talk about what checks and balances we need in place to do this safely. Let’s talk.