Posted by Matt Ellwood
Although bots have only gone mainstream in the last two years, you can already call up thousands of them on voice, text, and messaging services. Facebook’s chatbot platform on Messenger, launched last April, already offers more than 11,000 bots, and ‘tens of thousands’ of developers are reportedly working on more.
One company jumping on the bandwagon is Lemonade, which is attempting to shake up the insurance market by using artificial intelligence to cut costs and deliver a more instant service.
Lemonade uses a chatbot interface inside its app to take applications and claims through a series of tick boxes, forms, and finger-drawn signatures. The company claims it takes 90 seconds to get started and just 3 minutes to lodge a claim.
The app uses Maya, a ‘Personal Insurance Assistant’, to guide you through the process. Although Lemonade is happy to explain that the app is powered by AI, Maya never introduces herself as a bot. She has a profile picture and a star rating, like any other online sales rep. At first I thought the initial messages were automated and that I would eventually be handed over to a human agent behind the Maya persona, but I was wrong. The app’s UX even features a typing indicator, even though bots can answer instantly.
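That typing indicator is a deliberate illusion. As a minimal sketch of how such a delay might work (the function names, timings, and words-per-minute figure below are my own illustrative assumptions, not Lemonade’s actual code), a chat backend can hold an instant reply for a period proportional to its length:

```python
import time

# Assumed human-like typing speed; real products tune this by feel.
WORDS_PER_MINUTE = 200

def typing_delay(reply: str, wpm: int = WORDS_PER_MINUTE) -> float:
    """Return a delay in seconds proportional to the reply's word count."""
    words = len(reply.split())
    return words / (wpm / 60.0)  # words divided by words-per-second

def send_with_typing(reply: str, sleep=time.sleep) -> str:
    """Show a typing indicator, wait, then 'send' the already-computed reply."""
    sleep(typing_delay(reply))  # the UI would display "Maya is typing…" here
    return reply
```

A five-word reply at 200 wpm yields a 1.5-second pause, long enough to read as a person composing a message rather than a machine answering instantly.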
Startup Soul Machines is doing something similar with its chatbot, initially aimed at people with disabilities. It features a 3D virtual assistant that can answer questions and carry out computerized tasks. The company has since launched these ‘digital humans’ for brands to use as customer service assistants.
So why are people making bots that act like people?
“We humans are narcissistic. We create in our image, project human qualities onto animals and inanimate objects, and fall head-over-heels for the ones that best reflect our humanity,” says Mariya Yao of TopBots. In a recent blog post she explains that from the classic tale of Pygmalion (a sculptor who falls in love with a statue) to the futurist film Ex Machina (where an intelligent robot successfully tricks an entirely human cast), our fascination with human-like constructs spans the centuries and creates ripples across fields, from science to art.
With chatbot technology becoming increasingly popular, bot creators are trying to introduce a convincing dose of ‘humanness’ into conversational AI. Businesses hope to engage their markets better through simulated human conversation, since people intrinsically place a premium on artificial intelligence that communicates like a real person over one that responds robotically. However, not everyone thinks this is a good thing:
Dennis Mortensen of x.ai has said that "the debate in AI circles right now is whether bots should present themselves as humans or machines and if gender should even apply in either case. I do not think as an industry we have agreed that to humanize our agents is the best thing to do".
Yao's blog post also raises the concern that the inability to distinguish a human from a bot has significant implications for society. Already, scam artists send millions of automated messages every day that trick victims into handing over cash or sensitive personal information. Twitter estimates that approximately 23 million of its active accounts are automated bots. To jumpstart their growth, even respected Silicon Valley startups like Airbnb and Craigslist used bots posing as rent seekers to convince homeowners to list on their sites over others.
While ‘humanness’ in artificial intelligence allows us to scale positive human qualities like empathy, compassion, respect, and kindness, we can also scale malice, negativity, indifference, manipulation, and hate if we’re not careful. Microsoft’s chatbot ‘Tay’, launched on Twitter, was programmed to learn from each conversation she had with people, but within 24 hours she was spouting racist, misogynistic comments and identifying as a Nazi sympathiser. When we share white supremacist views with Tay or curse at Siri, we are not only mistreating technology but teaching our computers how we want to be treated.
Development of bots without personalities
Google started out purposefully avoiding humanizing its bot, the Google Assistant. It acted as an all-purpose agent with no distinct personality (although its voice is female by default). Among the major tech companies, however, Google was virtually alone in this approach. Most have opted to present a strong human persona with a distinctly feminine voice, including Apple’s Siri, Amazon’s Alexa, and Microsoft’s Cortana (only Siri lets you switch easily to a male voice).
Since then, however, Google has found that a natural-sounding voice assistant can only achieve so much without being relatable to its human users. That is why Google is now taking a page from Apple’s and Microsoft’s books, hoping to build a character and even a backstory for the new Google Assistant. The company has turned to Ryan Germick, head of the Google Doodle team, and freelance artist Emma Coats to help shape that personality.
The more comfortable people feel while interacting with an AI bot, the more likely they will be to keep using it, which equates to more voice searches for Google.
So where do we want to be in the future? Do we want to live in a society where we know whether we are interacting with an artificially intelligent bot or someone with a heartbeat? Maybe it won’t matter, as long as the job gets done. Will premium brands be the ones who employ real humans to deal with customer inquiries? Will relationships change in a future where we are debating marriage equality not between a woman and a woman, but between a woman and a virtual being? After all, if we’re making our premium bots as human as possible, won’t that eventually include the reciprocation of love? These are questions that every human, and no doubt every virtual human, will have an opinion on.
Just don't put it to a vote, because AI bots scale a lot faster than humans…