Interacting with AI is fast becoming the norm, thanks to the ubiquity of Siri, Alexa, and the Google Assistant.
My morning routine starts by using the “Okay, Google” command to ask my smartphone for the latest news, and it responds with the BBC Radio 4 bulletin. During my workday, 80 percent of my searches are conducted by voice on my HP Chromebook, and I end the day with a “Goodnight, Google,” which triggers a routine that reads tomorrow’s schedule, tells me the weather, sets my alarm for 5 a.m., and plays 60 minutes of wind-down forest sounds.
But would these interactions be more meaningful if the Google Assistant had a face? Mark Stephen Meadows thinks so. He’s the founder and CEO of Botanic Technologies, and has spent 20 years working in AI, including stints at Siri creator SRI International and Xerox PARC.
In a recent interview, Meadows explained why he believes adding an embodied avatar to voice assistants will increase trust, enhance branding, and improve the efficacy of these interactions. Here are edited and condensed excerpts from our conversation.
PCMag: Why did you set up Botanic Technologies in 2011?
Mark Stephen Meadows: Companies need AI services that map the conversational landscape of millions of users. We believe that CUIs [conversational user interfaces] need a personality [to give] them an identity [and] create a sense of authentication so you can trust them.
If we look at how we interact with one another, and our ability to understand our mutual values, personality is important. It allows context and an ability to assess value. We set out to bolt these items—all related to brand and relationship management—together into a toolset our customers could use.
UCLA Professor Albert Mehrabian claimed in his book Nonverbal Communication that 93 percent of our communication does not involve words or speech, a figure that is often contested. Still, it backs up your argument that AI requires more well-rounded communication skills.
Right. Our human personalities are made of many different things: appearance, sound, movement, words, and psychological and chemical factors that go beyond what we fully understand. Personality is often archetypal, sometimes incomprehensible. CUIs, then, need to convey those same traits for the same reason. And that gets us into personality.
What kinds of personalities do you imbue your CUIs with at Botanic?
Most of our builds have been in some service sector (healthcare, finance, education, entertainment). Restaurants, hospitals, and airports, for example, all have people dressed and behaving in particular ways, according to their roles/tasks (wait staff, pilot, ticket counter clerks). The way they dress, talk, and generally behave reflects the archetypal personality they are intended to convey to you. Our CUIs are the same.
You’ve built CUIs for B2B clients in several markets, including China, the Netherlands, and the US. Describe one, and explain its objectives.
In 2015, we collaborated with Weta Digital and Hanson Robotics on an AI/CUI project for a Chinese health authority. The objective was to design a photorealistic avatar called “Min” to help residents of rural China manage their health.
How do you address privacy concerns around data?
First, through the End User License Agreement (EULA): we are transparent in telling people how their data is being managed, and we allow them to set a slider between privacy and publicity, so they pay for privacy and are paid for publicity. Our data is valuable and we should be paid for it, but only if we want to be. Privacy is built in from the bottom up, via code, and from the top down, via legislation and regulatory methods like GDPR.
Like many geeks, I have a piece of opaque tape to occlude my Chromebook’s camera. How will you persuade people to let a visually based AI peer back at us through our screens?
The art of conversation is less about talking and more about listening. The core goal of putting a face on AI is to better collect end-user data. This multimodal capacity allows us to compare the words, sounds, appearance, and movements of the user so we can identify how that person feels. At its core, our multimodal bot framework is collecting affect, and understanding the user is of key importance, not only for analytics but also for providing a contextual reply.
So if I keep the tape on the camera, I can’t chat with your CUIs.
Right. Well, you could still use voice- or text-based chat. But it won’t work as well.
How did you become interested in AI?
I graduated with a triple BA in philosophy, literature, and mathematics. I became a portrait artist, an author, and a D&D geek, so I’ve been interested in interactive forms of storytelling for decades. Conversational interfaces are a natural outgrowth of this work. Each of our builds is essentially a talking portrait.
Prior to becoming an entrepreneur, you worked at SRI International, the birthplace of Siri, and Xerox PARC, where the personal computer was created. Tell me about those experiences.
While at SRI, I learned the value of both multimodal interaction and interactive narrative. I had just written my first chatbot—both Siri and Nuance were getting started down the hall from me—and I realized we’d be talking with these systems soon. Also, around that time, I was spending the majority of my waking hours in Second Life and other real-time 3D gaming systems, rigging avatars with chatbots. That was the beginning of everything I’m doing now.
Who are your investment partners and what stage are you at in funding?
We bootstrapped for some five years, then brought in $1 million in funding led by Outlier Ventures as we worked to stabilize our horizontal platform. In the last two years, we spun off SeedToken.io, which allowed us to build a large ecosystem of partners to address vertical markets. We’re going to be passing the hat again for Botanic in 2019 to address two key verticals in collaboration with partners.
Finally, you’re a certified 100-ton US Coast Guard Captain, and have sailed on five of the seven seas. Did you create an embodied avatar to assist you onboard? Sailing can be a solitary occupation.
I haven’t as yet, but funnily enough, Steve Jobs (and partners) registered a patent on technology designed to control a marine vessel via voice, so maybe that will happen one day. Ultimately, being a sailor has taught me a great deal about solving problems independently, about working with others as a crew, about keeping things as shipshape as possible, and, most of all, about keeping the “sailor’s eye”—always taking the long view of the voyage rather than focusing on individual waves.
Meadows will be speaking at Blockchain West in Anaheim, Calif. on March 19.