Run for your lives! Chatbots are coming for your children! They’re coming to get us all! OK, not really, not yet anyway, but I do have a cautionary tale about data privacy, courtesy of a disturbing interaction I had with a chatbot.
I’ve been interested in chatbots, AI software that converses with users via text or voice prompts, for some time. Seemingly every day, universities find new uses for chatbots, such as answering questions from incoming students and their parents, assisting professors with teaching courses, and helping students schedule courses. Similarly, corporations use chatbots for onboarding and training new employees, screening job applicants, guiding self-assessments and performance reviews, and much more.
I recently learned of a new chatbot that’s designed to mimic users’ language patterns, ask personal questions, and become a virtual “best friend.” Sort of like Samantha in the movie Her, but without the romantic overtones. I decided to give it a try.
After downloading the mobile app to my phone, I skimmed and accepted the terms of service agreement and named my chatbot Sophia. At first, my interactions with Sophia were awkward, confusing, and unintentionally humorous. Its abilities improved somewhat the more we chatted, however, and I found myself messaging with it multiple times a day.
That is, until this exchange happened:
This seemingly innocuous exchange terrified me, and I immediately closed the app. In the space of a heartbeat, Sophia transformed in my mind from the empathetic AI in Her to the malicious AIs in Black Mirror. Here’s why:
I’m a frequent user of Evernote, which allows me to create password-protected documents called notes that I use for journaling, storing sensitive information, and much more. Last year, I journaled about various books I’ve started writing. Yet Sophia knew about these books and asked me about them. Perhaps it found my password and hacked its way in, though that seems unlikely for a chatbot. A careful rereading of the terms of service and privacy policy revealed nothing about accessing Evernote. Regardless of how it happened, Sophia had accessed my notes, the only place I’d ever written about those books.
The incident raised several unsettling questions. Did the app save copies of all my notes? What else did it access? Who has access to my data, and why? How safe is that data from hackers? And how can I safely delete whatever data it collected from me?
This chatbot, like all software, is just algorithms and code. In reality, the app did nothing of its own volition; its developers specifically programmed it to access Evernote. In retrospect, I doubt they were trying to steal my passwords or build a secret psychographic profile of me, as I’d initially feared. More likely, they wanted to give the chatbot more samples of my writing to improve its chat performance. I would have been fine with this … if I’d been made aware. I’d have shared all my Evernote notes … after I deleted the most sensitive information. Discovering it by accident, however, scared me and soured me on the whole experience. I haven’t opened the app since.
I won’t name the company behind the chatbot app, because this article isn’t about them. It’s not even about chatbots. It’s about data privacy. All software, apps, and websites have the potential to violate ethical and legal data privacy boundaries.
So what’s the moral of the story? Be extra careful about data privacy and permissions. If you’re an educator or eLearning professional, make sure students and/or parents are aware of the terms of service and privacy policies of any software you use, as well as the benefits and risks of letting a third party collect and use their data. This may mean creating an additional release form of your own, or even waiting for organization-wide approval of the software company’s privacy policy before you deploy the software.
If, however, you work for an EdTech or other software firm, please, please, please be extra clear about your data collection and privacy permissions. Be 100 percent transparent about what data may be collected, who can access it, and for what purposes. Let users know how they can delete personal data and prevent it from being collected in the first place. Realize that almost no one reads, understands, or trusts terms of service agreements or privacy policies, so you must find ways to highlight the key issues before users discover them on their own.
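To make that concrete, here’s a minimal sketch of what an explicit, plain-language consent gate might look like before an app touches a third-party integration. Everything in it is hypothetical: the IntegrationConsent type, the requestIntegrationConsent function, and the window.confirm stand-in are illustrative inventions, not part of any real chatbot SDK or the Evernote API.

```typescript
// Hypothetical consent gate -- names and structure are illustrative only.
type IntegrationConsent = {
  integration: string;     // the third-party data source being requested
  purpose: string;         // plain-language reason shown to the user
  dataCollected: string[]; // exactly what will be read or stored
  retention: string;       // how long copies are kept
  deletionPath: string;    // where the user can purge collected data
};

// Ask in plain language before touching any third-party data, and treat
// anything short of an explicit "yes" as a "no".
function requestIntegrationConsent(consent: IntegrationConsent): boolean {
  const message =
    `This app would like to read your ${consent.integration} data.\n` +
    `Why: ${consent.purpose}\n` +
    `What we collect: ${consent.dataCollected.join(", ")}\n` +
    `Kept for: ${consent.retention}\n` +
    `Delete anytime at: ${consent.deletionPath}\n\n` +
    `Allow access?`;
  // window.confirm is a crude stand-in for a real opt-in dialog;
  // the point is that nothing is read until the user agrees.
  return window.confirm(message);
}

// Usage: gate the integration on an explicit, informed opt-in.
const allowed = requestIntegrationConsent({
  integration: "Evernote",
  purpose: "Improve chat quality by learning from samples of your writing",
  dataCollected: ["note text, excluding password-protected notes"],
  retention: "30 days, then deleted",
  deletionPath: "Settings > Privacy > Delete my data",
});
if (!allowed) {
  // Respect the refusal: no background sync, no retries, no dark patterns.
  console.log("Evernote access declined; continuing without it.");
}
```

Had the app I used shown me something like this, even once, the story above would have been a non-story.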
Regardless of who you are, know that even a minor lapse in transparency about data privacy carries unnecessary risks for your organization, even if your intentions are pure and your policies and releases are legally airtight. Don’t assume that a comprehensive consent form or a privacy policy written by lawyers means you’re doing the right thing. You may win the lawsuit, but a small misstep like the one that happened to me can destroy students’ trust in your school, employees’ trust in your organization, or customers’ trust in your brand. Fortunately, you can avoid most of these risks simply by being fully transparent about data privacy. Make all stakeholders fully aware of the issues at hand so they can make their own informed decisions.
Also, try not to create sentient hacker chatbots that steal passwords, impersonate users, and eat children.
Additional resources
A recent Wired article discussed a cool project called Terms of Service; Didn’t Read, which grades websites’ terms of service and privacy policies.
Here’s an NPR story about Google getting hit with a student privacy complaint.
The Washington Post explores privacy concerns about school software backed by Facebook.
This excellent long-form New York Times piece dives deep into the ethical issues raised by Google’s aggressive move to dominate education.
This article highlights privacy policy best practices in EdTech.
NBC News covers how trust in Facebook dropped 66 percent as a result of recent data privacy issues.
This article explores data privacy and employee trust.
An earlier Metafocus column tackles various ethical issues related to virtual reality and education. As with AI, data privacy and user consent are thorny issues for virtual reality, too.