We’re pretty lucky, these days. Thanks to technology like Salesforce, we can analyse data, make predictions, and recommend actions based on in-depth data insights and machine learning. This means marketers can plan, produce, personalise, promote and overall perform better on a much bigger scale.
AI, hyper-personalisation and micro-targeting are already in full swing, shaping how we as brands interact with our audiences. Their potential to manipulate negative human emotions isn’t yet fully understood, but it’s a real concern.
One thing is for sure: AI has the potential to produce outcomes that infringe on our human rights, damage our businesses, and hurt our society. So with these new powers comes a responsibility to use them for good; we’ve all been given the chance to decide the path our industry will take as we move into the future of technology.
The truth is that technology can help or harm society, and it’s companies just like ours, and just like Salesforce, that have the responsibility to put empathy, trust, and inclusivity first. In every ad, every email, and every marketing campaign, we need to remember that people are, by their nature, compassionate and vulnerable.
The race to capture human attention isn’t going away, but we can do this without capitalising on vulnerabilities.
“There are two reasons people buy things… Those reasons are guilt and anxiety”
This depressing quote comes from Emotions Matter, a guide to Return on Ethics (ROE) by Phrasee. Here, CEO Parry Malm is quoting one of his first bosses, who believed that if a marketing campaign could summon these negative emotions in its audience, sales would follow. And although this strategy definitely feels like a dodgy way to do business, the sad truth, he says, is that it sometimes worked.
But we don’t have to exploit people’s negative emotions to sell; the statistics are there. More and more, people are making their purchasing decisions based on their morals and beliefs. Besides, you shouldn’t have to trick people into buying something. Not only does it exploit instead of empower, it’s just lazy marketing.
Nielsen found that 66% of consumers are willing to pay more for socially conscious brands, and this number rises to around 90% when you look only at millennials and Gen Z. – ‘Emotions Matter: a guide to Return on Ethics (ROE)’ by Phrasee, quoting Amy Williams, Social Entrepreneur and Good-Loop founder
Consumers see human rights as a business imperative. 90% of consumers believe companies have a responsibility to improve the state of the world. – Salesforce Research, October 2018
87% of consumers believe companies have a responsibility to advocate for human rights. – Salesforce Research, October 2018
72% of teens think they’re being manipulated by technology. They believe that tech companies manipulate users to spend more time on their devices. – ‘How to Stop Technology From Destabilizing the World’ — Tristan Harris, Co-Founder of the Center for Humane Technology
93% of consumers are concerned about emerging technology’s potential to spread misinformation. – Salesforce Research, October 2018
77% of consumers are concerned about emerging technology’s potential to bring increased/widened inequality. – Salesforce Research, October 2018
81% of consumers believe emerging tech can make the world a better place. – Salesforce Research, October 2018
67% of consumers say that technology is neither good nor bad – how it’s used is what matters. – Salesforce Research, October 2018
Smart tech with unintended consequences
YouTube has an autoplay feature that automatically queues up a related video once you’ve finished watching something. You can turn autoplay off so you don’t fall down an endless ‘YouTube hole’ like I so often do, but it will still show you recommended videos based on what you’ve already shown an interest in. These are chosen entirely by the algorithm, which is impressive in itself.
What wasn’t anticipated were the consequences. One minute you could be a new mum or dad watching videos with tips and new-parent experiences; the next, the algorithm decides: ‘here’s a video that will probably interest this person based on what they’ve watched before’. Before you know it, you’re being shown an anti-vaxxer video, which then prompts the algorithm to recommend other conspiracy videos.
Facebook has a similar problem with its Suggested Groups algorithm, tenuously leading some people to far-right groups and hate speech. It’s also feeding something called ‘learned helplessness’, where people are exposed, at a far bigger scale than ever before, to global human problems they feel powerless to do anything about. Really, it’s no wonder there’s a strong link between social media and mental health issues.
Technology at its worst, as certain experts like to remind us, can adversely affect our mental health. It can lead to feelings of isolation and depression, as well as negatively impacting our social skills, concentration levels, attention spans and sleep patterns; there’s an emotional risk. – ‘Emotions Matter: Technology’s role in Mental Health’, in conversation with Dr Fjola Helgadottir, PhD, R.Psych
We as social media users need to change the way we raise awareness of world issues, from the negative ‘this is terrible, look at all the plastics polluting our oceans’ to the more positive ‘look at the great things being done about plastics in our oceans.’ This obviously isn’t something we can enforce, but we can talk about it and encourage it. And I know I’m going to be more mindful about how my messaging affects people when sharing these issues in future.
Snapchat also has photo filters which distort reality to create fun, silly selfies to share with your friends. The thing is, they also include heavily-used ‘beautification filters’ which edit and manipulate your natural facial features by giving you smoother skin, bigger lips, a smaller nose, higher cheekbones, a smaller jaw etc.
Gigi Hadid using a Snapchat filter, sourced from Inverse’s article ‘Plastic Surgeons Are Really Worried About “Snapchat Dysmorphia”’
Worryingly, both teenagers and young adults, who are pretty much at the peak of discovering their self-identity, are using these filters, meaning Snapchat is, whether knowingly or not, profiting from one of our key vulnerabilities: body confidence and self-image.
As Commissioner Sharon Bowen of Seneca Women states in ‘Ethical Responsibility in the Fourth Industrial Revolution’, “just because you can go 120 miles per hour, doesn’t mean you should do so in a school zone.”
Smart tech with a positive impact
However, there are plenty of examples of tech positively impacting our mental health. Dr Helgadottir, founder of AI-Therapy, has established Overcome Social Anxiety (OSA), a fully automated treatment programme for social anxiety which, she says, has already treated people from over 30 countries.
FaceTime is another example. It helps people feel close to their loved ones, especially those who work long hours away from home.
What can we do?
We need to start by looking at an honest appraisal of human nature. The next phase of our evolution is doing the uncomfortable thing and looking back at ourselves. And seeing that yes, we’re vulnerable to social validation. Yes, we’re vulnerable to magician’s tricks. And yes, we’re vulnerable to supercomputers. And yes, we’re vulnerable to algorithms that split-test 66,000 variations of toxicity or hate speech. – ‘How to Stop Technology From Destabilizing the World’, Tristan Harris, Co-Founder of the Center for Humane Technology
Honesty and transparency are key here. And we need to be thoughtful about our human vulnerabilities, but leverage our strengths and focus on the positives.
In her article for Phrasee’s Ethical Marketing guide, ‘Market Differently: How does your marketing measure up for mental health?’, Bernadette Fallon goes through some guidelines for being sensitive and thoughtful in our marketing activities. We think they’re pretty spot on:
Don’t make false claims
Don’t exaggerate facts or distort the truth
Don’t promote messages that exploit your customer’s negative emotions (guilt, anxiety, anger etc.)
Don’t use fear tactics
Don’t conceal important information
Don’t bad-mouth your rivals
Don’t copy competitors
Don’t be racist, sexist, ageist or fall foul of any other ‘ist
Don’t exploit children
Don’t spam your customers.
What Salesforce is doing
We know that technology is not inherently good or bad; it’s what we do with it that matters. And that’s why we’re making the ethical and humane use of technology a strategic focus at Salesforce. – Marc Benioff, Chairman and CEO of Salesforce
Salesforce count trust, equality, and diversity among their core values, values we also share. They want to actively educate people to help them be part of this conversation and think about the ethical implications. Check out their ‘Ethics and Humane Use’ page right here.
Empower, don’t exploit
So the message is to empower, not exploit. Tech is moving at such a fast pace now, and its potential to positively or negatively impact society is only growing. But we mustn’t forget that we still have the power to use these new tools to create a respectful, positive experience for everyone. There’s a clear call for every business to introduce an Ethics Policy, and we’re working on ours right now.