AI forces us to be human again

The adoption of AI is moving full speed ahead, but is it moving in a sustainable direction? The current lack of diversity in the tech sector leads to technology that reinforces the biases and discrimination already present in society. The tech sector has a responsibility to do better by taking the human aspect into account – and other sectors should follow suit.

AI & the gender bias

Ever noticed that most voice assistants have female names and voices? According to big tech, this is a logical choice, as people find female voices more comforting, less aggressive, and easier to understand. However, others might argue that the submissive or even flirtatious style reinforces problematic gender stereotypes.

As early as 2019, UNESCO voiced its concern, arguing that “their hardwired subservience influences how people speak to female voices and models how women respond to requests and express themselves. To change course, we need to pay much closer attention to how, when and whether AI technologies are gendered and, crucially, who is gendering them.”

Earlier this year, UNESCO also published a new study examining stereotyping in Large Language Models, which found unequivocal evidence of bias against women in the content generated by each of the LLMs tested. “These new AI applications have the power to subtly shape the perceptions of millions of people, so even small gender biases in their content can significantly amplify inequalities in the real world,” the report highlighted.

AI is a hot topic, and organizations and individuals alike are spending time and resources figuring out how to make the technology work for them. But to what extent do we take these considerations into account?

It all starts with creating awareness, says Petra Jenner, Senior Vice President & General Manager Europe, Middle East, and Africa at Splunk, speaking at the company’s .conf24 event in Las Vegas. “We need to understand what we’re using, what LLMs are, and how they affect us. If we don’t understand, there’s a risk of biases coming back into the system,” Petra says. “And for me, the biggest risk is that people are not educated on these technologies. They are afraid of AI, and even refuse to use it. But in reality, they’re already unknowingly using it, with Siri, with Alexa… The awareness is just not there.”

In addition to the lack of awareness, the current state of the AI sector reinforces the risk of sustaining biases, Petra recently argued in an article in the Business Reporter. In 2023, only 22% of AI professionals were women, and women held just 26.7% of technology jobs overall. The AI workforce doesn’t reflect the society we live in, which has clear economic and social implications. “We are at risk of creating AI frameworks that do not represent the society we live in, resulting in untrustworthy outcomes that further entrench the existing bias and discrimination in society,” Petra said.

Make AI a leadership problem

The tech sector should take responsibility and do better. If we don’t address these issues and attempt to tackle these biases, we will only perpetuate existing problems. But organizations bear a responsibility as well. AI brings a wealth of opportunities, and development will accelerate in the coming years. The business sector will play a large role in the advancement of AI, for the worse – or for the better.

This makes it all the more clear that the development of AI in organizations should not be restricted to IT teams. CISOs and their teams must ensure that technologies are adopted in a secure manner. HR must ensure that individuals’ rights are protected. And most importantly, management must have a clear vision of how to govern AI systems.

This means companies have to start thinking about what a successful implementation means to them, Petra Jenner says. “AI will bring productivity gains, which leaves extra time for companies and their employees. How will you be using this time? That’s the point we all need to reflect on: how is my role evolving with AI?”

And that raises the question: will we be using AI purely for productivity gains, or will we be able to develop and implement it in a way that takes into account the human factor?

What if we do it right?

Last month, Dutch IT company AFAS Software announced that it is implementing a four-day work week trial, offering all employees the option to turn their Fridays into ‘development days’, on which they can spend time volunteering, providing care, or doing something creative. The software company expects productivity to remain the same, partly due to productivity gains brought by automation and AI – and partly due to increased employee happiness.

Amidst the worries of professionals across a range of sectors who fear that their jobs might soon be replaced by AI and automation, AFAS’ trial brings a fresh perspective on the opportunities these new technologies bring. Whether other organizations will follow its example remains to be seen, but perhaps we can start to be cautiously optimistic.

About Daphne

Daphne began her journalistic career in the financial heart of London, where she wrote about bankruptcies, restructurings, and refinancings at a financial media agency. Back in the Netherlands, she worked at an investment bank on the Zuidas before switching to the subject she finds most interesting: technology.

After a period as a technical writer at Fox Crypto, Daphne now works as a freelancer, helping companies in the IT sector translate complex technical information into a clear story that everyone can understand.

She also writes personal stories about inspiring professionals and enjoys covering the underexposed sides of the technology sector, hoping to spark discussion about them.