They are on a mission to fight discriminatory bias in artificial intelligence (AI).

It is inspiring when truly enthusiastic people work together to make an impact on digital technologies. Meet Clara and Linnea, two Master's thesis students who have chosen to develop a framework for battling algorithmic bias in AI applications.

There are plenty of pitfalls associated with the development of AI. One such issue is discriminatory algorithmic bias, where implicit prejudices and values are unwittingly built into the technology. An infamous example is when Google Photos' image-labelling software tagged photos of dark-skinned individuals as gorillas.
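To make the idea concrete, here is a minimal sketch of one common way to quantify this kind of bias: comparing positive-outcome rates across demographic groups (so-called demographic parity). The function name, group labels, and decision data below are invented for illustration; a real audit would use many metrics and real model output.

```python
# Minimal sketch of a demographic-parity check.
# All data here is made up for illustration; "A" and "B" are
# hypothetical demographic groups, and outcomes are model decisions
# (1 = favourable, 0 = unfavourable).

def demographic_parity_difference(outcomes, groups):
    """Gap between the highest and lowest positive-outcome rate per group.

    A value near 0 suggests the model treats the groups similarly
    on this one metric; a large gap is a warning sign worth auditing.
    """
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(group_outcomes) / len(group_outcomes)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Hypothetical loan decisions: group A is approved 60% of the time,
# group B only 40% of the time.
outcomes = [1, 1, 0, 1, 0, 0, 0, 0, 1, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(outcomes, groups)
print(f"demographic parity gap: {gap:.2f}")
```

A gap of 0.20 here would flag that group B receives favourable outcomes markedly less often, prompting a closer look at the training data and model, which is exactly the kind of check a bias framework would systematize.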

Parts of the Swedish public sector are currently looking into the potential benefits of using AI to optimize their processes. Failing to actively check for algorithmic bias could have disastrous consequences. Cybercom has a close working relationship with many actors within the public sector, and there is a need for a greater understanding of how algorithmic bias can be approached. These key insights will help us further develop digitally sustainable AI for the public sector. The case study focuses on one Sida project, but the broader aim is to produce a framework that is generally applicable to the public sector.

“We strongly believe in Cybercom’s core values and we enjoy teaming up with Cybercom on this project. We have seen that Makers from all corners of the company are curious and willing to assist. This truly creates a positive work environment.”

Read more about our student program, thesis work and internships at