Canada News

Get the latest news in Canada

Edmonton

New tool aims to harness the power of AI to combat internet hate against Indigenous people

A new tool aims to use artificial intelligence to help make the internet a safer place for Indigenous people.

The project was given the name wâsikan kisewâtisiwin, which translates to “kind energy” in Cree.  

“We’re trying to make the internet a kinder place. We’re trying to change the trajectory of the internet towards discriminated people,” Shani Gwin told CBC’s Radio Active.

Gwin is a Métis entrepreneur and founder of pipikwan pêhtâkwan, the Edmonton-based Indigenous communications firm leading the project.

Being developed in collaboration with the Alberta Machine Intelligence Institute (Amii), the tool is dual-purpose, intended to help both Indigenous people and non-Indigenous Canadians reduce racism, hate speech, and online bias.

The first function of the program is to moderate online spaces like comment sections. While the internet has been a tool used by Indigenous people for advocacy, it also can frequently be an unsafe space for communities that are discriminated against, Gwin said.

Gwin said that all it takes is one comment for online spaces to fester.

The tool flags hateful comments, and then provides sample responses, while also documenting these instances for future reporting.

The second function of the tool is designed to serve as a writing plug-in for your computer — similar to Grammarly. Intended to help Canadians in general recognize their biases, it will flag any writing that may be biased against Indigenous people, provide an explanation, and suggest how to reword the sentence.

Shani Gwin is the founder of pipikwan pêhtâkwan, an Indigenous public relations agency focused on elevating Indigenous voices, projects and issues. (Submitted by Amii)

Ayman Qroon, an associate machine learning scientist with Amii, explained that the system works a lot like the AI chatbot tool ChatGPT. It is advanced computer software that is trained to understand and generate human language. 

“You can think of it as like teaching a child by showing them thousands of books and articles and blogs. And they eventually end up understanding the language and the knowledge embedded in that language.”

Qroon then instructs the language model to classify a comment as hate speech or not — and to provide a rationale as to why.
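In rough terms, that instruct-and-classify step can be sketched in Python. This is an illustrative assumption, not the project's actual code: the function names, prompt wording, and the simulated model reply below are all hypothetical, and in practice the prompt would be sent to a large language model rather than answered locally.

```python
import json


def build_classification_prompt(comment: str) -> str:
    """Build an instruction prompt asking a language model to label a comment.

    The wording here is a hypothetical example of the kind of instruction
    described in the article, not the project's actual prompt.
    """
    return (
        "Classify the following comment as HATE_SPEECH or NOT_HATE_SPEECH, "
        "and briefly explain your rationale.\n"
        'Respond as JSON: {"label": "...", "rationale": "..."}\n\n'
        f"Comment: {comment}"
    )


def parse_model_response(raw: str) -> dict:
    """Parse the model's JSON reply into a label and a rationale."""
    data = json.loads(raw)
    return {"label": data["label"], "rationale": data["rationale"]}


prompt = build_classification_prompt("Example comment text")

# In a real system the prompt would go to an LLM; here we simulate its reply.
simulated_reply = (
    '{"label": "NOT_HATE_SPEECH", '
    '"rationale": "No slurs or targeted hostility toward a group."}'
)
result = parse_model_response(simulated_reply)
```

Asking the model for a rationale alongside the label, as Qroon describes, makes flagged comments easier for human moderators to review.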

Bias feeds bias

But AI-powered content can also generate hate and disinformation.

“AI right now is designed through the lens of Canada’s dominant culture. And I would say that across the world that without input from racialized communities, including Indigenous people, AI cannot analyze and produce culturally safe and respectful content,” Gwin said.

“Every piece of infrastructure in Canada has been developed from the white patriarchal lens,” she said. “So more racialized people, more women need to get involved in the development of AI so that it doesn’t continue to be built in a way that’s going to harm us again.”

Ayman Qroon is an associate machine learning scientist at Amii working on developing wâsikan kisewâtisiwin. (Emily Williams/CBC)

Qroon said he is glad to see people questioning the underlying biases that may exist in AI.

“That means that we care and we’re thinking about the problem.” 

“The truth is, these models just look to learn from the data that you show them. If the internet is biased, it will learn to be biased — if there is hate speech there, it will learn that as well.”

AI bias revealed itself in training, Qroon said, adding that at times when experimenting with the AI, it would try to minimize the tragedies that Indigenous people went through.

“And that’s why it was very important for us to integrate the Indigenous community into this process and get their perspective and get the instructions from them.”


The project has been selected as a semi-finalist for Massachusetts Institute of Technology’s Solve 2024 Indigenous Communities Fellowship.

Gwin said that her hope for the project is that it helps take the emotional labour of education off Indigenous people — and free them up to do things besides moderating comment sections.

“I think there might be concerns that people think that this AI tool will take jobs away from Indigenous people, but it’s not, that’s not what it’s for. It’s there to do the work that we don’t want to do.”

“But it also means changing the Internet and Canadians’ hearts and minds about who Indigenous people are.”
