Ubisoft, Riot Games Reveal Research Project To Combat Harmful Player Interactions

The beginning of the end of toxicity in video game text chats could come soon.

Video game developers Ubisoft and Riot Games have recently announced a partnership on the first cross-industry research initiative to combat toxic messages and other harmful content in video game chats.

Ubisoft and Riot Games are members of the Fair Play Alliance, a coalition of gaming professionals and companies committed to developing quality games and sharing the best practices that promote healthy communities in online gaming, per the Alliance's official website.

Ubisoft x Riot Games Collaboration Details

Ubisoft and Riot Games agreed to collaborate on a research project called "Zero Harm in Comms," which aims to improve artificial intelligence-based solutions for preventing harmful player interactions, per Ubisoft's statement.

To do so, both companies intend to create a cross-industry shared database and labeling ecosystem that collects in-game data to better train AI-based preemptive moderation tools to detect and mitigate disruptive behavior.
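For a rough sense of how a shared, labeled chat database could feed such a preemptive moderation tool, the Python sketch below trains a toy text classifier on a handful of hypothetical labeled messages. The sample data, labels, and scikit-learn model are illustrative assumptions only, not details of the Zero Harm in Comms system.

```python
# Minimal sketch of a chat-moderation classifier trained on labeled messages.
# The dataset, labels, and model choice are illustrative assumptions, not
# Ubisoft's or Riot Games' actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled chat data: 1 = disruptive, 0 = acceptable.
messages = [
    "gg everyone, well played",
    "nice shot, let's push mid",
    "uninstall the game, you are worthless",
    "go back to where you came from",
]
labels = [0, 0, 1, 1]

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Score a new message before it reaches other players (preemptive moderation).
incoming = "you are worthless, uninstall"
probability_disruptive = model.predict_proba([incoming])[0][1]
if probability_disruptive > 0.5:
    print(f"Flagged for review (score={probability_disruptive:.2f})")
```

In practice, a cross-industry database would supply far larger and more diverse training examples than this toy set - which is precisely the gap the partnership intends to close.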

Both game developers believe that collective action and knowledge sharing are the key to creating safe and meaningful player experiences, particularly in online video games.

Ubisoft believes that the research project is just the first step in a long fight against online gaming toxicity. The company intends to use it to explore how to lay the technological foundations for future industry collaboration.


The project would also help create the framework needed to guarantee the ethics and privacy of the shared database.

Ubisoft also expects the database to succeed because Riot Games' competitive titles and its own diversified portfolio cover virtually every type of player and in-game behavior needed to train the companies' respective AI systems.

Meanwhile, Riot Games plans to invest in AI systems that automatically detect harmful behavior and encourage more positive communities across all its games, per its statement.

Why Create the Database at All?

The two developers' reason for creating Zero Harm in Comms is well-founded. A previous study published in the journal "Computers in Human Behavior" revealed that toxic behavior is contagious among gamers and that exposure to toxicity in previous games increases the likelihood that a player will commit toxic acts in future games, per Intenta Digital.

These toxic acts, according to Wired, include sexual harassment, hate speech, spamming, and other toxic gamer behavior such as publishing another player's private information, making inflammatory statements to provoke negative reactions, and using the game in unintended ways to harass others.

To prevent toxic gamers from committing such acts, a Protocol article citing Unity said that it would take improved automated moderation to fully address the gaming community's toxicity problem - something Ubisoft and Riot Games' research project aims to deliver.

Felix Thé, a vice president of product management at Unity, said in an interview with Protocol that toxicity in online games and communities has an especially acute effect on female gamers, who often fall prey to sexual harassment while playing online and tend to stop playing a game once exposed to such toxicity.

