Today I Discovered Intel’s Artificial Intelligence Sliders That Filter Online Gambling Abuse

Last month, during its virtual presentation at GDC, Intel announced Bleep, a new AI-powered tool that it hopes will reduce the amount of toxicity gamers have to endure in voice chat. According to Intel, the app “uses AI to detect and redact audio based on user preferences.” The filter works on incoming audio, acting as an additional user-controlled moderation layer on top of whatever a platform or service already offers.

It’s a noble effort, but there’s something terribly funny about Bleep’s interface, which lists in detail all of the different categories of abuse people might encounter online, coupled with sliders to control the amount of abuse users want to hear. The categories range from “Aggression” to “LGBTQ+ Hate”, “Misogyny”, “Racism and Xenophobia” and “White Nationalism”. There’s even a toggle for the N-word. Bleep’s page says it hasn’t entered public beta yet, so all of this is subject to change.

Filters include “Aggression”, “Misogyny” …
Credit: Intel

… And a toggle for the “N-word”.
Image: Intel

For the majority of these categories, Bleep seems to give users a choice: would you like none, some, most, or all of this offensive language to be filtered out? Like choosing from a buffet of toxic internet sludge, Intel’s interface gives gamers the option of letting a small dose of aggression or name-calling through into their online games.
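To make the idea concrete, here is a minimal sketch of how such per-category sliders might map to filtering decisions. The category names and the none/some/most/all levels come from Bleep’s interface as described above; the severity scoring and threshold logic are purely illustrative assumptions, not Intel’s actual implementation.

```python
# Hypothetical sketch of Bleep-style per-category filtering.
# The none/some/most/all levels mirror the sliders in Bleep's UI;
# the severity-threshold logic below is an assumption for illustration.

FILTER_LEVELS = {"none": 0.0, "some": 0.33, "most": 0.66, "all": 1.0}

def should_redact(category: str, severity: float, settings: dict) -> bool:
    """Redact a detected utterance if its severity meets the user's threshold.

    `severity` is an assumed 0.0-1.0 score from some speech classifier.
    Categories the user hasn't configured default to "none" (no filtering).
    """
    level = FILTER_LEVELS[settings.get(category, "none")]
    if level == 0.0:
        return False  # user chose to hear everything in this category
    # At "all", anything detected is redacted; lower settings only
    # redact utterances the classifier scores as sufficiently severe.
    return severity >= (1.0 - level)

settings = {"Aggression": "some", "Misogyny": "all"}
should_redact("Misogyny", 0.1, settings)    # True: "all" redacts any detection
should_redact("Aggression", 0.5, settings)  # False: below the "some" threshold
```

The interesting design question such a scheme raises is exactly the one the screenshots provoke: a continuous dial implies the system can rank slurs by severity, which is a much harder classification problem than a simple on/off toggle.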

Bleep has been in the works for a few years now – PCMag notes that Intel spoke about this initiative at GDC 2019 – and Intel is working with AI moderation specialists Spirit AI on the software. But moderating online spaces with artificial intelligence is no small feat, as platforms like Facebook and YouTube have shown. While automated systems can identify outright offensive words, they often miss the context and nuance of certain insults and threats. Online toxicity comes in many ever-changing forms that can be difficult to spot even for the most advanced AI moderation systems.

“While we recognize that solutions like Bleep don’t erase the problem, we believe it’s a step in the right direction, giving gamers a tool to control their experience,” Intel’s Roger Chandler said during his GDC demonstration. Intel hopes to release Bleep later this year, and adds that the technology relies on its hardware-accelerated AI speech detection, which suggests the software may require Intel hardware to run.
