Koo co-founder Mr. Mayank Bidawatka stated that the platform always strives to stay ahead of the content moderation curve. He reaffirmed the brand's commitment to making social media a friendly space and to promoting only healthy discussions, and claimed that Koo has the most proactive content moderation in the world.
These statements accompanied the announcement of Koo's new content moderation algorithms, which the company claims can block all types of harmful content posted on the platform within about 5 seconds. Let us take a closer look at how these algorithms work.
Action of the New Algorithm on Nudity
Koo's in-house "No Nudity Algorithm" proactively detects and blocks any attempt by a user to post a picture or video containing nudity or sexual content. Detection and blocking take less than 5 seconds.
Users who post sexually explicit material immediately lose the ability to upload content, be discovered by other users, appear in trending posts, and interact with other users. A minimal sketch of such an upload hook follows.
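Koo has not published how this pipeline works internally, so the snippet below is only a minimal sketch of what such an upload check could look like, assuming a classifier that returns a nudity probability and a fixed cutoff. The nudity_score stub, the threshold value, and the restrict_account helper are hypothetical names for illustration, not Koo's actual API.

```python
import time
from dataclasses import dataclass

# Hypothetical threshold: Koo has not disclosed how its "No Nudity Algorithm"
# scores media, so this classifier and cutoff are illustrative stubs.
NUDITY_THRESHOLD = 0.8

@dataclass
class Upload:
    user_id: str
    media_bytes: bytes

def nudity_score(media: bytes) -> float:
    """Stub for an image/video nudity classifier (assumed, not Koo's model)."""
    return 0.0  # a real model would return a probability in [0, 1]

def restrict_account(user_id: str) -> None:
    """Apply the sanctions described above: no uploads, discovery, trending, or interaction."""
    print(f"account {user_id}: upload/discovery/trending/interaction disabled")

def moderate_upload(upload: Upload) -> bool:
    """Return True if the upload is allowed, False if it is blocked."""
    started = time.monotonic()
    blocked = nudity_score(upload.media_bytes) >= NUDITY_THRESHOLD
    if blocked:
        restrict_account(upload.user_id)
    print(f"decision in {time.monotonic() - started:.3f}s (claimed target: under 5s)")
    return not blocked

if __name__ == "__main__":
    print(moderate_upload(Upload(user_id="u123", media_bytes=b"...")))
```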
New Algorithm Disabling Toxic Comments, Hate Speech and Violent Content
Koo's new algorithm actively seeks out and removes toxic comments and hate speech so that they are invisible to the general audience, all in less than 10 seconds.
Content that contains excessive blood, gore, or violent acts is shown to users behind a warning, as sketched below.
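Again, the internals are not public. The following sketch assumes two illustrative classifiers, one for toxic or hateful text and one for graphic violence in media, and shows the two different outcomes described above (hiding versus warning). The score functions, thresholds, and labels are hypothetical.

```python
# Thresholds and score functions are illustrative stubs, not Koo's interface.
TOXICITY_THRESHOLD = 0.7
GORE_THRESHOLD = 0.6

def toxicity_score(text: str) -> float:
    """Stub for a toxic-comment / hate-speech text classifier."""
    return 0.0

def gore_score(media: bytes) -> float:
    """Stub for a blood/gore/violence classifier for images and video."""
    return 0.0

def triage_comment(text: str) -> str:
    # Toxic comments and hate speech are hidden from the general audience.
    return "hidden" if toxicity_score(text) >= TOXICITY_THRESHOLD else "visible"

def triage_media(media: bytes) -> str:
    # Graphic violence is not removed outright; it is shown behind a warning.
    return "show_with_warning" if gore_score(media) >= GORE_THRESHOLD else "show"

print(triage_comment("hello there"), triage_media(b"..."))
```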
Effectiveness of the New Algorithm on "Impersonation"
To identify and block fake profiles, Koo's in-house "MisRep Algorithm" continuously monitors the platform for profiles that use the content, photographs, videos, or descriptions of well-known individuals. Once such a profile is identified, pictures and videos of famous people are removed from it immediately, and the account is flagged for ongoing monitoring of further misconduct.
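One plausible way to approximate this, purely as an illustration, is to compare fingerprints of a profile's media against a precomputed index of verified public figures' photos. The fingerprint index, the Profile structure, and the flagging logic below are assumptions made for the sketch, not Koo's implementation.

```python
from dataclasses import dataclass

# Hypothetical index of fingerprints for verified public figures' photos.
KNOWN_FIGURE_FINGERPRINTS = {"fp_celebrity_1", "fp_celebrity_2"}

@dataclass
class Profile:
    handle: str
    photo_fingerprints: set
    flagged: bool = False

def scan_profile(profile: Profile) -> None:
    """Remove reused celebrity media and flag the account for further monitoring."""
    reused = profile.photo_fingerprints & KNOWN_FIGURE_FINGERPRINTS
    if reused:
        profile.photo_fingerprints -= reused   # take down the copied pictures/videos
        profile.flagged = True                 # keep watching the account
        print(f"@{profile.handle}: removed {len(reused)} media item(s), account flagged")

scan_profile(Profile(handle="not_the_real_star",
                     photo_fingerprints={"fp_celebrity_1", "fp_own_photo"}))
```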
New Algorithm against Misinformation and Disinformation
To identify and label misinformation and disinformation, Koo's in-house "Misinfo & Disinfo Algorithm" scans all reported and viral posts in real time against public and private repositories of known fake news. This helps reduce the spread of viral misinformation on the platform.
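As a rough illustration of how such labelling could work, the sketch below checks reported or viral posts against a list of known fake-news claims and attaches a label instead of deleting the post. The claim list, the virality threshold, and the matching rule are all hypothetical placeholders.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical list of known fake-news claims, standing in for the public and
# private sources mentioned above.
KNOWN_FAKE_CLAIMS = {
    "miracle cure announced",
    "election date moved secretly",
}

@dataclass
class Post:
    post_id: str
    text: str
    reported: bool = False
    view_count: int = 0

def needs_scan(post: Post, viral_threshold: int = 10_000) -> bool:
    # Only reported or viral posts are scanned, per the description above.
    return post.reported or post.view_count >= viral_threshold

def label_post(post: Post) -> Optional[str]:
    if needs_scan(post) and any(claim in post.text.lower() for claim in KNOWN_FAKE_CLAIMS):
        return "misinformation"  # the post is labelled rather than deleted
    return None

print(label_post(Post("p1", "Miracle cure announced by an unnamed lab", reported=True)))
```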
Koo, the social media microblogging platform, claims that its in-house content moderation algorithms are effective enough to remove or block unwanted and illegitimate content within 5-10 seconds. The system is still on the verge of its beta phase, but if it performs as claimed, it could be a revolutionary initiative in AI-based content moderation.