The most popular YouTubers face a particular spam problem: one common tactic is for spammers to impersonate them, i.e., use their names and profile images to trick users into clicking a link in a comment they post.
Users who click these links, unaware that they were not posted by the well-known YouTubers themselves, are taken to websites where numerous scams await – from malware downloads to fake pages that harvest personal data, which fraudsters can then exploit further.
YouTube uses a variety of tools to combat such comments, including human moderators as well as artificial intelligence and machine learning, so most harmful content is removed automatically. In the last quarter of last year alone, YouTube removed more than 950 million comments for violating its policies on spam and fraud.
However, while most such comments are removed, some still evade automatic detection, and lately their number has been growing.
For this reason, YouTube has begun testing new experimental content moderation tools that will hold more potentially inappropriate comments for review before they are published. The company says it is monitoring how spam tactics change and evolve and will continue to adapt its systems to prevent such comments from being published.