Following Twitter's acquisition of a security startup called Smyte, the company has now introduced a set of changes aimed at taking the fight to bots and other malicious accounts. Everything will reportedly start with a new verification system that only affects newly created accounts on the public social media platform. That's a logical place to begin making headway, since one persistent problem for companies is how easily bad actors can create a new account after their old one gets banned or blocked. On that front, the company is working with its security partners to protect users who need to remain anonymous for a wide variety of reasons, while also ensuring that Twitter stays on top of things. Specifically, anyone signing up for a new account will now need to confirm their identity with an email address or phone number.
That first measure won't necessarily stop bots from appearing, since it's not at all difficult to create a new email address when setting a bot up. However, because email providers could implement similar measures and obtaining a new address takes time, it should slow things down. It's also not the only action Twitter is taking. The social media company is continuing development of a new machine learning tool that will be used to sniff out bogus or malicious accounts. This proactive approach is meant to augment the current system of user-submitted reports and is already showing some promise. In fact, throughout May, the system managed to flag around 9.9 million potential spam or automated accounts and reduce the number of human-generated spam reports by around 32 percent over a two-month period. Furthermore, many new spam accounts are being blocked during or shortly after signup by security checks, which the company says halt around 50,000 new account creations per day. Some of those are new accounts that appear to automatically follow a large number of celebrity or influencer accounts immediately after gaining access.
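To illustrate the kind of signal described above, here is a minimal sketch of a heuristic that flags accounts mass-following high-profile users immediately after signup. The function name, thresholds, and data shapes are all assumptions for illustration; Twitter's actual detection system is a machine learning model whose features are not public.

```python
from datetime import datetime, timedelta

# Illustrative thresholds only -- not Twitter's actual values.
FOLLOW_BURST_WINDOW = timedelta(minutes=10)  # how soon after signup
FOLLOW_BURST_LIMIT = 50                      # follows within that window
HIGH_PROFILE_RATIO = 0.8                     # share aimed at celebrities

def looks_like_follow_spam(created_at, follows, high_profile_ids):
    """Flag accounts that mass-follow high-profile users right after signup.

    created_at: datetime the account was created.
    follows: list of (followed_id, followed_at) tuples.
    high_profile_ids: set of celebrity/influencer account ids (assumed input).
    """
    # Follows that happened within the burst window after signup.
    early = [fid for fid, ts in follows
             if ts - created_at <= FOLLOW_BURST_WINDOW]
    if len(early) < FOLLOW_BURST_LIMIT:
        return False
    # Flag only if most of the burst targets high-profile accounts.
    high_profile = sum(1 for fid in early if fid in high_profile_ids)
    return high_profile / len(early) >= HIGH_PROFILE_RATIO
```

A real system would combine many such signals as model features rather than applying any single rule directly, which is presumably why Twitter frames this as machine learning rather than fixed filters.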
Finally, when Twitter notices an account that is behaving suspiciously but isn't clearly malicious or bot-run, the 'visibility' of that account will be reduced. That likely means its tweets and comments won't be featured and won't rank high in searches performed on the platform. Beyond that, a warning will be put in place and new users won't be allowed to follow the account. Presumably, those repercussions escalate and follow warnings sent by the company to the account holder, in case the account was flagged incorrectly. The changes should be present on both the mobile and web versions of the platform.