Making the Internet safe

However, with the rapid advancement and adoption of new platforms, new and well-publicized risks to children and society have also emerged. Online platforms must prioritize user safety as a core feature of their development, but to date there is clearly significant progress still to be made.

Our research illustrates the scale of this challenge: 74% of parents say their children have experienced harm online, from viewing illicit content to unwanted contact with strangers. With only around one in ten children (11%) saying they would tell their parents about seeing harmful content online, the true scale of online risk is likely even greater.

Parents are increasingly demanding action and accountability from tech giants: the vast majority (84%) support tougher regulations and laws to protect children online, even if that could limit their access to the internet. As is often the case, lawmakers around the world are racing to keep up with the changing landscape of the Internet, led by an increasingly wary and distrustful public.

Times are changing

But change is coming. In the United States, legislative battles rage at the state level over the issue of age verification. Federal legislation, in the form of KOSA and COPPA 2.0, is poised to set a new agenda for online safety nationwide. Meanwhile, the EU’s Digital Services Act is already being actively enforced, and the UK’s Online Safety Act is expected to come into force from early next year. The next few years will see legislative kinks resolved and regulators begin to exercise their enforcement powers as they work to hold platforms accountable for perceived failures.

The many new laws emerging around the globe are ambitious and attempt to cover as much online safety ground as possible. For online platforms, the most important provisions concern age assurance – preventing underage users from accessing inappropriate content, products and services – and content moderation – removing harmful and illegal content from platforms.

Platforms have certainly not been idle on this front, but much more can and should be done before regulators start “forcing their hand.” The key to achieving this is to move the industry from a reactive approach – “fix it when it’s broken” – to a preventative approach. This means not waiting for content to be flagged and reported by users, but going upstream and addressing content and harm at the source.

The rapid adoption of generative AI has fueled deepfakes, explicit content, child sexual abuse material (CSAM), and other content-related issues. Newly introduced tools that allow no-holds-barred image generation have exponentially increased the already enormous firehose of content that platforms must deal with every day.

While regulation takes time, the regulatory environment is driving tremendous interest and investment in the technologies that will support the safer Internet of tomorrow. In fact, the global market for content moderation solutions is expected to grow to $17.45 billion by 2027. At the heart of this rapidly evolving industry are artificial intelligence (AI) and machine learning (ML), which are driving the development of online safety tools.

AI-powered online safety

Innovation, driven by necessity, creativity and the regulatory agenda, is ushering in a new era of online safety technologies from a range of different players. At the heart of these technologies are increasingly advanced machine learning algorithms. With the right data, these algorithms can be trained to distinguish harmful content from safe content, manage access at scale, and augment human capabilities. The combination of human and machine in online safety is crucial, given that the scale of the challenge is now too big for humans to tackle alone.

With concerns about privacy and anonymity in mind, innovation in age assurance largely focuses on behavioral analytics and user biometrics.

For example, highly accurate and privacy-preserving methods currently on offer include age estimation from an email address alone, facial age estimation, and other promising biometric innovations such as voice-based and even palm-based age estimation.
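Whichever signal is used, an age-assurance system ultimately has to turn an imprecise age estimate into an access decision. The sketch below illustrates one common pattern: applying a safety buffer around the required age so that borderline estimates are escalated to stronger verification rather than decided outright. The estimator itself, the threshold, and the buffer are illustrative assumptions, not details from any specific vendor.

```python
def age_gate(estimated_age: float, required_age: int = 18,
             buffer_years: float = 2.0) -> str:
    """Turn an imprecise age estimate into an access decision.

    `estimated_age` is assumed to come from an upstream estimator
    (facial, voice, email-based, etc.); `buffer_years` reflects that
    estimator's known error margin.
    """
    if estimated_age >= required_age + buffer_years:
        return "allow"      # comfortably above the threshold
    if estimated_age < required_age - buffer_years:
        return "deny"       # comfortably below the threshold
    # Borderline estimate: fall back to a stronger verification method
    # (e.g. a document check) instead of guessing.
    return "escalate"
```

The buffer is what keeps a low-friction method honest: users far from the threshold pass or fail instantly, while only the uncertain middle band incurs the friction of a stronger check.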

The recent Children’s Code guidance from the UK’s Information Commissioner’s Office (ICO) is driving innovation in this space, focusing on “low friction” and “high accessibility” methods. The intent is to make the Internet safer without hindering legitimate users’ access to platforms.

Beyond age assurance, content moderation technology will also see huge and growing demand in the coming years. Thanks to AI, advanced algorithms can now sift through mountains of content to identify and flag material that poses a threat or is illegal. Beyond initial training, an ongoing human-machine feedback loop is essential for AI to become increasingly precise in understanding what is and is not harmful.

However, it is important to keep in mind that AI-based content moderation should go hand in hand with human moderation, and that humans should always make the final decision on flagged content, ensuring the solution is as accurate and scalable as possible.
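The human-in-the-loop workflow described above can be sketched in a few lines: a model score routes content into a review queue, a human moderator always makes the final call, and those decisions can later be fed back as training labels. The class name, threshold, and scoring model are all hypothetical placeholders for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ModerationQueue:
    """Hypothetical sketch: AI flags, humans decide."""
    flag_threshold: float = 0.7                      # illustrative cut-off
    review_queue: list = field(default_factory=list)
    decisions: dict = field(default_factory=dict)

    def triage(self, content_id: str, score: float) -> str:
        """Route content using a score from an (assumed) ML classifier.

        Likely-harmful items are flagged for human review, never
        removed automatically.
        """
        if score >= self.flag_threshold:
            self.review_queue.append(content_id)
            return "flagged"
        return "published"

    def human_decision(self, content_id: str, remove: bool) -> None:
        """A human moderator makes the final call on flagged content.

        The recorded decision can double as a training label, closing
        the feedback loop that makes the model more precise over time.
        """
        self.decisions[content_id] = "removed" if remove else "kept"
```

The design choice worth noting is that `triage` never deletes anything: the model only prioritizes the queue, which is what keeps the approach both scalable (machines handle the volume) and accountable (humans own the outcome).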

As AI is central to many innovations in online security, high-quality training data has become more important than ever. However, the problem is that this data can be rare and often difficult to access, particularly when it involves children’s personally identifiable information (PII). This challenge requires industry-wide action and collaboration to make online security solutions as precise and effective as possible.

Looking to the future

As governments around the world focus on protecting our future generations, there is a collective optimism and push for all stakeholders – tech giants, politicians, security vendors and child safety advocates – to actively contribute to the creation of a safe online environment.

With a clear regulatory outlook, the stage is set for continued innovation and investment in online safety technologies. Despite the challenges ahead, technology companies must continue to invest in online safety. It’s not just about complying with the law, but about embracing the innovation at hand to move from a reactive to a proactive posture on online safety. The path forward is clear: embrace change and collectively fight for the safer Internet that society deserves.

Ryan Shaw is the founder and CEO of Check mine.
