June 2, 2021 (updated March 10, 2022)
New consortium launches to transform data access and help safety tech companies build tools to identify and remove harmful content online.
The Online Safety Data Initiative launches today, bringing together expertise from suppliers Faculty, OSTIA and PUBLIC, and a range of government, academic, and civil society stakeholders. The initiative will drive innovation in the safety tech sector by providing companies with access to the vital data needed to develop world-class safety tools to identify and remove harmful content online.
Research published by multiple academic and civil society groups shows that the scale of harmful and illegal online activity and content increased significantly in 2020, as a result of the social isolation caused by the global COVID-19 pandemic. Online harms, such as terrorism, child sexual exploitation and abuse, hate speech, disinformation and advocacy of self-harm and suicide, often target vulnerable individuals and threaten both personal and national security.
During the Government’s consultation on the Online Harms White Paper, stakeholders within the UK safety tech sector identified access to the required data as the single biggest barrier to developing innovative solutions to address online harms. This project will examine how unlocking access to relevant data can help to drive innovation and competition in safety technology.
Running over 15 months, the project will test methodologies for improving access to datasets that can be used for training Artificial Intelligence (AI) solutions to remove harmful and illegal content and networks. It will seek to understand why the decentralised hosting of online harms data prevents companies from developing technology to tackle the problem, and then identify and prototype some of the most promising solutions.
The consortium holds itself to high security standards and places great emphasis on confidentiality, integrity and transparency. As part of this project, it will be working closely with the Centre for Data Ethics and Innovation (CDEI) to establish a cross-sector independent advisory group to provide additional insight, challenge and transparency.
Drawing on the expertise of its team of over 50 PhDs and experience from working with over 230 customers, Faculty will be responsible for leading the development of novel data science prototype projects to test new and transformative approaches to making online harms data safe and secure to access for safety tech companies.
PUBLIC will be responsible for leading on the discovery phase of the project, during which it will work with stakeholders across the Safety Tech sector to understand data needs and opportunities.
OSTIA will help to engage the collective knowledge, experience and capability of its members, and provide perspective from Safety Tech companies that are at the forefront of the fight against online harms.
In total, the project will be delivered and supported by more than twenty organisations, including leading social media platforms, safety tech SMEs and NGOs.
Marc Warner, CEO & Co-Founder, Faculty said: “Artificial intelligence is an incredibly powerful tool with huge potential to make large and fast-moving challenges such as online harms more manageable. To build AI that performs effectively and safely in the real world though, you will always need access to real world data. We’re delighted to partner on this initiative that helps provide access to data for good; for those that need it to be able to build tools which ultimately make the internet a safer place for everybody.”
Ian Stevenson, Chair, OSTIA, said: “The UK Safety Tech sector is already well placed to help build a safer internet, and those of us working in the sector have all encountered difficulties in accessing the data needed for research, development, training and testing. This co-ordinated response has the potential to fuel new solutions to crucial problems in online safety. While the internet and the online safety solutions this project will drive are digital, the effects of online harms are all too human. Ultimately, this project is a path to preventing abuse, improving wellbeing, and even saving lives, and we’re delighted to be partnering to deliver it.”
Andy Richardson, CTO, PUBLIC said: “The Safety Tech sector includes some of the UK’s fastest growing, most innovative companies, using cutting-edge technology to tackle some of the most urgent challenges society faces today; this project is a unique opportunity to break down the barriers that might stop these companies from building safer online spaces for everyone. We’re delighted to partner with DCMS, Faculty and OSTIA on this and look forward to engaging further with all those who care about this issue.”
Andy Burrows, Head of Child Safety Online Policy at the NSPCC, said: “This project will help to overcome the barriers that many safety tech firms face when developing new AI products. If companies can access datasets more easily to test their products, the result will be a range of solutions that make children safer and that enable platforms to meet their Duty of Care to users.
“The NSPCC is therefore hugely supportive of this work, and we look forward to the project getting underway.”
Mary Aiken, Professor of Forensic Cyberpsychology, said: “The UK is leading worldwide when it comes to recognising online harms and supporting online safety technology or ‘SafetyTech’ solutions. I very much welcome the opportunity to work with DCMS, Faculty and OSTIA on the Online Safety Data Initiative, specifically regarding innovations to tackle technology-facilitated harms such as cyberbullying, harassment, self-harm, child sexual exploitation and abuse, along with hate speech and mis- or disinformation. Trust and transparency are critical in order to realise the value of data, transform its use and drive innovation. Government, civil society and individuals should work together to ensure that data-driven technologies are a force for good, a force that could help all of us to shape and create a safer and more secure cyber society.”
Lydia Grace, Online Harms Programme Manager at Samaritans said: “We know harmful content relating to self-harm and suicide is far too easily accessible, so it is critical that sectors come together to reduce access to potentially harmful content. It is essential that data is available to inform AI that can effectively detect and respond to harmful content in order to protect vulnerable users. We are excited to be supporting this initiative to improve access to data that can inform the development of effective tools to help users access the benefits of the online environment, whilst being protected from harm.”
NOTES TO EDITORS
The contract for the suppliers involved in the consortium was awarded as part of a competitive procurement process that ran from November 10 to December 7, 2020.
During the project, we’ll be working openly and transparently as much as possible. For example, we’ll be following principles of open standards and open-source as set out in the Government Technology Code of Practice, as well as the Open Government Playbook.
Faculty is an applied AI company that helps organisations who have the scale, data, and foresight to adopt AI into their business. We’re helping make AI real across society by providing a unique combination of strategy, software, and skills to our customers: everything needed to successfully create value from AI. Founder-led and with over 50 PhDs, we are a team of specialists that has worked with over 230 organisations to make AI real.
PUBLIC brings together experience from the public sector, technology and finance to help solve public problems through technology. The team is led by Daniel Korski, ex-deputy head of the No. 10 Policy Unit, and venture investor Alexander de Carvalho. The GovStart programme, the technology solutions we build in-house, and the research & transformation projects we conduct on behalf of public bodies have earned PUBLIC a reputation as a GovTech pioneer and an expert in the role of new technologies in transforming Europe’s public sector.
OSTIA represents the fast-growing community of companies working in the creation of new technologies that improve Online Safety. Founded in the UK in 2020, it is the first organisation of its kind and has a growing membership, including international associate members. The association aims to provide a voice of hope by informing policy makers, online platform providers and the general public about online safety technologies; to create a networking forum for companies contributing towards the goal of online safety; and to create a collective influence on policy, regulation and broader support for the sector.
To achieve its aims and objectives, OSTIA works to bring together those seeking to drive improvements in Online Safety (government, charities), those who create the online world (social media, messaging and gaming companies) and the companies creating technologies to improve safety.
OSTIA’s launch has been welcomed and supported by the National Crime Agency, GCHQ, the Home Office, the NSPCC, the Department for Digital, Culture, Media and Sport and more.