At Reality Defender, we believe that artificial intelligence can be a net benefit to the world. We built our platform on this premise: to use AI to catch weaponized AI.
Yet AI can make a positive contribution to society only if the technology powering advanced AI systems is developed, deployed, used, and properly regulated under a set of tenets that can guide its development far into the future.
Our team operates by a series of practical and philosophical guidelines that we believe should apply not only to our technology, but to anyone working with advanced artificial intelligence systems in any capacity.
We hold these tenets as truth, and believe that by following them, we as an industry and as a people can protect the world from the more harmful and sinister possibilities of artificial intelligence, all while allowing humanity to benefit from its vast potential.
AI should not put people in harm’s way.
The fictional superintelligence causing the downfall of humanity in sci-fi films need not exist for AI to materially endanger humans. Such risks already exist in more plausible real-world applications, including those deployed by entities with no proven safety mechanisms or lawful data-gathering methods. Most importantly, AI should never be abused to physically harm any individual.
AI should not outright replace human beings.
This tenet is simple: humans should work for the benefit of humanity, and so should AI. The goal should not be to replace human beings economically or existentially. AI should be applied only to make human lives easier, not to make humans redundant.
AI systems should be transparent in their operations.
One of the biggest concerns about building AI technology for the benefit of all is the reinforcement of bias from the datasets used in AI model training. Developers should be honest in informing the public about the datasets they use to train their systems, as well as the flaws they discover as their systems learn and improve. The public should have a clear idea of how the AI systems they interact with function, what information those systems use to operate, and any flaws and risks stemming from their programming and data sources. (Reality Defender will release its own bias benchmarks in the coming months.)
When an AI system causes harm, it should be possible to determine who is responsible.
When powerful tools can be used to cause serious damage, the wielders and makers of those tools should face full accountability to society at large.
AI systems should be continuously monitored and updated to ensure they remain safe and effective over time.
Given that AI is poised to change the world as we know it, it is imperative that its development be a slow, deliberate process involving an abundance of caution and layers of oversight. Leading figures responsible for the stunning advances in AI continue to warn us about the need to slow down as learning systems grow more sophisticated by the day.
AI development should be sustainable and not deplete natural resources or harm the environment.
With the massive economic possibilities of AI, it is easy to focus on profits and lose track of our larger responsibility to the planet and all who live on it. Those who bear the extraordinary responsibility of developing this new technology must build their systems with a minimal environmental footprint. AI should be used to help us protect the Earth, not harm it further.
The development of AI should be a collaborative process that includes a diverse range of stakeholders, including those impacted by AI technologies.
The development of AI should not rest in the hands of only a select few. Since AI is bound to transform global society at large, its development should reflect the population it is bound to impact. Government and civilian committees should wield power over the regulation of major AI and AI-related technologies. Given the overwhelming bias present in machine learning datasets, it is also imperative that AI not be trained to benefit one group of people over another.
Most importantly, AI should be developed with the goal of benefiting humanity at large.
AI should benefit all people, not a self-selected few. AI should also not be used to usher in some kind of imagined “post-human” era. With the help of AI, humanity should thrive collectively, not vanish entirely.
These guidelines serve as our North Star, and are especially crucial in the current moment, when the industry-wide focus on AI safety appears to be waning. By sharing these guidelines beyond the walls of the Reality Defender office, we hope others working in our ever-growing field will find that they resonate and adopt them as their own, building stronger support for AI safety and cementing its status as far more than an afterthought.