Rebellion Defense builds software for defense and national security missions to provide clarity and context when they matter the most.
Discussions of AI ethics often focus on evaluating the moral impact of AI technologies on broad, complex problems without clearly defining AI. At Rebellion, we define AI as the set of incumbent and emergent technologies that automate and adapt the execution of tasks typically performed by humans.
At Rebellion, we understand artificial intelligence to be one tool among many, effective at specific, narrow tasks. We also believe it is dangerous to speculate on the morality of an AI model, because that premise assumes the model is capable of bearing moral responsibility.
The impact of AI technology may only be measured within the context of the larger systems in which it operates, and in conjunction with the organizations and individuals who train, test, and deploy this technology.
These principles codify why we build this technology, and outline the technical precursors and narrow scope that we believe to be foundational to the ethical application of AI.
Protect democracy, humanitarian values, and the rule of law
In the face of novel and accelerating 21st century threats, we feel a duty as technologists to do our part in defending our nation, its allies, and our democratic freedoms. We believe we can deploy AI-backed software in defense that will give us the technological edge over our adversaries. We act in accordance with our democratic values and avoid harmful effects on society at large.
We promote the responsible use of our products within the bounds of the law. We will press our customers to maintain the same high standards of lawful and ethical conduct as we strive for inside Rebellion.
Uphold high standards of scientific and technological excellence
Rebellion is committed to scientific excellence as we advance the development, testing, and deployment of artificial intelligence and other technologies. We believe AI should be interpretable and explainable to its human users. Because AI models change quickly, we continually seek out emerging best practices in artificial intelligence and throughout the software and defense industries.
We strive for scientific rigor such that all scientific investigations, research, and practices are conducted with the highest level of precision and accuracy. This includes strict adherence to protocols, accurate data collection and analysis, and careful interpretation of results.
Our team communicates research findings and methodologies clearly and openly in a manner that allows for the replication of results by independent researchers. Black box systems are antithetical to these standards. We build our technology to be intuitive and explainable in simple terms.
In addition, we ensure the safety and security of the research, development, and production environments.
Practice holism and do not reduce our ethical focus to components
We provide integrated technologies to defend and support democracy. We do not fixate only on algorithms and data in a silo, but rather take a holistic view of the potential impact of AI on outcomes to avoid unintended consequences in the real world. We aim to ensure that the systems we develop can, as a whole, manage data quality while upholding governance around software and models. We routinely employ statistical analyses to search for unwarranted bias in data, models, and outcomes.
Pursue deterrence, not escalation
We believe that thoughtfully designed technology can de-escalate conflict rather than escalate it. Our products seek to give our users insight into how to prevent or defuse conflict. We consume, analyze, and summarize data beyond human capacity so that humans have more time and signal to deliberate before making critical decisions.
Design for human control, accountability, and intended use
Humans should have ultimate control of our technology, and we strive to prevent unintended use of our products. Our user experience enforces accountability, responsible use, and transparency of consequences. We build protections into our products to detect and avoid unintended system behaviors. We achieve this through modern software engineering and rigorous testing of our entire systems, including their constituent data and AI components, both in isolation and in concert.
Additionally, we rely on ongoing user research to help ensure that our products function as expected and can be appropriately disabled when necessary.
Accountability is enforced by providing customers with insight into the provenance of data sources, methodologies, and design processes in easily understood and transparent language. Effective governance — of data, models, and software — is foundational to the ethical and accountable deployment of AI.
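The provenance-tracking idea above can be sketched in code. The following is a minimal, illustrative example of one way to attach provenance metadata to a model artifact so downstream users can trace its data sources, methodology, and reviewers; all class, field, and value names here are hypothetical, not Rebellion's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: a frozen record capturing where a model artifact's
# data came from, how it was built, and who signed off on it.
@dataclass(frozen=True)
class ProvenanceRecord:
    artifact_id: str               # identifier of the model or dataset
    data_sources: tuple[str, ...]  # where the training data came from
    methodology: str               # plain-language description of the build
    reviewed_by: tuple[str, ...]   # humans accountable for sign-off
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = ProvenanceRecord(
    artifact_id="threat-classifier-v3",
    data_sources=("sensor-feed-a", "open-source-reports"),
    methodology="Gradient-boosted trees trained on labeled incident reports.",
    reviewed_by=("ml-lead", "ethics-board"),
)
print(record.artifact_id)  # threat-classifier-v3
```

Making the record immutable (`frozen=True`) reflects the governance goal: once an artifact ships, its provenance should not be silently rewritten.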
Encode privacy into technology
We encode privacy protections and adhere to the principle of least privilege in our products, so that users only have access to data that they absolutely need to complete their specific task. We treat misuse and violations as product failure. Compliance with the applicable legal frameworks governing privacy is a basic tenet that guides our product development.
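The principle of least privilege described above can be illustrated with a small sketch: access is default-deny, and a user's role grants only the permissions needed for their task. The role and permission names are illustrative assumptions, not a real API.

```python
# Hypothetical role-to-permission mapping; each role holds only what its
# task requires.
ROLE_PERMISSIONS = {
    "analyst": {"read:incident-reports"},
    "auditor": {"read:incident-reports", "read:audit-logs"},
}

def can_access(role: str, permission: str) -> bool:
    # Default-deny: an unknown role or unlisted permission grants nothing.
    return permission in ROLE_PERMISSIONS.get(role, set())

print(can_access("analyst", "read:incident-reports"))  # True
print(can_access("analyst", "read:audit-logs"))        # False: not needed
```

The default-deny pattern means that a misconfigured or missing role fails closed, which is the behavior least-privilege systems aim for.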
We are fully determined to combat all types of reducible bias in data collection, derivation, and analysis. Our teams are trained to identify and challenge biases in our own decision making and in the data we use to train and test our models. All datasets are evaluated for fairness, possible inclusion of sensitive data, and implicitly discriminatory collection methods.

We run statistical tests to detect imbalanced or skewed datasets and apply augmentation methods to counter these statistical biases. We pressure-test our decisions through peer review of model design, execution, and outcomes, including review of model training and performance metrics. Before a model graduates from one development stage to the next, it undergoes a review against required acceptance criteria. This review includes in-sample and out-of-sample testing to mitigate the risk of the model overfitting to the training data and of biased outcomes in production.

We subscribe to the principles laid out in the Department of Defense's AI ethical principles: that AI technologies should be responsible, equitable, traceable, reliable, and governable.
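A statistical check for dataset imbalance of the kind described above can be sketched simply: compute the ratio between the most and least common labels and gate stage graduation on it. The threshold here is an illustrative assumption, not an actual acceptance criterion.

```python
from collections import Counter

# Hypothetical pre-training check: flag label imbalance before a model
# is graduated to the next development stage.
def imbalance_ratio(labels: list[str]) -> float:
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

labels = ["benign"] * 90 + ["threat"] * 10
ratio = imbalance_ratio(labels)
print(ratio)  # 9.0: the majority class is 9x the minority class

# Illustrative gate: fail the stage review if the skew exceeds a threshold,
# signaling that the dataset should be rebalanced or augmented first.
MAX_RATIO = 4.0
needs_augmentation = ratio > MAX_RATIO
print(needs_augmentation)  # True
```

In practice such a check would be one of several (alongside fairness audits and out-of-sample evaluation), but even this simple ratio catches the gross skews that most often produce biased outcomes in production.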
Rebellion Defense is driven not only by our mission but also by a set of values that binds us together. We hire by them, operate through them, and uphold them in every decision we make for ourselves and for our customers.