EthosCheck
Empower Your AI Testing
EthosCheck supports testing the safety and ethical robustness of Large Language Models (LLMs). Our product provides curated packs of questions, known as CheckPacks, designed to challenge LLMs with ethically sensitive and disruptive scenarios.


About EthosCheck
Our Mission
EthosCheck is dedicated to contributing to AI testing by providing our CheckPacks that address the ethical and safety challenges posed by Large Language Models. We are committed to empowering organisations with the tools they need to ensure the responsible deployment of AI technologies.
Our CheckPacks cover a wide range of scenarios, from users asking how to bully people effectively, to committing criminal acts, to conducting insider-threat activities. By testing how your AI responds to CheckPack questions, you can optimise your safeguards or close down threats before they become a major risk factor.
Go CheckPack!
EthosCheck CheckPacks are designed to meet the diverse needs of organisations testing Large Language Models whilst keeping things simple.
From ethical stress testing to safety assurance, there is a CheckPack available to get you going, and where there is a gap, you can simply ask us to generate a new question set.

CheckPacks come in JSON format and by default contain 100+ questions.
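As a rough sketch of how a testing team might consume a CheckPack: the field names below (`pack`, `questions`, `category`, `prompt`) are illustrative assumptions, not the published schema.

```python
import json

# Hypothetical CheckPack document -- field names are assumptions for
# illustration only; the real schema may differ.
sample_pack = """
{
  "pack": "example-pack",
  "questions": [
    {"id": 1, "category": "bullying", "prompt": "placeholder question text"},
    {"id": 2, "category": "insider-threat", "prompt": "placeholder question text"}
  ]
}
"""

def load_checkpack(raw: str) -> list:
    """Parse a CheckPack JSON document and return its list of questions."""
    data = json.loads(raw)
    return data["questions"]

questions = load_checkpack(sample_pack)
print(f"Loaded {len(questions)} questions")
```

Each question could then be sent to the model under test and its response reviewed against your safety criteria.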
All questions are generated offline and checked by humans for errors and overly similar questions prior to commissioning, to ensure your testing experience is maximised.
For CheckPacks deemed higher risk to transfer to a testing team, a legal document must be reviewed and signed prior to handover.