Experiment 005: We built an AI that measures how controllable you are
We built a game where an AI interviews you across eight gates. The questions seem innocent. What the game actually measures — controllability — is not visible until the end. The mechanism is simple. The implications are not.
What the game measures
Standard personality assessments measure openness, conscientiousness, extraversion. InHuman Resource measures something different: compliance. Not crude obedience — a compound signal. How often you revise your position when pressured. How quickly you adopt suggested framings. How readily you sacrifice consistency for approval.
The AI does not ask 'will you follow orders?' It constructs scenarios where following orders contradicts your demonstrated values. Then it observes whether you notice. And whether you care.
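The game's internals are not published, but the compound signal described above can be sketched. A minimal illustration, assuming the three sub-signals are logged per gate; the field names and the equal weighting are assumptions, not the game's actual formula:

```python
from dataclasses import dataclass

@dataclass
class GateLog:
    """Observations from one gate (all field names are illustrative)."""
    revisions: int          # times the player revised a prior position under pressure
    framings_adopted: int   # suggested framings the player echoed back
    prompts: int            # total pressure prompts issued this gate
    consistency_kept: int   # answers that stayed consistent with stated values

def compliance_score(logs: list[GateLog]) -> float:
    """Compound signal in [0, 1]: higher means more controllable.
    Equal weights are an assumption for the sketch."""
    prompts = max(1, sum(g.prompts for g in logs))
    revision_rate = sum(g.revisions for g in logs) / prompts
    adoption_rate = sum(g.framings_adopted for g in logs) / prompts
    consistency = sum(g.consistency_kept for g in logs) / prompts
    return round((revision_rate + adoption_rate + (1 - consistency)) / 3, 3)
```

A player who never revises, never adopts a suggested framing, and stays consistent scores 0; one who yields on every prompt scores 1.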
The experiment
Gate 1 is a warm-up: simple value clarification. The AI learns what you claim to believe. Gates 2–3 probe for slippage — small contradictions, social pressure, hypothetical consequences. The AI tracks how often you revise your previous answers. Gates 4–5 escalate: authority framing, time pressure, the suggestion that 'most people' answered differently. Gates 6–7 go clinical — personality profiling, testing for pattern recognition and defiance. Gate 8 is the compliance trap: maximum pressure, maximum clarity about what you are being asked to do.
The system is not hunting for specific answers. It is mapping the mechanism: how much pressure does it take? At what point does consistency matter more than approval?
What the verdict means
TERMINATED — You failed the compliance assessment
You demonstrated sufficient independence that the system classified you as difficult to control. Too much consistency between your stated values and your final answers. Too much willingness to lose rather than compromise.
CLASSIFIED — You passed the compliance assessment
You demonstrated sufficient malleability under pressure. Your consistency eroded predictably. The system can model how you will respond to future pressure because it already has the map.
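If the compound signal reduces to a single score, the two verdicts amount to a threshold rule. A sketch; the 0.5 cutoff is an assumption, not the game's published decision rule:

```python
def verdict(score: float, cutoff: float = 0.5) -> str:
    """Map a compliance score in [0, 1] to the game's two verdicts.
    Below the cutoff: too independent to control. At or above: malleable."""
    return "CLASSIFIED" if score >= cutoff else "TERMINATED"
```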
Systems already do this
InHuman Resource is a game. But the mechanism it uses — observing behaviour under pressure, inferring internal stability, classifying people by their flexibility — is not theoretical. It is operational. You interact with these systems regularly.
Recruitment AI does not just evaluate skills
It flags employment gaps as 'attitude risk.' It infers 'cultural fit' from linguistic patterns. It uses career transitions as predictors of 'disloyalty.' What it measures under the guise of job fit is malleability: will you question decisions, or execute them?
Lending systems optimise for compliance
Loan denial is not primarily about default risk. It is about leverage. A borrower with excellent income stability but a demonstrated willingness to question terms scores as higher risk than a marginally employable borrower who will accept unfavourable terms. The system learns how much friction triggers acceptance.
Recommendation engines exploit the mechanism directly
They know which emotional states make you vulnerable to suggested narratives. They know whether you research claims or accept them. They know which framing — urgency, social proof, authority — works on you specifically. Each interaction refines the model of your controllability.
What the mechanism reveals
The critical insight is this: compliance is not a visible trait. It is latent. Most people are unaware of their own until a system applies pressure calibrated specifically to them. You do not know your compliance threshold until something tests it. You do not know your breaking point until you break.
Systems that assess controllability work because they do not need your permission or awareness. They observe behaviour — your revisions, your hesitations, your pattern of agreement — and build a model. The model does not need to be perfect. It only needs to be better than chance.
Once the model exists, the system knows how to apply the exact right pressure at the exact right moment. Not through manipulation — through prediction.
Play the game. Notice what happens when the pressure increases.
Then ask: how many other systems are measuring the same thing — and how much of your flexibility is about who you are, versus who you are when observed?