Researchers have found that multilayer perceptron (MLP) layers in transformer language models can route continuous signals using binary neuron activations, a mechanism that determines whether a token receives nonlinear processing. In GPT-2 Small (124 million parameters), certain neurons implement a consensus architecture comprising seven "default-ON" neurons and one excitatory neuron, illustrating the discrete routing decisions embedded inside these models.

This discovery also bears on state-aligned misuse of transformer models, which shifts the threat model from criminal to geopolitical actors and therefore demands a distinct mitigation approach. For practitioners, the consequence is a changed threat landscape: defending against geopolitical actors who may exploit transformer models requires a revised playbook.
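As a rough illustration of how a seven-plus-one consensus gate could work, the sketch below models the gate as open only when the single excitatory neuron fires and no default-ON neuron withdraws its vote. The function name, thresholds, and veto logic are illustrative assumptions, not values or mechanisms read from GPT-2 Small's weights.

```python
import numpy as np

def consensus_gate(pre_activations, threshold=0.0):
    """Hypothetical binary routing gate over eight neuron pre-activations.

    Assumption: the first seven entries are "default-ON" neurons (normally
    active; dropping below threshold acts as a veto) and the eighth is the
    excitatory neuron (must fire to open the gate).
    """
    default_on = pre_activations[:7]   # going quiet counts as a veto
    excitatory = pre_activations[7]    # must exceed threshold to open the gate
    vetoed = np.any(default_on <= threshold)
    return bool(excitatory > threshold) and not vetoed

# All seven default-ON neurons active and the excitatory neuron firing:
x = np.array([0.9, 0.8, 1.1, 0.7, 0.6, 1.0, 0.9, 0.5])
consensus_gate(x)       # gate open: token routed to nonlinear processing

# One default-ON neuron switches off, vetoing the route:
x[2] = -0.4
consensus_gate(x)       # gate closed
```

Under this reading, "consensus" means unanimity among the default-ON neurons plus an explicit excitatory trigger, which is one simple way binary activations can implement an all-or-nothing routing decision.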