Protein language models pose significant safety risks because they can generate toxic proteins even when never explicitly trained to do so. Researchers have found that domain adaptation to specific taxonomic groups can elicit toxic protein generation, underscoring the need for effective mitigation strategies. To address this, a novel inference-time control mechanism adapts Logit Diff Amplification (LDA) to protein language models, enabling real-time control over the model's output and reducing the risk of toxic protein generation. The implications extend beyond the immediate targets: state-aligned threat activity can elevate these risks from the criminal to the geopolitical level. For practitioners, this work underscores the importance of integrating robust safety measures into protein language models to prevent potential misuse.
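To make the inference-time mechanism concrete, the sketch below shows the standard logit-diff form of this kind of control: at each decoding step, take the difference between a tuned (or domain-adapted) model's logits and a base model's logits, scale it by a coefficient, and add it back to the base logits. A negative coefficient steers generation *away* from the tuned model's behavioral shift. This is a minimal illustration under that assumption, not the paper's exact formulation; the function names and toy vocabulary are hypothetical.

```python
import numpy as np

def lda_logits(base_logits, tuned_logits, alpha):
    """Logit Diff Amplification sketch (illustrative, not the paper's code).

    Combines per-token logits from a base model and a tuned model:
        out = base + alpha * (tuned - base)
    alpha > 1 amplifies the tuned model's behavioral shift;
    alpha < 0 inverts it, steering generation away from behavior
    the tuned model exhibits (e.g. toxic sequence generation).
    """
    base = np.asarray(base_logits, dtype=float)
    tuned = np.asarray(tuned_logits, dtype=float)
    return base + alpha * (tuned - base)

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

# Toy 4-token vocabulary: domain adaptation has shifted probability
# mass toward token 2 relative to the base model.
base = np.array([2.0, 1.0, 0.5, 0.0])
tuned = np.array([2.0, 1.0, 2.5, 0.0])

steered = lda_logits(base, tuned, alpha=-1.0)  # suppress the tuned shift
print(int(softmax(steered).argmax()))  # → 0 (tuned model alone would pick 2)
```

Because the correction is pure logit arithmetic, it can be applied at every decoding step without retraining either model, which is what makes this an inference-time rather than a training-time mitigation.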