Researchers have introduced a separable neural architecture (SNA), a novel design that exploits the factorisable structure inherent in many intelligent systems, including those modelling physics, language, and perception. SNA formalises a representational class that encompasses additive, quadratic, and tensor-decomposed neural models, providing a unified framework for predictive and generative intelligence. By constraining how input components may interact, SNA makes this structure explicit, which can yield more parameter-efficient and effective models. The authors note that the development may have implications beyond artificial intelligence itself, touching policy, security, and workforce dynamics. For practitioners, the appeal is concrete: separable models could offer a more powerful and flexible way to approach complex, structured problems.
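The source does not give the SNA formulation itself, but the representational class it names can be sketched. As a minimal illustrative example (all names, shapes, and the rank value are assumptions, not the paper's), a model combining an additive term with a tensor-decomposed quadratic term might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # input dimension and factorisation rank (illustrative values)

# Additive part: one scalar weight per input feature.
w = rng.normal(size=d)

# Quadratic part, tensor-decomposed: the full d x d interaction matrix
# is constrained to the low-rank product U @ V.T, so the model stores
# 2*d*r parameters instead of d*d.
U = rng.normal(size=(d, r))
V = rng.normal(size=(d, r))

def separable_model(x):
    additive = w @ x
    # x^T (U V^T) x computed as (U^T x) . (V^T x),
    # never materialising the dense d x d interaction matrix.
    quadratic = (x @ U) @ (x @ V)
    return additive + quadratic
```

The point of the constraint is visible in the quadratic term: the same prediction as a dense bilinear form, at a cost that scales with the rank `r` rather than with `d * d`.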