Exact self-attention has a memory cost that grows quadratically with sequence length, which frequently causes out-of-memory failures on modern hardware and limits the scalability of large language models. Stream-CQSA addresses this with flexible workload scheduling that avoids out-of-memory errors during attention computation. The method drops the assumption that the full query, key, and value tensors must fit in device memory, so long-context inputs can be processed in pieces that do fit. For practitioners, this removes a hard ceiling on input length: models can handle longer sequences on existing hardware rather than being constrained by the size of the attention score matrix.
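The source does not specify Stream-CQSA's scheduling algorithm, but the core idea of dropping the fit-in-memory assumption can be illustrated with a generic chunked-attention sketch: keys and values are streamed in blocks, and an online-softmax update keeps running statistics so the full score matrix is never materialized. The function name `chunked_attention` and the chunk size are illustrative choices, not part of the method described above.

```python
import numpy as np

def chunked_attention(q, k, v, chunk=2):
    """Attention computed over key/value chunks with an online softmax,
    so only a (n_q, chunk) slice of scores exists at any one time."""
    n_q, d = q.shape
    m = np.full(n_q, -np.inf)           # running row-wise max of scores
    l = np.zeros(n_q)                   # running softmax denominator
    o = np.zeros((n_q, v.shape[1]))     # running weighted sum of values
    for start in range(0, k.shape[0], chunk):
        # Partial scores against just this chunk of keys.
        s = q @ k[start:start + chunk].T / np.sqrt(d)
        m_new = np.maximum(m, s.max(axis=1))
        scale = np.exp(m - m_new)       # rescale old accumulators
        p = np.exp(s - m_new[:, None])
        l = l * scale + p.sum(axis=1)
        o = o * scale[:, None] + p @ v[start:start + chunk]
        m = m_new
    return o / l[:, None]

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((6, 4)) for _ in range(3))

# Reference: exact attention with the full 6x6 score matrix.
s = q @ k.T / np.sqrt(4)
w = np.exp(s - s.max(axis=1, keepdims=True))
ref = (w / w.sum(axis=1, keepdims=True)) @ v

assert np.allclose(chunked_attention(q, k, v), ref)
```

Because only one chunk of keys and values is resident at a time, peak memory for the score matrix drops from O(n_q · n_k) to O(n_q · chunk), which is the property that makes streaming long contexts through fixed-size device memory possible.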