Machine translation with large language models is hindered by the need for extensive training data, a constraint that is especially acute for low-resource languages. Researchers have proposed mitigating this by supplying in-context descriptions of a language, such as textbook excerpts and dictionary entries, at inference time. The approach relies on the ability of large language models to infer connections between grammatical descriptions and the structures of the language itself; by utilizing synchronous context-free grammar transduction, models can potentially bypass the requirement for large parallel datasets. The study evaluates the effectiveness of this method and its potential to improve machine translation for languages with limited resources. This line of work has implications for policy, security, and workforce dynamics, since it can facilitate communication across languages and cultures. For practitioners, the key takeaway is that this approach could expand the reach of machine translation, enabling more effective communication in diverse linguistic environments.
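
To make the in-context-description idea concrete, here is a minimal sketch of how grammar notes and dictionary entries might be packed into a translation prompt. The grammar excerpt, dictionary entries, and the `call_llm` helper are all hypothetical placeholders for illustration, not artifacts from the study.

```python
# Sketch: prompting an LLM with in-context language descriptions.
# All language material below is invented for illustration.

GRAMMAR_EXCERPT = """\
Word order is subject-object-verb (SOV).
The suffix -ka marks past tense on verbs.
"""

DICTIONARY = {
    "dog": "wisu",
    "see": "tama",
    "I": "ne",
}

def build_prompt(source_sentence: str) -> str:
    """Pack grammar notes and dictionary entries into a translation prompt."""
    entries = "\n".join(f"{en} = {xx}" for en, xx in DICTIONARY.items())
    return (
        "You are translating English into a low-resource language.\n"
        f"Grammar notes:\n{GRAMMAR_EXCERPT}\n"
        f"Dictionary:\n{entries}\n"
        f"Translate: {source_sentence}\n"
    )

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a chat-completion call to any LLM API."""
    raise NotImplementedError("wire this to your model provider of choice")

if __name__ == "__main__":
    print(build_prompt("I see the dog"))
```

The point of the setup is that the model receives everything it needs about the target language inside the context window, rather than from parameters learned on a large parallel corpus.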
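
The synchronous context-free grammar transduction the summary alludes to can also be illustrated with a toy example. The grammar, lexicon, and target language below are invented; a real system would induce rules from the grammatical description and parse the input, rather than take a derivation tree as given.

```python
# Toy SCFG transduction: each rule pairs a source right-hand side with a
# target right-hand side; integers index the child subtrees so the target
# side can reorder them.
RULES = {
    # S -> <NP VP, NP VP>: the subject stays first in both languages.
    "S":  (["NP", "VP"], [0, 1]),
    # VP -> <V NP, NP V>: English verb-object becomes object-verb (SOV).
    "VP": (["V", "NP"], [1, 0]),
}

LEXICON = {
    ("NP", "I"): "ne",
    ("NP", "the dog"): "wisu",
    ("V", "see"): "tama",
}

def transduce(tree):
    """Walk a source derivation tree, emitting the reordered target string."""
    label, children = tree
    if isinstance(children, str):          # lexical leaf: look up the word
        return LEXICON[(label, children)]
    _, target_order = RULES[label]
    parts = [transduce(children[i]) for i in target_order]
    return " ".join(parts)

# Derivation of "I see the dog"; transduction yields the SOV string.
tree = ("S", [("NP", "I"), ("VP", [("V", "see"), ("NP", "the dog")])])
print(transduce(tree))  # -> ne wisu tama
```

The synchronized rules encode exactly the kind of structural correspondence (here, SVO-to-SOV reordering) that a grammar book states in prose and that the model is expected to apply in context.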