FinTradeBench is a new benchmark that assesses the financial reasoning of Large Language Models (LLMs) by evaluating how well they make informed decisions from heterogeneous financial data. It fills a gap in existing evaluations, which tend to measure general language understanding rather than domain-specific reasoning. The benchmark tests LLMs on two complementary tasks that mimic real-world financial decision-making: analyzing company fundamentals from regulatory filings and interpreting trading signals from price dynamics. By providing a standardized framework for evaluating LLM decision-making in finance, FinTradeBench has implications for how AI is used in investment strategy, risk assessment, and regulatory compliance. For practitioners, it matters because it clarifies the capabilities and limitations of LLMs in financial contexts, informing more effective and responsible AI adoption in the sector.
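To make the evaluation setup concrete, the sketch below shows what a FinTradeBench-style scoring loop could look like: each example pairs a filing excerpt and recent price moves with a ground-truth trading action, and a model is scored by decision accuracy. Everything here is illustrative (the example data, the `stub_model` momentum rule, and the field layout are assumptions, not the benchmark's actual schema or API).

```python
# Hypothetical sketch of a FinTradeBench-style evaluation loop.
# The examples, the stub model, and the scoring rule are all
# illustrative assumptions, not the benchmark's real interface.

EXAMPLES = [
    # (filing excerpt, recent daily price changes in %, ground-truth action)
    ("Revenue grew 18% YoY; margins expanded.", [1.2, 0.8, 2.1], "buy"),
    ("Going-concern doubt raised by auditors.", [-3.0, -1.5, -4.2], "sell"),
    ("Results in line with prior guidance.", [0.1, -0.2, 0.3], "hold"),
]

def stub_model(filing: str, prices: list[float]) -> str:
    """Placeholder for an LLM call: a naive price-momentum rule."""
    momentum = sum(prices)
    if momentum > 1.0:
        return "buy"
    if momentum < -1.0:
        return "sell"
    return "hold"

def evaluate(model) -> float:
    """Score a model by decision accuracy against ground-truth actions."""
    correct = sum(
        model(filing, prices) == action
        for filing, prices, action in EXAMPLES
    )
    return correct / len(EXAMPLES)

print(f"accuracy: {evaluate(stub_model):.2f}")
```

A real harness would replace `stub_model` with an LLM prompted on the filing text and price series, but the separation of model from scorer shown here is the standard pattern for benchmarks of this kind.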