Researchers have called into question the effectiveness of sparse autoencoders (SAEs) at capturing complex concept relationships, suggesting that these models may not accurately represent the underlying structure of the data [1]. SAEs are commonly used to extract features from neural network representations, and they rest on the assumption that concepts correspond to independent linear directions in activation space. However, evidence indicates that many concepts are instead organized along low-dimensional manifolds that encode continuous geometric relationships, such as the angle of a cyclic feature. When an SAE decomposes such a manifold into discrete sparse directions, it can fragment a single continuous concept across many features, which undermines the interpretability of what it extracts. This challenges the validity of SAEs in applications where concept geometry is critical. For practitioners, the takeaway is to reevaluate reliance on SAEs for feature extraction and to consider alternative models that capture concept manifolds directly.
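The failure mode is easy to reproduce on synthetic data. The sketch below (not from the cited work; all names and hyperparameters are illustrative) trains a standard ReLU sparse autoencoder on activations that lie on a circle, a one-dimensional manifold embedded in a higher-dimensional space. Rather than recovering the single underlying degree of freedom (the angle), the SAE typically tiles the circle with many features, each firing on a narrow arc:

```python
# Toy experiment: an SAE with independent linear features applied to a
# 1-D concept manifold (a circle). Assumes PyTorch is installed; this is
# a minimal sketch, not a published method or a definitive implementation.
import torch

torch.manual_seed(0)

# Synthetic "concept": points on a circle, linearly embedded in d dimensions.
n, d = 4096, 32
theta = torch.rand(n) * 2 * torch.pi
circle = torch.stack([theta.cos(), theta.sin()], dim=1)  # (n, 2) manifold coords
embed = torch.randn(2, d) / d**0.5
acts = circle @ embed                                    # (n, d) "activations"

# Standard SAE: ReLU encoder, linear decoder, L1 sparsity penalty.
k = 16  # dictionary size, overcomplete relative to the 1-D manifold
enc = torch.nn.Linear(d, k)
dec = torch.nn.Linear(k, d, bias=False)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-2)

for step in range(2000):
    z = torch.relu(enc(acts))                            # sparse codes
    loss = ((dec(z) - acts) ** 2).mean() + 3e-3 * z.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Inspect which angles each learned feature responds to. Expect each active
# feature to cover a narrow arc: the continuous concept is split into
# discrete directions instead of a single angular coordinate.
with torch.no_grad():
    z = torch.relu(enc(acts))
    for j in range(k):
        active = theta[z[:, j] > 0.1]
        if len(active) > 0:
            center = torch.atan2(active.sin().mean(), active.cos().mean())
            print(f"feature {j:2d}: {len(active):4d} points, "
                  f"arc centered near {center:.2f} rad")
```

Reading the output, a fragmented dictionary of arc-local features reconstructs the data well, so reconstruction loss alone does not reveal the problem; the issue only shows up when one asks whether the features correspond to the concept's actual geometry.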