Asynchronous federated learning (FL) implementations frequently contend with "gradient staleness," a phenomenon arising when client devices, due to their diverse computational speeds, transmit model updates to a central server at irregular intervals. This temporal misalignment causes updates to reflect outdated versions of the global model, degrading convergence stability and overall predictive accuracy. Previous mitigation strategies, exemplified by the AsyncFedED method, proposed an adaptive aggregation technique that quantifies staleness using Euclidean distance to adjust each client's contribution. This research systematically evaluates alternative distance metrics for their suitability in robustly compensating for gradient staleness during asynchronous FL aggregation. The aim is to identify measurement techniques that more effectively mitigate the performance degradation caused by delayed updates. For cybersecurity practitioners and AI developers, mastering robust aggregation in asynchronous FL is paramount, as it directly influences the trustworthiness and operational reliability of distributed machine learning models deployed across diverse, potentially resource-constrained environments.
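The core idea of distance-based staleness compensation can be sketched as follows. This is a minimal illustration, not AsyncFedED's actual formula: the function names, the weighting rule `1 / (1 + eta * d)`, and the set of candidate metrics (Euclidean, cosine, Manhattan) are all assumptions chosen to show how swapping the distance metric changes an update's aggregation weight.

```python
import numpy as np

def staleness_weight(global_params, client_base_params, metric="euclidean", eta=1.0):
    """Illustrative staleness score: the farther the current global model has
    drifted from the model version the client trained on, the staler the
    client's update, so it receives a smaller aggregation weight.
    The weighting rule below is a hypothetical placeholder, not AsyncFedED's."""
    g = np.ravel(global_params)
    s = np.ravel(client_base_params)
    if metric == "euclidean":
        d = np.linalg.norm(g - s)
    elif metric == "cosine":
        # Cosine distance ignores magnitude, only penalizing directional drift.
        d = 1.0 - np.dot(g, s) / (np.linalg.norm(g) * np.linalg.norm(s) + 1e-12)
    elif metric == "manhattan":
        d = np.sum(np.abs(g - s))
    else:
        raise ValueError(f"unknown metric: {metric}")
    return 1.0 / (1.0 + eta * d)  # weight decays as measured staleness grows

def apply_async_update(global_params, client_update, weight):
    """Server-side asynchronous step: blend the client's update into the
    global model, scaled by its staleness weight."""
    return global_params + weight * client_update
```

A client whose base model matches the current global model gets full weight (1.0), while a client that trained on a stale snapshot is discounted; comparing metrics then amounts to comparing how each one's notion of "distance" tracks actual update obsolescence.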