Models optimized for real-world tasks reveal the necessity of precise temporal coding in hearing

Neurons encode information in the timing of their spikes in addition to their firing rates. Spike timing is particularly precise in the auditory nerve, where action potentials phase lock to sound with sub-millisecond precision, but its behavioral relevance remains uncertain. To investigate the role of this temporal coding, we optimized machine learning models to perform real-world hearing tasks with simulated cochlear input and asked how precise auditory nerve spike timing must be to reproduce human behavior. Models with high-fidelity phase locking exhibited more human-like sound localization and speech perception than models without it, consistent with an essential role for precise spike timing in human hearing. Degrading phase locking produced task-dependent effects, revealing how the use of fine-grained temporal information reflects both ecological task demands and neural implementation constraints. The results link neural coding to perception and clarify the conditions in which prostheses that fail to restore high-fidelity temporal coding could in principle restore near-normal hearing.
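The manipulation summarized above, degrading phase locking in the simulated auditory nerve input, can be illustrated with a minimal sketch. This is not the cochlear model used in the study; it assumes a toy front end in which hair-cell transduction is approximated by half-wave rectification followed by a lowpass filter whose cutoff frequency sets how much temporal fine structure survives before Poisson spike generation. All parameter values (tone frequency, cutoffs, firing rates) are illustrative assumptions.

```python
# Toy illustration (not the study's auditory nerve model): lowering the lowpass
# cutoff of the hair-cell drive degrades phase locking to a 1 kHz tone,
# quantified here by vector strength (1 = perfect locking, 0 = none).
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 100_000                          # sampling rate (Hz); assumed value
t = np.arange(0, 0.5, 1 / fs)         # 500 ms stimulus
tone = np.sin(2 * np.pi * 1000 * t)   # 1 kHz pure tone

def vector_strength(spike_times, freq):
    """Phase-locking index of spike times to a given frequency."""
    phases = 2 * np.pi * freq * spike_times
    return np.abs(np.mean(np.exp(1j * phases)))

for cutoff in (3000, 1000, 320, 50):  # progressively degrade temporal fidelity
    # Half-wave rectification approximates hair-cell transduction; the lowpass
    # filter's cutoff controls the fidelity of phase locking in the drive.
    drive = np.maximum(tone, 0)
    sos = butter(2, cutoff, btype="low", fs=fs, output="sos")
    rate = np.maximum(sosfiltfilt(sos, drive), 0)
    rate = 200 * rate / rate.mean()   # scale to ~200 spikes/s mean rate

    # Inhomogeneous Poisson spike generation from the filtered drive.
    rng = np.random.default_rng(0)
    spike_times = t[rng.random(len(t)) < rate / fs]

    print(f"cutoff {cutoff:>5} Hz: vector strength = "
          f"{vector_strength(spike_times, 1000):.2f}")
```

Running the sketch shows vector strength falling toward zero as the cutoff drops below the tone frequency, which is the sense in which "degrading phase locking" limits the temporal information available to downstream models.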