Introducing Spatial LibriSpeech, a spatial audio dataset with over 570 hours of 19-channel audio, first-order ambisonics, and optional distractor noise. Spatial LibriSpeech is designed for training machine learning models and includes labels for source position, speaking direction, room acoustics, and geometry. Spatial LibriSpeech is generated by augmenting LibriSpeech samples with over 220,000 simulated acoustic conditions across over 8,000 synthetic rooms. To demonstrate the utility of our dataset, we train models on four fundamental spatial audio tasks, resulting in a mean absolute error of 6.60° in 3D source localization, 0.43 m in distance estimation, 90.66 ms in T30 estimation, and 2.74 dB in direct-to-reverberant ratio (DRR) estimation. We show that the same models transfer to widely used evaluation datasets, obtaining, for example, a mean absolute error of 12.43° in 3D source localization on TUT Sound Events 2018, and 157.32 ms in T30 estimation on ACE Challenge.
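For concreteness, the 3D source localization errors quoted above can be computed as the angle between predicted and ground-truth direction vectors. The sketch below is a minimal illustration under that assumption; the function name and batch layout are hypothetical, not part of the dataset's tooling.

```python
import numpy as np

def angular_error_deg(pred: np.ndarray, true: np.ndarray) -> np.ndarray:
    """Angle in degrees between predicted and ground-truth 3D directions.

    pred, true: arrays of shape (N, 3); rows need not be unit-length.
    """
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    true = true / np.linalg.norm(true, axis=1, keepdims=True)
    # Clip the dot product to [-1, 1] to avoid NaNs from floating-point drift.
    cos = np.clip(np.sum(pred * true, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

# Example: a batch of two predicted directions vs. ground truth.
pred = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.1]])
true = np.array([[1.0, 0.05, 0.0], [0.0, 1.0, 0.0]])
errors = angular_error_deg(pred, true)
print(errors.mean())  # aggregate absolute angular error, as reported above
```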