Abstract

Contributed Talk - Plenary

Predicting Lockloss in the Advanced LIGO Interferometers Using Machine Learning Methods via Auxiliary Data

Swasti Jain, Keiko Kokeyama
Cardiff University

In gravitational-wave astronomy, lockloss events are a major source of downtime for LIGO (Laser Interferometer Gravitational-Wave Observatory), significantly reducing observational time and leading to the loss of potential detections. Seismic activity is known to trigger many of these events, so early prediction from seismic data could enable operational planning and preventive intervention. Substantial efforts have been made in this direction, such as the DetChar lockloss analysis and Seismon (an earthquake prediction model), along with the seismic isolation stages and control strategies deployed at LLO and LHO; despite these results, lockloss prediction remains an open problem in detector characterisation.

This work presents a deep learning system for predicting lockloss events using multichannel seismic time-series data from LIGO seismometers. The dataset consisted of 70-timestep windows from four Streckeisen seismometer (STS) channels, organised into classes based on earthquake and lockloss co-occurrence during the O4a and O4b observing runs, and represented as time series and dmdt plots. A hybrid CNN-LSTM architecture was developed, combining convolutional layers for local feature extraction with a bidirectional LSTM for temporal modelling; an attention mechanism was added to identify the time windows most critical for prediction. The model was validated using 5-fold cross-validation, achieving 96% recall by detecting 996 of 1035 lockloss events across all folds, with an F1 score of 87% and an AUC of 89%. The false alarm rate was approximately 25%, corresponding to roughly one false alarm for every three real events. Performance remained consistent across folds with low variance, indicating robust generalisation on the current dataset.

Because lockloss events are scarce, Conditional GANs and PG-GANs were also explored for synthetic data augmentation to better test the CNN-LSTM model, and a Wasserstein GAN with gradient penalty (WGAN-GP) was implemented to improve training stability while generating realistic multichannel time series. The model is currently a classifier; it already achieves a 96% detection rate with a stable, predictable false alarm rate, and with additional training and validation it could be deployed as an advisory monitoring system.
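
To make the described architecture concrete, below is a minimal sketch of a hybrid CNN-BiLSTM classifier with attention pooling, written in PyTorch. The channel count (four STS channels) and window length (70 timesteps) follow the abstract; all layer sizes, kernel widths, and the additive-attention formulation are illustrative assumptions, since the abstract does not specify the model's hyperparameters.

    # Sketch of the hybrid CNN-BiLSTM-attention classifier described above.
    # Layer sizes and kernel widths are assumptions for illustration only.
    import torch
    import torch.nn as nn

    class CNNBiLSTMAttention(nn.Module):
        def __init__(self, n_channels=4, hidden=64):
            super().__init__()
            # Convolutional front end: local feature extraction.
            self.cnn = nn.Sequential(
                nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=5, padding=2),
                nn.ReLU(),
            )
            # Bidirectional LSTM: temporal modelling of the feature sequence.
            self.lstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
            # Additive attention: scores each timestep so the model can
            # weight the most predictive windows before classification.
            self.attn = nn.Linear(2 * hidden, 1)
            self.head = nn.Linear(2 * hidden, 1)  # binary lockloss logit

        def forward(self, x):
            # x: (batch, n_channels, n_timesteps), e.g. (batch, 4, 70)
            feats = self.cnn(x)                    # (batch, 64, T)
            feats = feats.permute(0, 2, 1)         # (batch, T, 64)
            seq, _ = self.lstm(feats)              # (batch, T, 2*hidden)
            weights = torch.softmax(self.attn(seq), dim=1)  # (batch, T, 1)
            context = (weights * seq).sum(dim=1)   # attention-pooled summary
            return self.head(context).squeeze(-1)  # raw logit per window

    model = CNNBiLSTMAttention()
    logits = model(torch.randn(8, 4, 70))  # eight example 70-timestep windows

The WGAN-GP component can be summarised by its gradient penalty term, which pushes the critic toward 1-Lipschitz behaviour and is what stabilises training relative to a plain GAN. Again a sketch under stated assumptions: the critic network and the penalty weight lambda_gp are illustrative, not details given in the abstract.

    # Sketch of the WGAN-GP gradient penalty for multichannel time series,
    # following the standard formulation; lambda_gp = 10 is an assumption.
    import torch

    def gradient_penalty(critic, real, fake, lambda_gp=10.0):
        # Interpolate between real and generated samples of shape
        # (batch, n_channels, n_timesteps).
        batch = real.size(0)
        eps = torch.rand(batch, 1, 1, device=real.device)
        interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
        scores = critic(interp)
        grads = torch.autograd.grad(
            outputs=scores, inputs=interp,
            grad_outputs=torch.ones_like(scores),
            create_graph=True, retain_graph=True,
        )[0]
        # Penalise deviation of the gradient norm from 1 (Lipschitz constraint).
        grad_norm = grads.reshape(batch, -1).norm(2, dim=1)
        return lambda_gp * ((grad_norm - 1) ** 2).mean()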