Real-time monitoring of water states in large-diameter aqueducts – learning from distributed acoustic sensing signals

Dao-Yuan Tan and colleagues present a real-time acoustic sensing system with hierarchical clustering for monitoring large-diameter aqueduct flow states. A 6 km aqueduct case study demonstrated improved water management and infrastructure reliability.

When you turn on a tap, it’s easy to forget the long, hidden journey that water takes through massive underground aqueducts. These vast pipelines—some as wide as a subway tunnel—are critical lifelines for cities, yet what happens inside them is mostly invisible. Beneath our feet, water flows under varying conditions: sometimes calm and smooth, other times rushing and turbulent. Ensuring these hidden highways work safely and efficiently is a monumental engineering challenge.

Our team faced this challenge head-on in South China’s Pearl River Delta. The question was: How can we “see” and understand water’s journey in real time, inside kilometers of buried pipe? The answer came not from traditional cameras or flow sensors, but from a surprising source—fiber optic cables, and a new way to “listen” to the music of flowing water.


Why Conventional Monitoring Falls Short

Traditional methods for aqueduct monitoring have serious limitations. Closed-circuit cameras only work in dry pipes and can’t see through water. Devices like ultrasonic or microwave sensors give point measurements—snapshots at specific spots—but can’t provide a complete, real-time view across the whole pipeline. These approaches are like trying to monitor highway traffic with a single roadside camera: you miss most of the action.

Complicating matters, the most critical moments for aqueducts are during filling or draining, when water and air interact in complex ways. Air pockets can get trapped, water can surge unexpectedly, and pressures can spike—sometimes with damaging results. Monitoring these changes as they happen, across the full length of a giant buried pipeline, has always been a huge challenge.


Listening to Flow: The Power of Distributed Acoustic Sensing

Enter Distributed Acoustic Sensing (DAS), a technology that transforms ordinary fiber optic cables into thousands of vibration sensors. In the Pearl River Delta project, we took advantage of fiber optics already installed for communication. By connecting a laser-based interrogator to one end of the cable, we could detect tiny vibrations anywhere along its entire 6-kilometer length.

Whenever water flows, air bubbles pop, or turbulence occurs, the resulting vibrations travel through the pipeline structure and are picked up by the fiber. This means that DAS turns the entire aqueduct into a continuous “stethoscope.” We could “hear” the acoustic signatures of different flow conditions, from gentle rumbling to energetic hissing, in real time and at every point along the pipe.

But with such a massive data stream—imagine 1,200 microphones all recording at once—simply listening isn’t enough. We needed a way to make sense of the complex acoustic patterns and translate them into meaningful information about what was actually happening inside the pipeline.
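To give a sense of what that data stream looks like, here is a minimal sketch (not the project's actual pipeline) of turning one block of raw DAS samples, one row per sensing channel, into per-channel frequency spectra. The channel count, sampling rate, and window length are illustrative assumptions, and random noise stands in for real strain-rate data.

```python
import numpy as np

# Illustrative dimensions: 1,200 sensing channels along the fibre,
# one second of data at an assumed 1 kHz sampling rate per channel.
n_channels, fs, n_samples = 1200, 1000, 1000
rng = np.random.default_rng(0)
das_block = rng.standard_normal((n_channels, n_samples))  # stand-in for raw DAS data

# One-sided amplitude spectrum for every channel at once (rows = channels).
window = np.hanning(n_samples)
spectra = np.abs(np.fft.rfft(das_block * window, axis=1))
freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)

print(spectra.shape)   # (1200, 501): one spectrum per channel
print(freqs[-1])       # Nyquist frequency: 500.0 Hz
```

Even this one-second block yields 1,200 spectra; a continuous deployment produces that every second, which is why automated interpretation is essential.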


HierarchyNet: Decoding Flow States with Artificial Intelligence

To tackle this, our team developed DAS-Hydro HierarchyNet, a deep learning model designed to interpret the unique “fingerprints” of different water flow states. Using a two-stage process, the model first determines whether a section of pipe is wet or dry, then classifies which type of flow is occurring—such as smooth stratified flow, wavy flow, plug (slug) flow, or bubbly flow.
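The actual DAS-Hydro HierarchyNet is a deep learning model; the sketch below only illustrates the shape of its two-stage decision logic, with trivial placeholder rules standing in for the trained networks. The feature vectors, thresholds, and regime ordering are all hypothetical.

```python
import numpy as np

# Hypothetical stand-ins for the two trained models: stage 1 decides
# wet vs. dry, stage 2 classifies the flow regime for wet sections only.
def stage1_is_wet(features):
    # Placeholder rule: enough total vibration energy means "wet".
    return features.sum() > 1.0

FLOW_REGIMES = ["stratified", "wavy", "slug", "bubbly"]

def stage2_flow_regime(features):
    # Placeholder rule: the dominant frequency band picks the regime.
    return FLOW_REGIMES[int(np.argmax(features))]

def classify_section(features):
    """Two-stage hierarchy: wet/dry first, flow regime only if wet."""
    if not stage1_is_wet(features):
        return "dry"
    return stage2_flow_regime(features)

# Example: band-energy feature vectors for three pipe sections.
print(classify_section(np.array([0.1, 0.1, 0.1, 0.1])))  # dry
print(classify_section(np.array([2.0, 0.3, 0.2, 0.1])))  # stratified
print(classify_section(np.array([0.2, 0.3, 0.4, 3.0])))  # bubbly
```

The hierarchy matters: a dry section produces no meaningful flow signature, so asking "which flow regime?" only makes sense after the wet/dry question is settled.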

How does it work? First, the system analyzes the frequency content of the vibrations, much like breaking down a piece of music into its individual notes. Certain flow regimes produce distinctive frequencies: calm, stratified flows might generate a low, steady hum, while bubbly or slug flows produce bursts of higher-frequency sound. Our AI model learned these patterns by training on real-world data from controlled filling events.
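The frequency analysis described above can be sketched as a band-energy computation: how much of a signal's spectral energy falls into a low band versus a high band. The band edges and test tones here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def band_energies(signal, fs, bands):
    """Fraction of spectral energy in each frequency band (assumed edges)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    total = spectrum.sum()
    return [spectrum[(freqs >= lo) & (freqs < hi)].sum() / total for lo, hi in bands]

fs = 1000
t = np.arange(fs) / fs
low_hum = np.sin(2 * np.pi * 20 * t)   # low, steady hum: e.g. calm stratified flow
bursts = np.sin(2 * np.pi * 300 * t)   # higher-frequency content: e.g. bubbly flow

bands = [(0, 100), (100, 500)]         # illustrative band edges in Hz
print(band_energies(low_hum, fs, bands))   # energy concentrated in the low band
print(band_energies(bursts, fs, bands))    # energy concentrated in the high band
```

A classifier trained on such band-energy features learns, in effect, which "notes" each flow regime plays.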

As the aqueduct filled with water, the model continuously tracked the movement of the water front, identifying regions where flow conditions shifted. Even in challenging conditions—such as the early stage of filling when vibrations are faint—the system could reliably estimate the water’s progress and detect subtle transitions between flow regimes.
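Tracking the water front from per-channel predictions can be sketched very simply: at each time step, find the furthest channel classified as wet and convert its index to a distance. The 5 m channel spacing is an assumption (roughly 6 km of fibre over about 1,200 channels), and the wet/dry mask here is synthetic.

```python
import numpy as np

CHANNEL_SPACING_M = 5.0   # assumed: ~6 km of fibre / ~1,200 channels

def front_position(wet_mask):
    """Estimate the water front as the furthest wet channel, in metres.

    wet_mask: boolean array of per-channel wet/dry predictions, ordered
    from inlet to outlet. Returns 0.0 if every channel is dry.
    """
    wet_idx = np.flatnonzero(wet_mask)
    if wet_idx.size == 0:
        return 0.0
    return (wet_idx[-1] + 1) * CHANNEL_SPACING_M

# Synthetic snapshot: the first 240 of 1,200 channels classified as wet.
mask = np.zeros(1200, dtype=bool)
mask[:240] = True
print(front_position(mask))   # 1200.0 m from the inlet
```

Repeating this every few seconds turns a stream of classifications into a moving picture of the filling front.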


Field Validation: The Aqueduct’s “Music” Comes to Life

We validated this approach on a 6-kilometer section of the Pearl River Delta aqueduct, buried 40–60 meters below the surface and designed to supply water to cities like Shenzhen and Hong Kong. During annual maintenance, the pipeline is emptied and later refilled, creating a perfect testbed for our monitoring framework.

As water advanced through the pipeline, our system captured the evolution of flow regimes in real time. The figure below illustrates this beautifully:

[Figure 1: Acoustic characteristics during aqueduct water filling process]

The model’s predictions matched field observations with remarkable accuracy—pinpointing the water’s leading edge, classifying flow states along the pipeline, and providing operators with a live, intuitive visualization of the process. For the first time, we could “watch” the water’s journey unfold in a previously inaccessible underground world.


Opening the Black Box: Making AI Transparent

To build trust in our AI, we went a step further and used a method called SHAP (SHapley Additive exPlanations) to interpret what the model was learning. This allowed us to see which vibration frequencies mattered most for detecting each flow regime. For example, the model relied on low frequencies to identify smooth flows and on higher frequencies for bubbly, turbulent conditions. This transparency gave both our team and aqueduct operators confidence that the model’s decisions were grounded in real physics—not just statistical guesswork.
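The idea behind SHAP is that each feature's importance is its average marginal contribution to the model's output, averaged over all orderings in which features could be added. The toy below computes exact Shapley values for a hypothetical two-feature model by brute force; it is a teaching sketch of the principle, not a use of the SHAP library, and the feature names and model are invented.

```python
from itertools import permutations

def exact_shapley(features, value):
    """Exact Shapley values: average marginal contribution over all orderings."""
    names = list(features)
    phi = {name: 0.0 for name in names}
    orderings = list(permutations(names))
    for order in orderings:
        included = {}
        prev = value(included)
        for name in order:
            included[name] = features[name]
            now = value(included)
            phi[name] += now - prev
            prev = now
    return {name: phi[name] / len(orderings) for name in names}

# Toy "model": score = low-band energy + 2 * high-band energy (hypothetical).
def model_score(feats):
    return feats.get("low_band", 0.0) + 2.0 * feats.get("high_band", 0.0)

contrib = exact_shapley({"low_band": 0.4, "high_band": 0.3}, model_score)
print(contrib)   # {'low_band': 0.4, 'high_band': 0.6}
```

Because this toy model is additive, each feature's Shapley value equals its direct contribution; for a deep network, the same averaging reveals which frequency bands drive each prediction.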


Reflections and Future Directions

Looking back, the journey wasn’t without challenges. Early on, we struggled to distinguish the faint signals of the very first trickle of water from background noise. It took creative data processing and careful model training to ensure accurate detection, especially at low flow rates. But the payoff has been substantial: operators now have a powerful tool to detect problems early, optimize filling procedures, and ensure infrastructure safety.

Our work demonstrates that by “listening” to water’s journey with DAS and AI, we can transform dark, buried pipelines into smart, observable infrastructure. In the future, we hope to expand this approach to detect issues like sediment buildup or partial blockages, and to combine acoustic sensing with other data sources—such as pressure or temperature sensors—for even richer insights.

We believe that intelligent, data-driven monitoring will be central to the next generation of resilient water systems—helping safeguard this precious resource for cities around the world.
Link: https://www.nature.com/articles/s44172-025-00483-6
