Researchers at the University of Dundee, UK, are bringing Twitter, citizen science and AI techniques together to develop an early-warning system for flood-prone communities.
The study was led by Dr. Roger Wang, lecturer of fluid mechanics in civil engineering at the University of Dundee.
The research is published in the journal Computers & Geosciences.
Rising Sea Levels
Rising sea levels pose a very real threat to American cities. Less than a month into 2018, major American cities, including New York, Boston and Santa Barbara, had already been hit by major floods, resulting in rising death tolls and damage costs.
In 2017 alone, 10 of the 16 major natural disasters involved flooding, resulting in damages of $266.5 billion, according to the National Centers for Environmental Information. In the aftermath of major hurricanes, such as Hurricanes Harvey and Maria, cities are still being affected by recurrent floods.
“Sea levels have been rising at an average rate of 3.4 mm a year over the past decade,” Wang said in a statement. “The extremes of today will become the average of the future so coastal cities and countries must take action to protect their land.”
The Problem
In order to develop an accurate, hyper-resolution urban flooding model, a good-quality dataset is needed to validate and support it.
“A key task is to convincingly validate the model, otherwise it’s ‘rubbish in and rubbish out,’” said Wang. Validation means comparing the model’s results against a ground-truth dataset to prove the model’s reliability.
Without a doubt, urban flooding is a major national threat. However, not only is it difficult to monitor because of the complexities of data collection and processing, but the existing datasets are also either largely inaccessible or inaccurate.
According to Wang, there are three standard sources for flood datasets: remote satellite sensing, local sensor networks, and witness statements and insurance reports collected by governments. However, each has its disadvantages.
First, remote satellite sensing is expensive to access and offers low resolution or a low scanning frequency. Second, local sensor networks are expensive to install and maintain. Third, witness surveys, the most popular source, are costly, inaccurate, incomplete and often delayed.
“I spent 3 months but was surprised that there was no existing good quality dataset,” said Wang. “I was forced to think out of the ‘box.’ ”
The Solution
The team thought of social media networks, like Twitter, and crowdsourcing apps, and how their users provide real-time commentary on floods. Using AI techniques, the team extracted data from these social media networks, where people often go to ask for help and where first responders track what’s happening.
“A tweet can be very informative,” said Wang. “It could contain information of flood location, water depth, flooding trend, human feelings, which are valuable to evacuation decisions, and damage estimate. More importantly, it has a potential to provide hyper-resolution data. The accuracy can be in meters or parcels, a tweet that mentioned the street name and number.”
Case Study: Sunny Day Flooding
The team conducted a case study incorporating Twitter, citizen science, and AI techniques to monitor sunny day flooding, a type of flooding that occurs due to high sea level in the absence of any extreme weather event.
First, the team streamed and filtered Twitter data over a one-month period in 2015, using keywords like “flood,” “inundation,” “dam,” “dike,” and “levee.” More than 7,500 tweets were analyzed over this time. Then, the team used natural language processing to extract severity, location and other relevant information.
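The keyword filtering and information extraction described above can be sketched in a few lines. This is an illustrative sketch only: the keyword list matches the one quoted in the article, but the function names (`is_flood_related`, `extract_location_hint`) and the regex-based extraction are assumptions — the team’s actual pipeline used trained natural language processing models, not hand-written patterns.

```python
import re

# Keyword list as quoted in the article; filtering logic is illustrative.
FLOOD_KEYWORDS = {"flood", "inundation", "dam", "dike", "levee"}

def is_flood_related(tweet: str) -> bool:
    """Return True if the tweet contains any flood-related keyword as a word."""
    words = set(re.findall(r"[a-z]+", tweet.lower()))
    return not words.isdisjoint(FLOOD_KEYWORDS)

def extract_location_hint(tweet: str):
    """Very naive location extraction: look for a 'number + street name' pattern.
    A real system would use a trained named-entity recognizer instead."""
    match = re.search(r"\b(\d+\s+[A-Z][a-z]+\s+(?:St|Ave|Rd|Blvd))\b", tweet)
    return match.group(1) if match else None

tweet = "Major flood at 42 Main St, water still rising"
print(is_flood_related(tweet))       # True
print(extract_location_hint(tweet))  # 42 Main St
```

Note that matching whole words (rather than substrings) avoids false positives such as “dam” matching inside “damage.”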
Second, they applied AI techniques to the data collected from MyCoast, a crowdsourcing app that contains over 6,000 flood photographs, all of which are taken and reported by its users. Using AI techniques, the team could automatically identify scenes of flooding from the images that users post.
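To make the idea of automatically recognizing flood scenes concrete, here is a deliberately simple, hypothetical heuristic: score an image by the fraction of pixels in which blue dominates. The study used trained AI classifiers, not a hand-crafted rule; this toy version only illustrates the input/output shape of such a scene detector.

```python
def flood_scene_score(pixels):
    """Fraction of (r, g, b) pixels where blue dominates both red and green.
    Toy proxy for 'standing water in the frame'; a trained image classifier
    would replace this rule entirely."""
    watery = sum(1 for r, g, b in pixels if b > r and b > g)
    return watery / len(pixels)

def looks_like_flood(pixels, threshold=0.4):
    """Flag an image as a flood scene when the watery-pixel fraction is high.
    The 0.4 threshold is an arbitrary illustrative choice."""
    return flood_scene_score(pixels) >= threshold

# A fake 4-pixel 'image': three bluish pixels, one reddish pixel.
image = [(10, 20, 200), (30, 40, 180), (5, 15, 220), (200, 10, 10)]
print(flood_scene_score(image))  # 0.75
print(looks_like_flood(image))   # True
```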
Lastly, they validated both the information extracted from Twitter and MyCoast against actual precipitation data and road closure reports to examine the quality of the datasets.
They found that tweet activity correlated with precipitation levels and that MyCoast data correlated with the road closure reports. They believe that while social media networks like Twitter are suited to large-scale, inexpensive monitoring, crowdsourced data provides detailed and customized information.
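The validation step — checking whether tweet volume tracks rainfall — amounts to computing a correlation between two daily time series. A minimal sketch in plain Python, with made-up example numbers (the article does not report the team’s actual correlation values):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical daily counts of flood-related tweets vs. rainfall (mm).
daily_tweets = [3, 10, 45, 12, 2, 30, 8]
daily_rain_mm = [0.5, 4, 22, 5, 0, 15, 3]
r = pearson(daily_tweets, daily_rain_mm)
print(round(r, 3))
```

A coefficient near 1 would indicate that tweet volume rises and falls with rainfall, which is the kind of agreement the team used to judge dataset quality.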
“We have reached the point of 70% accuracy and we are using the thousands of images available on MyCoast to further improve this,” Wang said in a statement.
The team doesn’t think this new model should replace the standard practices, but rather advance them.
“We found these big data-based flood monitoring approaches can definitely complement the existing means of data collection and demonstrate great promise for improving monitoring and warnings in future,” Wang said in a statement.
For now, the model can only monitor the development of flood events.
What’s Next?
The team is currently collaborating with other companies to develop a mathematical model that can predict how a flood will develop and warn communities accordingly.
“Coupling our big data and AI platform to the math model, we can real-time correct the mathematical model, called ‘data assimilation,’ ” said Wang. “As a result, the math model becomes more accurate and keeps being updated to reflect the new trend of the flood. We believe the new model can predict future flood days before the real event, depending on the geolocation and weather.”
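“Data assimilation” here means blending the model’s forecast with incoming observations, each weighted by how uncertain it is. A minimal scalar sketch of that idea — a Kalman-style update with illustrative variable names and numbers, not the team’s actual coupled model:

```python
def assimilate(model_estimate, model_var, obs, obs_var):
    """Scalar data-assimilation update: pull the model forecast toward the
    observation, with a gain that weights each by its inverse variance.
    The corrected estimate always has lower variance than the forecast."""
    gain = model_var / (model_var + obs_var)
    new_estimate = model_estimate + gain * (obs - model_estimate)
    new_var = (1 - gain) * model_var
    return new_estimate, new_var

# Model forecasts 0.8 m of flood depth (uncertain, variance 0.09);
# a crowdsourced report suggests 0.5 m (more precise, variance 0.01).
est, var = assimilate(0.8, 0.09, 0.5, 0.01)
print(round(est, 2), round(var, 4))  # 0.53 0.009
```

Because the observation is more precise than the forecast, the corrected estimate (0.53 m) sits much closer to the crowdsourced value, and its variance shrinks — which is why feeding real-time social-media data into the model keeps it on track.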