Geospatial Data Verification

Previously we looked at visual data verification using Python and Pandas. Here we shall extend this to look at geospatial data verification of the earlier Oklahoma Injection Well Dataset.

Geospatial data can be managed and plotted using Geopandas – a geospatial extension to Pandas. It comes with some basic basemap data, but you will probably want to add your own. For our basemap, we shall use the built-in World basemap and add Oklahoma County Subdivision boundaries to it.

For these examples, you will need to install numpy, matplotlib, pandas, and geopandas. All can be installed using pip.

First we create a World basemap using geopandas’ own low resolution World map:
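A minimal sketch of this step – note that the bundled “naturalearth_lowres” dataset assumes a geopandas release before 1.0 (later releases dropped it, and the equivalent shapefile must be downloaded from naturalearthdata.com):

```python
import geopandas as gpd
import matplotlib.pyplot as plt

# Load the low resolution World basemap bundled with geopandas (< 1.0)
world = gpd.read_file(gpd.datasets.get_path("naturalearth_lowres"))

# Create a set of axes (ax) so later layers can be drawn on the same map
fig, ax = plt.subplots()
world.plot(ax=ax, color="white", edgecolor="black")
plt.show()
```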

“naturalearth_lowres” is a basemap provided with geopandas. We create a set of axes (ax) using the call to subplots(). Although unnecessary for the above example, this allows us to plot multiple layers on the same map (i.e. the same set of axes). Here is the resulting map:

This uses a standard matplotlib frame. You can use this to zoom and pan around the map.

Next we will add the Oklahoma County Subdivisions. This can be downloaded from catalog.data.gov in the form of ESRI Shape files. Here is the amended code:
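The amended code might look like this – the shapefile name below is a placeholder, so use whatever name the catalog.data.gov download unpacks to:

```python
import geopandas as gpd
import matplotlib.pyplot as plt

world = gpd.read_file(gpd.datasets.get_path("naturalearth_lowres"))

# County Subdivision boundaries from catalog.data.gov (ESRI Shape files);
# the file name here is a placeholder for the unpacked shapefile
subdivisions = gpd.read_file("oklahoma_county_subdivisions.shp")

# Plot both layers on the same set of axes
fig, ax = plt.subplots()
world.plot(ax=ax, color="white", edgecolor="black")
subdivisions.plot(ax=ax, color="black")
plt.show()
```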

Here is the result:

We see the sub-divisions after zooming in on Oklahoma:

Next we shall add the data that we wish to validate. The Oklahoma well injection data has a row for each well. Two columns provide the location. We shall attempt to plot each well with a red dot on the above map.

We do this by importing the data into a pandas DataFrame, and then creating a geopandas GeoDataFrame from this:
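A sketch of this step – the CSV file name and the two location column names are assumptions, so match them to the dataset’s actual header:

```python
import pandas as pd
import geopandas as gpd
import matplotlib.pyplot as plt

# File and column names below are illustrative placeholders
df = pd.read_csv("injection_wells.csv")

# Build a GeoDataFrame with a point geometry for each well
wells = gpd.GeoDataFrame(df, geometry=gpd.points_from_xy(df["LONG"], df["LAT"]))

# Plot the wells as red dots on top of the basemap
world = gpd.read_file(gpd.datasets.get_path("naturalearth_lowres"))
fig, ax = plt.subplots()
world.plot(ax=ax, color="white", edgecolor="black")
wells.plot(ax=ax, color="red", markersize=2)
plt.show()
```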

Here is the resulting map:

Where is the map? If you look closely, there are a few red dots. The axes are also wrong – the numbers are far too big. What has happened is that some of the coordinates are either in a different coordinate system (not decimal longitude/latitude), or have incorrect or missing decimal points. We could check these data points and possibly correct them. For now, we will simply filter out all data points outside the valid range of longitude (-180…180) and latitude (-90…90). We perform this within the pandas DataFrame using calls to between():
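The range filter can be sketched as follows (the LONG/LAT column names are assumptions), demonstrated here on a small synthetic DataFrame:

```python
import pandas as pd

def filter_world(df, lon_col="LONG", lat_col="LAT"):
    """Keep only rows whose coordinates are in the valid world range."""
    return df[df[lon_col].between(-180, 180) & df[lat_col].between(-90, 90)]

# Synthetic example: the second row's longitude has lost its decimal
# point, so it falls outside the valid range and is filtered out
df = pd.DataFrame({"LONG": [-97.5, -9750.0], "LAT": [35.4, 35.4]})
print(len(filter_world(df)))  # 1
```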

Here is the resulting World map:

That is much better! We can also see some red around the black of Oklahoma. However, there are some other red clusters that are nowhere near Oklahoma. There’s a cluster near Africa at 0,0 – a common location for data with missing coordinates. Similarly, there are clusters where only the latitude is zero (on the Equator, due south of Oklahoma) or only the longitude is zero (on the Prime Meridian, due east of Oklahoma).

Finally, there’s a large cluster in China. This happens to have the same longitude and latitude as Oklahoma, but in the eastern hemisphere. The western hemisphere is indicated with negative longitude, whilst the eastern is indicated with positive longitude. Clearly these coordinates are missing a negative sign. This is easy to correct, using numpy to correct the longitude values in situ:
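A sketch of the sign correction (again assuming a LONG column), using numpy’s where() to negate any positive longitude:

```python
import numpy as np
import pandas as pd

# Synthetic example: the second well's longitude is missing its sign
df = pd.DataFrame({"LONG": [-97.5, 97.5], "LAT": [35.4, 35.4]})

# Any positive longitude is assumed to be a missing negative sign, so
# negate it; longitudes that are already negative are left untouched
df["LONG"] = np.where(df["LONG"] > 0, -df["LONG"], df["LONG"])
print(df["LONG"].tolist())  # [-97.5, -97.5]
```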

We cannot recover the missing coordinate values for the other erroneous points, so we will filter them out. A bounding rectangle filter is applied:
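A sketch of the bounding rectangle filter – the bounds below are approximate values for Oklahoma, and the column names are assumptions:

```python
import pandas as pd

def filter_oklahoma(df, lon_col="LONG", lat_col="LAT"):
    """Keep only rows inside an approximate Oklahoma bounding rectangle."""
    return df[df[lon_col].between(-103.0, -94.4) &
              df[lat_col].between(33.6, 37.0)]

# Synthetic example: only the first row survives; the other two have a
# zero longitude or a zero latitude
df = pd.DataFrame({"LONG": [-97.5, 0.0, -97.5], "LAT": [35.4, 35.4, 0.0]})
print(len(filter_oklahoma(df)))  # 1
```

Because uncorrected positive longitudes would fall outside this rectangle, the filter must run after the sign correction.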

This much tighter filter makes the original filter to valid world coordinates superfluous, so it can now be removed. However, we need to apply the new filter after correcting the erroneous longitude signs.

Here is the resulting world map:

And here is the Oklahoma detail:

There are a small number of data points in Texas. The next step would be to investigate these. It is likely their coordinates have been mis-typed.

Conclusions

This is the last in a series of articles showing you how to load data into Python, and then validate it. Data validation can be at the field level using functional checks, as well as at the dataset level using visual inspection.
