I tried to run a spatial join between a list of assets and a river basin dataset that you can find at the link below: https://datasets.wri.org/dataset/aqueduct-global-flood-risk-maps?msclkid=630fc948b63611ec9931936b22cf4990
The first approach was a join using the EPSG:4326 projection, and it works fine:
rfd = r"C:Users~aqueduct_global_flood_risk_data_by_river_basin_20150304.shp" wri_rfr = gpd.read_file(rfd, crs='epsg:4326') test = ['Unit 1', 'Unit 2' ] test_lat = ['0.176095', '-24.193790'] test_lon = ['117.495523', '150.370650'] df = pd.DataFrame() df['Name'] = test df['Latitude'] = test_lat df['Longitude'] = test_lon gdf = gpd.GeoDataFrame(df, geometry=gpd.points_from_xy(df['Longitude'], df['Latitude'])) gdf = gdf.set_crs('epsg:4326') joined = gpd.sjoin(gdf, wri_rfr, how='inner') len(joined )
Both assets are matched in the join.
In a second approach, I try to create a 500 m buffer around my assets using a metre-based projection (EPSG:3006) and then run the same join:
rfd = r"C:Users~aqueduct_global_flood_risk_data_by_river_basin_20150304.shp" wri_rfr = gpd.read_file(rfd, crs='epsg:4326') test = ['Unit 1', 'Unit 2' ] test_lat = ['0.176095', '-24.193790'] test_lon = ['117.495523', '150.370650'] df = pd.DataFrame() df['Name'] = test df['Latitude'] = test_lat df['Longitude'] = test_lon gdf = gpd.GeoDataFrame(df, geometry=gpd.points_from_xy(df['Longitude'], df['Latitude'])) gdf = gdf.set_crs('epsg:4326') gdf = gdf.to_crs({'init': 'epsg:3006'}) gdf.geometry = gdf.geometry.buffer(500) gdf = gdf.loc[gdf.is_valid] wri_rfr_3006 = wri_rfr.to_crs({'init': 'epsg:3006'}) wri_rfr_3006 = wri_rfr_3006.loc[wri_rfr_3006.is_valid] joined = gpd.sjoin(gdf, wri_rfr_3006 , how='inner') len(joined )
It returns no joins.

What am I missing here? Why would the results be different?
Answer
- I have coded up the data sourcing of the shapefiles (see the end of this answer)
- take a look at the documentation at https://epsg.io/3006: this CRS is defined for Sweden. Hence locations in Borneo and Australia are heavily distorted when expressed in metres relative to Sweden (see the sketch after this list)
- I have taken the approach of working out the UTM CRS of each point, buffering it, and then converting back to epsg:4326
- with the buffered point geometries the spatial join now works, since no CRS that is inappropriate for global geometry has been used
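As a quick illustration (a minimal sketch using only the two sample points from the question), projecting those coordinates into EPSG:3006 puts them millions of metres away from that CRS's area of use, which is why buffers and joins built on them stop lining up with the reprojected basin polygons:

import geopandas as gpd
from shapely.geometry import Point

# the two sample locations from the question (Borneo and eastern Australia)
pts = gpd.GeoSeries(
    [Point(117.495523, 0.176095), Point(150.370650, -24.193790)],
    crs="epsg:4326",
)

# EPSG:3006 (SWEREF99 TM) is defined for Sweden; points on the other side of
# the globe end up far outside its area of use (or even non-finite, depending
# on the PROJ version), so buffers and joins based on them are unreliable
print(pts.to_crs("epsg:3006"))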
test = ["Unit 1", "Unit 2"] test_lat = ["0.176095", "-24.193790"] test_lon = ["117.495523", "150.370650"] df = pd.DataFrame() df["Name"] = test df["Latitude"] = test_lat df["Longitude"] = test_lon gdf = gpd.GeoDataFrame(df, geometry=gpd.points_from_xy(df["Longitude"], df["Latitude"])) gdf = gdf.set_crs("epsg:4326") # work out UTM CRS for each point, then buffer it and return back as original CRS def buffer_meter(g, crs="epsg:6666", buffer=50): t = gpd.GeoDataFrame(geometry=[g], crs=crs) return t.to_crs(t.estimate_utm_crs()).buffer(buffer).to_crs(crs).values[0] # buffer the points gdf["geometry"] = gdf["geometry"].apply(buffer_meter, crs=gdf.crs, buffer=500) # now join gpd.sjoin(gdf, wri_rfr, how='inner')
data sourcing
import requests
from pathlib import Path
from zipfile import ZipFile
import urllib
import geopandas as gpd
import pandas as pd

# download data sets
urls = [
    "http://datasets.wri.org/dataset/c19396d9-45c8-4e92-bf05-d1411c9cc2ca/resource/498319f7-992a-4447-94b4-c62d8f1daa38/download/aqueductglobalfloodriskdatabycountry20150304.zip",
    "http://datasets.wri.org/dataset/c19396d9-45c8-4e92-bf05-d1411c9cc2ca/resource/471ef133-939c-4ca6-9b1c-5f81b5251c2b/download/aqueductglobalfloodriskdatabyriverbasin20150304.zip",
    "http://datasets.wri.org/dataset/c19396d9-45c8-4e92-bf05-d1411c9cc2ca/resource/dd90c26a-edf2-46e4-be22-4273ab2344d0/download/aqueductglobalfloodriskdatabystate20150304.zip",
]

dfs = {}
for url in urls:
    f = Path.cwd().joinpath(urllib.parse.urlparse(url).path.split("/")[-1])
    if not f.exists():
        r = requests.get(url, stream=True, headers={"User-Agent": "XY"})
        with open(f, "wb") as fd:
            for chunk in r.iter_content(chunk_size=128):
                fd.write(chunk)
        zfile = ZipFile(f)
        zfile.extractall(f.stem)
    dfs[f.stem] = gpd.read_file(list(f.parent.joinpath(f.stem).glob("*.shp"))[0])

wri_rfr = dfs["aqueductglobalfloodriskdatabyriverbasin20150304"]
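As a quick sanity check (a minimal sketch; no assumptions are made here about the shapefile's column names), confirm the downloaded basin layer is in EPSG:4326 and covers the globe before running the join above:

# the basin layer should report a geographic CRS and a world-spanning extent
print(wri_rfr.crs)           # expected: EPSG:4326
print(len(wri_rfr))          # number of river basin polygons
print(wri_rfr.total_bounds)  # [minx, miny, maxx, maxy] in degrees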