Hannah Rigdon



Replication of a classification of Natural Rivers

Original study: Rosgen, D. L. 1994. A classification of natural rivers. CATENA 22 (3):169–199. https://linkinghub.elsevier.com/retrieve/pii/0341816294900019.

Replication by: Kasprak, A., N. Hough-Snee, T. Beechie, N. Bouwes, G. Brierley, R. Camp, K. Fryirs, H. Imaki, M. Jensen, G. O’Brien, D. Rosgen, and J. Wheaton. 2016. The Blurred Line between Form and Process: A Comparison of Stream Channel Classification Frameworks ed. J. A. Jones. *PLOS ONE* 11 (3):e0150293. https://dx.plos.org/10.1371/journal.pone.0150293.

Replication Authors: Hannah Rigdon, Zach Hilgendorf, Joseph Holler, and Peter Kedron.

Replication Materials Available at: https://github.com/hrigdon98/RE-rosgen

Created: 23 March 2021

Revised: 24 March 2021

Abstract

The complexity of river systems poses a challenge for reproducible and transferable science, yet a common framework for describing hydrological systems is crucial to the success of watershed management, land use planning, natural hazard mitigation, and more. Rosgen's original study (1994) lays out a river classification system that seeks to standardize language and frames of reference for those working with river systems across disciplines. Rosgen's system uses sinuosity, entrenchment, and the width of flood-prone areas to define seven major river classes, which are then broken down into Level II classifications based on slope and channel material. Kasprak et al. (2016) implemented the Rosgen classification system, in addition to other frameworks, to see how the different schemes compared and where they diverged. They used DEM data and field measurements from the Middle Fork John Day Basin, a watershed of high conservation interest within the Columbia River Basin. Our study tests the reproducibility of both of these studies using GIS tools, to see whether river classification schemes can be implemented successfully without relying on site-specific field data, which would make river classification a far easier and more accessible task.

Sampling Plan and Data Description

For the replication of the Kasprak study, each student in GEOG0323 Open Source GIS was randomly assigned one of the 33 study locations analyzed by Kasprak et al. (2016). The 33 study sites were originally chosen by Kasprak et al. from a stratified sample of CHaMP surveys conducted in 2012 and 2013.

Materials and Procedure

Kasprak et al. (2016) used the River Bathymetry Toolkit and the CHaMP Topo Toolbar to perform their analysis. We replicated this analysis using entirely open source software: GRASS GIS, with some postprocessing in RStudio. Our analysis also used a 1 m LiDAR-derived DEM in place of field measurements to calculate our terrain-based metrics.

My location was the study site identified by loc_id = 9 in the Columbia Habitat Monitoring Program (CHaMP) dataset.

Code and materials are available in this GitHub repository: https://github.com/hrigdon98/RE-rosgen.

A detailed outline of GRASS procedures can be found here. We used the models linked below to automate steps for loading and visualizing the data as well as averaging our digitized banks and valleys into centerlines.

Models:

Visualization

Center Line calculations
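
The centerline-averaging step can be sketched in plain Python. This is a minimal illustration, not the actual GRASS model: it assumes both banks have been resampled to the same vertex count and digitized in the same direction, and simply takes the midpoint of each vertex pair.

```python
# Sketch of averaging two digitized bank lines into a centerline by pairing
# vertices and taking midpoints. Hypothetical coordinates, not site data.

def centerline(left_bank, right_bank):
    """Average paired (x, y) vertices of two bank lines into a centerline."""
    if len(left_bank) != len(right_bank):
        raise ValueError("banks must be resampled to the same vertex count")
    return [((xl + xr) / 2, (yl + yr) / 2)
            for (xl, yl), (xr, yr) in zip(left_bank, right_bank)]

left = [(0.0, 1.0), (1.0, 1.2), (2.0, 1.1)]
right = [(0.0, -1.0), (1.0, -0.8), (2.0, -0.9)]
mid = centerline(left, right)
print(mid)  # midline between the two banks
```

The GRASS model handles resampling and uneven vertex spacing more robustly; this only conveys the core idea.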

We then moved into RStudio and further analyzed our data to graph longitudinal profiles and cross-section analyses, following this R Markdown document. While our procedures were broadly similar to those of the Kasprak et al. study, there are a few key differences. This analysis, as stated, relied entirely on open source software, so our calculations of the values in Table 2 involve more subjectivity than those produced by a proprietary GIS tool. The other important difference is the resolution of our DEM data: the Kasprak study used elevation data with a resolution of 0.1 m, whereas our DEM was 1 m. Although some artifacts are visible in the shaded relief, 1 m is still a very fine resolution and should not substantially affect our results.
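
As an illustration of the longitudinal-profile step, the sketch below computes cumulative along-channel distance and average slope from sampled elevation points. The coordinates and elevations are hypothetical, not site data, and the actual analysis was done in the R Markdown workflow rather than this code.

```python
# Sketch of a longitudinal profile: given (x, y, z) points sampled along the
# stream centerline, accumulate horizontal distance and compute average slope.
import math

def profile(points):
    """points: list of (x, y, z). Returns (cumulative distances, elevations)."""
    dists = [0.0]
    for (x0, y0, _), (x1, y1, _) in zip(points, points[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    return dists, [z for _, _, z in points]

# Illustrative sample points (meters)
pts = [(0, 0, 105.0), (30, 0, 104.4), (60, 10, 103.9)]
d, z = profile(pts)
slope = (z[0] - z[-1]) / d[-1]  # rise over run along the channel
print(f"length = {d[-1]:.1f} m, slope = {slope:.4f}")
```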

Replication Results

[Figure: slope map]

[Figure: digitized stream lines]

[Figure: digitized valley lines]

[Figure: longitudinal profile]

[Figure: slope map]

Table 1. Site Measurements (lengths in meters; particle diameter in millimeters)

| Variable | Value | Source |
|---|---|---|
| Bankfull Width | 9.8264 | CHaMP_Data_MFJD |
| Bankfull Depth Average | 0.4114 | Dpth_Avg in CHaMP_Data_MFJD |
| Bankfull Depth Max | 0.8571 | Dpth_Max in CHaMP_Data_MFJD |
| Valley Width | 500 | Cross-section profile |
| Valley Depth | 1.7142 | 2x Bankfull Depth Max |
| Stream/River Length | 265 | GRASS measurement of bank centerline |
| Valley Length | 280.157387 | GRASS measurement of valley centerline |
| Median Channel Material Particle Diameter | 67 | SubD50 in CHaMP_Data_MFJD |
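
The site measurements feed directly into the Level I metrics used by the Rosgen key. As a worked check (assuming the valley width measured at twice the maximum bankfull depth stands in for the flood-prone width), the derived ratios can be computed as:

```python
# Derived Rosgen Level I metrics from the site measurements above (meters).
bankfull_width = 9.8264
bankfull_depth_avg = 0.4114
valley_width = 500.0        # flood-prone width from the cross-section profile
stream_length = 265.0
valley_length = 280.157387

entrenchment_ratio = valley_width / bankfull_width     # flood-prone / bankfull
width_depth_ratio = bankfull_width / bankfull_depth_avg
sinuosity = stream_length / valley_length

print(f"entrenchment ratio = {entrenchment_ratio:.1f}")
print(f"width/depth ratio  = {width_depth_ratio:.1f}")
print(f"sinuosity          = {sinuosity:.2f}")
```

A sinuosity below 1 is not physically meaningful, since the channel should be at least as long as the valley; here it reflects the digitizing uncertainty discussed below.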

[Figure: Rosgen classification key]

Discussion

There was also a certain amount of subjectivity in the heads-up digitizing of the stream banks and valley lines. The analysis attempted to control for bias by alternating between digitizing banks and valleys, and by alternating between slope and hillshade base layers, but it is unclear how much bias that actually removed. As a novice at interpreting stream paths, I ended up digitizing the road alongside my stream, which stood out much more clearly in the DEM than the stream did. Regardless, the results were surprisingly similar. This suggests that roads parallel to stream banks could potentially serve as proxies for river classification, though this would require further analysis and testing to see whether the results from my accidental digitization of a road hold true more generally. A road would probably underestimate the sinuosity of the stream, but it could be a useful proxy for rivers that are hard to pick out in terrain imagery, especially since valley roads tend to mirror stream movement. Someone with more experience identifying stream banks and river valleys could likely digitize more accurately; however, the error introduced here does not seem large enough to discredit the results of this analysis.

Conclusion

Overall, this analysis was successful in replicating the results of Kasprak et al. (2016). The results suggest that GIS can potentially be used as a tool for river classification. This is an exciting possibility, as the resources and accessibility of field measurements are often a barrier to conducting river classifications, and the results of this workflow suggest that this method could be implemented in place of field measurements. However, this analysis covered only one of the 33 study sites, so these results should not be generalized to the whole dataset. There is also a significant temporal gap between the collection of the LiDAR data and the CHaMP surveys that should be addressed before the scope of this analysis is expanded further.

References

1. Rosgen, D. L. 1994. A classification of natural rivers. CATENA 22 (3):169–199. https://linkinghub.elsevier.com/retrieve/pii/0341816294900019.

2. Kasprak, A., N. Hough-Snee, T. Beechie, N. Bouwes, G. Brierley, R. Camp, K. Fryirs, H. Imaki, M. Jensen, G. O’Brien, D. Rosgen, and J. Wheaton. 2016. The blurred line between form and process: A comparison of stream channel classification frameworks ed. J. A. Jones. PLOS ONE 11 (3):e0150293. https://dx.plos.org/10.1371/journal.pone.0150293.

Report Template References & License

This template was developed by Peter Kedron and Joseph Holler with funding support from HEGS-2049837. This template is an adaptation of the ReScience Article Template developed by N. P. Rougier, released under a GPL version 3 license and available here: https://github.com/ReScience/template. Copyright © Nicolas Rougier and coauthors. It also draws inspiration from the pre-registration protocol of the Open Science Framework and the replication studies of Camerer et al. (2016, 2018). See https://osf.io/pfdyw/ and https://osf.io/bzm54/.

Camerer, C. F., A. Dreber, E. Forsell, T.-H. Ho, J. Huber, M. Johannesson, M. Kirchler, J. Almenberg, A. Altmejd, T. Chan, E. Heikensten, F. Holzmeister, T. Imai, S. Isaksson, G. Nave, T. Pfeiffer, M. Razen, and H. Wu. 2016. Evaluating replicability of laboratory experiments in economics. Science 351 (6280):1433–1436. https://www.sciencemag.org/lookup/doi/10.1126/science.aaf0918.

Camerer, C. F., A. Dreber, F. Holzmeister, T.-H. Ho, J. Huber, M. Johannesson, M. Kirchler, G. Nave, B. A. Nosek, T. Pfeiffer, A. Altmejd, N. Buttrick, T. Chan, Y. Chen, E. Forsell, A. Gampa, E. Heikensten, L. Hummer, T. Imai, S. Isaksson, D. Manfredi, J. Rose, E.-J. Wagenmakers, and H. Wu. 2018. Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015. Nature Human Behaviour 2 (9):637–644. http://www.nature.com/articles/s41562-018-0399-z.