Tutorials

All tutorials are based on Jupyter notebooks that are hosted on GitHub. If you want to run the code yourself, you can find the notebooks in the examples folder of the NeuralHydrology GitHub repository.

Data Prerequisites
For most of our tutorials you will need some data to train and evaluate models. In all of these examples we use the publicly available CAMELS US dataset. This tutorial guides you through downloading the different pieces of the dataset and explains the local folder structure that the code expects.
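As a rough orientation, the expected layout looks something like the sketch below. The directory and file names are taken from the standard CAMELS US distribution and are an assumption here; the tutorial itself documents the authoritative structure.

```
data/CAMELS_US/              <- the data_dir set in the run configuration
    basin_mean_forcing/      <- meteorological forcing time series per basin
    usgs_streamflow/         <- observed discharge from USGS gauges
    camels_attributes_v2.0/  <- static catchment attributes
```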
Introduction to NeuralHydrology
If you’re new to the NeuralHydrology package, this tutorial is the place to get started. It walks you through the basic command-line and API usage patterns, and you get to train and evaluate your first model.
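To give a flavor of what the tutorial covers, a NeuralHydrology experiment is driven by a YAML configuration file. The snippet below is an illustrative sketch only; the key names follow the library's configuration documentation, but you should verify them (and the required values) against the version you install.

```yaml
# minimal_config.yml -- illustrative sketch, not a complete working config
experiment_name: my_first_run
train_basin_file: basins.txt     # list of basin IDs to train on
model: cudalstm                  # model class to use
hidden_size: 64
seq_length: 365                  # days of input per training sample
epochs: 10
dataset: camels_us
data_dir: data/CAMELS_US
target_variables:
  - QObs(mm/d)                   # observed streamflow as training target
```

Training is then typically started from the command line (e.g. with `nh-run train --config-file minimal_config.yml`) or via the Python API; the tutorial walks through both patterns in detail.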
Adding a New Model: Gated Recurrent Unit (GRU)
Once you know the basics, you might want to add your own model. Using the GRU model as an example, this tutorial shows how and where to add models in the NeuralHydrology codebase.
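For context on what such a model implements, the GRU update combines an update gate, a reset gate, and a candidate state. The toy snippet below sketches a single scalar GRU step in pure Python; it is a didactic illustration of the equations, not NeuralHydrology's (PyTorch-based) implementation, and all weights are made-up values.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x, h, W, U, b):
    """One GRU step for a scalar input and scalar hidden state.

    W, U, b are dicts with keys 'z' (update gate), 'r' (reset gate),
    and 'h' (candidate state). Toy scalar weights for illustration only.
    """
    z = sigmoid(W['z'] * x + U['z'] * h + b['z'])                 # update gate
    r = sigmoid(W['r'] * x + U['r'] * h + b['r'])                 # reset gate
    h_cand = math.tanh(W['h'] * x + U['h'] * (r * h) + b['h'])    # candidate state
    return (1.0 - z) * h + z * h_cand                             # new hidden state

# Run a short input sequence through the cell with arbitrary weights.
W = {'z': 0.5, 'r': 0.8, 'h': 1.0}
U = {'z': 0.1, 'r': 0.2, 'h': 0.3}
b = {'z': 0.0, 'r': 0.0, 'h': 0.0}
h = 0.0
for x in [1.0, -0.5, 0.25]:
    h = gru_cell(x, h, W, U, b)
print(h)
```

Because the new hidden state is a convex combination of the previous state and a tanh-bounded candidate, it always stays within (-1, 1) here. The tutorial shows how to express this logic as a proper PyTorch module and where to register it in the codebase.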
Adding a New Dataset: CAMELS-CL
Tired of always using the United States CAMELS dataset? This tutorial shows you how to add a new dataset: the Chilean version of CAMELS.
Multi-Timescale Prediction
In one of our papers, we introduced Multi-Timescale LSTMs that can predict at multiple timescales simultaneously. If you need predictions at sub-daily granularity or you want to generate daily and hourly predictions (or any other timescale), this tutorial explains how to get there.
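To hint at what this involves, multi-timescale runs are configured by listing the desired frequencies and per-frequency settings in the YAML config. The fragment below is a hedged sketch; the key names are assumptions based on the library's configuration documentation, and the tutorial gives the authoritative setup.

```yaml
# Sketch of multi-timescale-specific settings (assumed key names)
model: mtslstm
use_frequencies:
  - 1D          # daily predictions
  - 1H          # hourly predictions
seq_length:     # input sequence length per frequency
  1D: 365
  1H: 336
predict_last_n: # how many of the last timesteps to predict per frequency
  1D: 1
  1H: 24
```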
Inspecting the internals of LSTMs
Model interpretability is an ongoing research topic. We showed in previous publications that LSTM internals can be linked to physical processes. In this tutorial, we show how to extract those model internals with our library.
Finetuning models
A common way to increase the performance of deep learning models is finetuning: a model is first trained on a large and diverse dataset and then finetuned to the actual problem of interest. In this tutorial, we show how you can perform finetuning with our library.
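As a rough preview, finetuning in NeuralHydrology is driven by a second config that points at the pretrained run and restricts which model parts are updated. The fragment below is an illustrative sketch; the key names (`base_run_dir`, `finetune_modules`) are taken from the finetuning tutorial but should be checked against your installed version, and the run directory path is hypothetical.

```yaml
# finetune.yml -- illustrative sketch only
base_run_dir: runs/pretrained_run   # hypothetical path to the pretrained run
finetune_modules:
  - head                            # update only the output head, freeze the rest
epochs: 10
```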