Scraping off the Dust: Redeploy of my Rust web app

I do a redeploy of my Rust web app with the Ubuntu 18.04 image on AWS.

Part of a Series: Designing a Full-Featured WebApp with Rust
Part 1: Piecing Together a Rust Web Application
Part 2: My Next Step in Rust Web Application Dev
Part 3: It’s Not a Web Application Without a Database
Part 4: Better Logging for the Web Application
Part 5: Rust Web App Session Management with AWS
Part 6: OAuth Requests, APIs, Diesel, and Sessions
Part 7: Scraping off the Dust: Redeploy of my Rust web app
Part 8: Giving My App Secrets to the AWS SecretManager

Busy as a bee

Life has been busy – no apologies or excuses, but, ya know, it’s 2020. Yet, I’m trying to slowly make my way back into playing with Rust. I decided to move my EC2 instance from AWS-Linux to the Ubuntu image; for one, I got tired of fighting with LetsEncrypt to get it to renew my SSL cert every 3 months. Also, I wanted to see how a redeploy of my Rust web app would go and whether it still worked (why wouldn’t it?). So, let’s see how tough it is to get my environment back to the same place. I took some notes (in case I needed to restart, sigh), so let’s go through it.

Go back to Part 1 to see what this fake web app is about and how I got here – I need to reread it myself! So, first, this is what I ended up needing to add to what I got from the default Ubuntu image:

sudo apt install build-essential checkinstall zlib1g-dev pkg-config libssl-dev libpq-dev postgresql postgresql-contrib -y

Lots of that was needed in order to get OpenSSL installed; I was following along with hints here. Continuing those instructions, I did:

cd /usr/local/src/
sudo wget https://www.openssl.org/source/openssl-1.1.1g.tar.gz
sudo tar -xf openssl-1.1.1g.tar.gz

cd openssl-1.1.1g
sudo ./config --prefix=/usr/local/ssl --openssldir=/usr/local/ssl shared zlib
sudo make
sudo make test
sudo make install
echo "/usr/local/ssl/lib" | sudo tee /etc/ld.so.conf.d/openssl-1.1.1g.conf
sudo ldconfig -v
sudo mv /usr/bin/c_rehash /usr/bin/c_rehash.backup
sudo mv /usr/bin/openssl /usr/bin/openssl.backup
sudo nano /etc/environment # to add "/usr/local/ssl/bin" to the PATH

Next, instead of solely storing my code on a potentially tenuous EC2 server, I wanted to keep it backed up on my Google Drive (or whatever you like – this solution works with many network storage providers). I used rclone for my Raspberry Pi photo frame, so I was familiar with it already. This is weird though; I don’t really need this for projects I store in GitHub… gotta think about it… maybe I just need a /gdrive synced dir for “things”.

curl https://rclone.org/install.sh | sudo bash
rclone config # to add google drive and authorize it

mkdir ~/projects
mkdir ~/projects/rust
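
With a remote configured, a cron entry can keep the backup current. A sketch – I’m assuming the remote was named gdrive during rclone config, and the paths are placeholders:

```
# crontab entry (sketch): mirror the projects dir to the "gdrive"
# remote nightly at 02:00
0 2 * * * rclone sync /home/ubuntu/projects gdrive:backups/projects
```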



Ok, the most fun step!!

curl https://sh.rustup.rs -sSf | sh
cd ~/projects/rust
git clone git@github.com:jculverhouse/pinpoint_shooting.git

I need nginx for my app

sudo apt install nginx
sudo service nginx start

And now for LetsEncrypt, which is much more reliable on Ubuntu 18.04:

# follow instructions at https://certbot.eff.org/lets-encrypt
# setup root cronjob to renew once/week
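
That renewal cronjob can be as small as this (a sketch of root’s crontab; certbot renew is a no-op unless a cert is close to expiring):

```
# root crontab: attempt renewal every Monday at 03:00
0 3 * * 1 certbot renew --quiet
```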

For my Rocket-powered Rust app, I followed some reminders here to connect it to nginx. Simple enough, really. What’s mostly relevant:

...
server_name pinpointshooting.com; # managed by Certbot
location / {
    proxy_pass http://127.0.0.1:3000;
}
...
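
One addition worth considering (my own habit, not part of the original config): pass the client details through, so the Rocket app doesn’t see every request as coming from 127.0.0.1:

```nginx
location / {
    proxy_pass http://127.0.0.1:3000;
    # forward the real client address, host, and scheme to the app
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $host;
}
```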

What? Nginx still has TLS 1.0 and 1.1 turned on by default? I followed this, removed those, tested the config, and restarted nginx. I checked all of it with SSL Labs via https://www.ssllabs.com/ssltest/analyze.html :

sudo nano /etc/nginx/nginx.conf # to remove TLS1 TLS1.1 from any line
sudo nano /etc/letsencrypt/options-ssl-nginx.conf # to remove TLS1 TLS1.1 from any line
sudo nginx -t
sudo service nginx reload

I’ll need Postgres for my PinpointShooting app as well; I found some steps to follow here, plus I needed to set up for my own app and run the initial migrations to get it up-to-date. That involved another change so I could log in with the password from a non-system-user account.

cargo install diesel_cli --no-default-features --features postgres
psql -d postgres -U postgres
  create user pinpoint password 'yeahrightgetyourown'; # save this in file .env in app dir
  create database pinpoint;
  grant all privileges on database pinpoint to pinpoint;

sudo nano /etc/postgresql/10/main/pg_hba.conf # to edit "local all all" line to be md5 instead of peer

sudo service postgresql restart
psql -d postgres -U pinpoint # to test password above; just exit
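
The .env file mentioned in the psql comment above is what Diesel reads for its connection string. A sketch of what it holds (with the password being whatever you used for create user):

```
# ~/projects/rust/pinpointshooting/.env
DATABASE_URL=postgres://pinpoint:yeahrightgetyourown@localhost/pinpoint
```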

cd ~/projects/rust/pinpointshooting
diesel migration run

Finally:

rustup default nightly # because Rocket, sigh...
cargo update
cargo build --release
target/release/pps &

And, we’re back online! Turns out, a redeploy of my Rust web app was about as easy as I could expect! If the app happens to be running, check it out here (though, there isn’t much to see or anything to do): pinpointshooting.com. Also, browse the repo and feel free to send me comments on how to be better about using idiomatic Rust!

OAuth Requests, APIs, Diesel, and Sessions


Intertwined woven basket material… much like OAuth, accounts, APIs, Diesel, and sessions in this project (probably any project)
Things are starting to intertwine…

Some big changes in the repository just recently. I added Google Signin and Facebook Signin OAuth connections. I’m thinking I may not even configure an internal password on the site for users and instead just require one of those options. Probably I’ll add more, like 500px.com and/or Flickr, given the site’s purpose. A password field is still in my database though, so I haven’t given up the idea completely. Also, the OAuth requests create accounts using Diesel.

Users (now identified as shooters – we photographers haven’t given up that term) are now written to the db. I really fought with one Diesel feature, so that bit is still commented out in the code. In addition, I added my first API to POST to – so another step with the Rocket crate as well! I’d like to work my way into playing with a GraphQL endpoint so I can play with that as well!! (What’s the limit on crate dependencies in a project anyway?!) I’m starting to think I won’t be able to tackle all of this in a single post – but let’s start!

OAuth vs Old and New Accounts

When a user arrives on the site, I check for a cookie with a session id (see my previous post). I decided, for now, I would use the User Agent (plus some random characters) as a fingerprint for creating the session id. So, when I am able to get a session_id from a cookie, I want to verify the User Agent is the same and that the session hasn’t expired. If the user arrives brand new, without a cookie, I immediately create an empty, no-user session for them. All of this is done, for now, right at the top of my index() route.

<src/routes.rs>

...
#[get("/")]
pub fn index(mut cookies: Cookies, nginx: Nginx) -> rocket_contrib::templates::Template {
    let session = get_or_setup_session(&mut cookies, &nginx);
    ...
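
As for the fingerprint idea, here is a hypothetical, std-only sketch – none of these names are from the repo, and the real app uses the rand crate rather than the clock for its randomness:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::time::{SystemTime, UNIX_EPOCH};

// Hypothetical sketch (not the app's real code): build a session id
// from the User-Agent plus some per-request randomness. Nanosecond
// time stands in here for a real RNG such as rand's thread_rng().
fn make_session_id(user_agent: &str) -> String {
    let nonce = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("clock before 1970?")
        .subsec_nanos();
    let mut hasher = DefaultHasher::new();
    user_agent.hash(&mut hasher);
    nonce.hash(&mut hasher);
    format!("{:016x}", hasher.finish())
}

fn main() {
    let id = make_session_id("Mozilla/5.0 (X11; Linux x86_64)");
    assert_eq!(id.len(), 16); // 64-bit hash rendered as 16 hex chars
    println!("session id: {}", id);
}
```

On a later request, re-deriving the fingerprint from the stored User-Agent and comparing is what catches a cookie presented from a different browser.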

After the index page loads, it shows the Google and Facebook Sign In buttons. Clicking one of those kicks off the validation dance to get permission from the user. When that is granted, my app gets a token back, which I send up to the server via a POST to /api/v1/tokensignin.

<src/api.rs>

use rocket::{http::Cookies, post, request::Form, FromForm};

use crate::oauth::*;
use crate::routes::Nginx;
use crate::session::*;

#[derive(FromForm)]
pub struct OAuthReq {
    pub g_token: Option<String>,  // google login req
    pub fb_token: Option<String>, // facebook login req
    pub name: String,
    pub email: String,
}

#[post("/api/v1/tokensignin", data = "<oauth_req>")]
pub fn tokensignin(mut cookies: Cookies, nginx: Nginx,
        oauth_req: Form<OAuthReq>) -> String
{
    let mut session = get_or_setup_session(&mut cookies, &nginx);

    if let Some(token) = &oauth_req.g_token {
        match verify_google_oauth(&mut session, &token,
            &oauth_req.name, &oauth_req.email)
        {
            true => {
                session.google_oauth = true;
                save_session_to_ddb(&mut session);
                "success".to_string()
            }
            false => {
                session.google_oauth = false;
                save_session_to_ddb(&mut session);
                "failed".to_string()
            }
        }
    } else if let Some(token) = &oauth_req.fb_token {
        match verify_facebook_oauth(&mut session, &token,
            &oauth_req.name, &oauth_req.email)
        {
            true => {
                session.facebook_oauth = true;
                save_session_to_ddb(&mut session);
                "success".to_string()
            }
            false => {
                session.facebook_oauth = false;
                save_session_to_ddb(&mut session);
                "failed".to_string()
            }
        }
    } else {
        "no token sent".to_string()
    }
}

OAuth Requests via HTTP POSTs

This is how you allow for a POST with form data to come in – you set up a struct (OAuthReq in my example) of what you expect and bring that in as an input param. I also bring in any cookies that arrive with the request, as well as some Nginx headers, so I have access to the UserAgent. In the code so far, I’m either verifying a Google or a Facebook token. Let’s look at the Google example (the Facebook one is nearly the same). Here are the relevant parts, but I’ll break some pieces down and go through it:

<src/oauth.rs>

...
pub fn verify_google_oauth(
    session: &mut Session,
    token: &String,
    name: &String,
    email: &String,
) -> bool {
    let mut google = google_signin::Client::new();
    google.audiences.push(CONFIG.google_api_client_id.clone());

    let id_info = google.verify(&token).expect("Expected token to be valid");
    let token = id_info.sub.clone();

    verify_token(session, "google".to_string(), &token, &name, &email)
}

Which leads right away to a big match:

fn verify_token(
    session: &mut Session,
    vendor: String,
    token: &String,
    name: &String,
    email: &String,
) -> bool {
    use crate::schema::oauth::dsl::*;
    use crate::schema::shooter::dsl::*;
    let connection = connect_pgsql();
    match oauth
        .filter(oauth_vendor.eq(&vendor))
        .filter(oauth_user.eq(&token))
        .first::<Oauth>(&connection)
    {

With the OK arm:

        // token WAS found in oauth table
        Ok(o) => {
            if let Some(id) = session.shooter_id {
                if id == o.shooter_id {
                    return true;
                } else {
                    return false;
                }
            } else {
                // log in user - what IS the problem with BelongsTo!?
                //if let Ok(s) = Shooter::belonging_to(&o)
                //    .load::<Shooter>(&connection)
                //{
                //    session.shooter_id = Some(shooter.shooter_id);
                //    session.shooter_name = Some(shooter.shooter_name);
                //    session.email_address = Some(shooter.email);
                return true;
                //} else {
                //    return false;
                //}
            }
        }

And the ERR arms:

        // token not found in oauth table
        Err(diesel::NotFound) => match session.shooter_id {
            Some(id) => {
                create_oauth(&connection, &vendor, token, id);
                true
            }
            None => match shooter
                .filter(shooter_email.eq(&email))
                .first::<Shooter>(&connection)
            {
                // email address WAS found in shooter table
                Ok(s) => {
                    create_oauth(&connection, &vendor, token, s.shooter_id);
                    true
                }
                // email address not found in shooter table
                Err(diesel::NotFound) => {
                    let this_shooter =
                        create_shooter(&connection, name, None,
                            email, &"active".to_string());
                    session.shooter_id = Some(this_shooter.shooter_id);
                    create_oauth(&connection, &vendor, token,
                        this_shooter.shooter_id);
                    true
                }
                Err(e) => {
                    panic!("Database error {}", e);
                }
            },
        },
        Err(e) => {
            panic!("Database error {}", e);
        }
    }
}



Simple Queries with Diesel

Breaking all that code down to smaller bits: first, I query the PgSQL database for the given oauth user:

match oauth
    .filter(oauth_vendor.eq(&vendor))
    .filter(oauth_user.eq(&token))
    .first::<Oauth>(&connection) {
        Ok(o) => { ... }
        Err(diesel::NotFound) => { ... }
        Err(e) => { ... }
}

Check the oauth table for records WHERE (filter) the oauth_vendor is (google or facebook) AND I’ve already stored the same validated oauth_user. I will get back either Ok(o) or Err(diesel::NotFound) … (or some worse error message), so I make a pattern with those 3 arms.

If we did get a hit from the DB, that oauth record is already tied to a shooter_id (user id) unless something is very wrong. So, IF we also have a shooter_id defined in our current session, I just need to verify that they match and return true or false. But, if we don’t have a shooter_id in our session, we know the oauth is tied to a shooter in the db, so this will log them in. Diesel has an easy way to get that parent record, which is what this should do:

// if let Ok(s) = Shooter::belonging_to(&o).load::<Shooter>(&connection) {
   ...

I fought and fought to get this to work, but you can see it is still commented out. From posts and chat around the Internet, I believe it can work – I think I either have a scope problem or my models aren’t set up correctly… this is how they look:

<src/model.rs>

...
#[derive(Identifiable, Queryable, Debug, PartialEq)]
#[table_name = "shooter"]
#[primary_key(shooter_id)]
pub struct Shooter {
    pub shooter_id: i32,
    pub shooter_name: String,
    pub shooter_password: String,
    pub shooter_status: String,
    pub shooter_email: String,
    pub shooter_real_name: String,
    pub shooter_create_time: chrono::NaiveDateTime,
    pub shooter_active_time: Option<chrono::NaiveDateTime>,
    pub shooter_inactive_time: Option<chrono::NaiveDateTime>,
    pub shooter_remove_time: Option<chrono::NaiveDateTime>,
    pub shooter_modify_time: chrono::NaiveDateTime,
}
...
#[derive(Identifiable, Associations, Queryable, Debug, PartialEq)]
#[belongs_to(Shooter, foreign_key = "shooter_id")]
#[table_name = "oauth"]
#[primary_key(oauth_id)]
pub struct Oauth {
    pub oauth_id: i32,
    pub oauth_vendor: String,
    pub oauth_user: String,
    pub shooter_id: i32,
    pub oauth_status: String,
    pub oauth_create_time: chrono::NaiveDateTime,
    pub oauth_last_use_time: chrono::NaiveDateTime,
    pub oauth_modify_time: chrono::NaiveDateTime,
}

I’ll get it to work eventually – I really hope it isn’t failing because I didn’t specifically name my primary fields just id like in the examples Diesel gives in their guides. It seems like naming shooter_id in table oauth to match shooter_id in the shooter table should make things obvious. Hopefully we aren’t forced to always use id as the primary field… no, that can’t be it.

Anyway, back to verifying. The other main case is that an oauth record with this token is NOT found in the table. Which means it is a new connection we haven’t seen before. If the session is already logged in, we just need to attach this oauth token to the logged in user and return true!

Some(id) => {
    create_oauth(&connection, &vendor, token, id);
    true
}

Otherwise, two choices – we will try to match on an existing shooter via the email address. If we find a match, we log them in and again attach this oauth token to their shooter record.

 None => match shooter
     .filter(shooter_email.eq(&email))
     .first::<Shooter>(&connection)
 {
     // email address WAS found in shooter table
     Ok(s) => {
         create_oauth(&connection, &vendor, token, s.shooter_id);
         true
     }

Otherwise, we don’t get a hit; that is, we haven’t seen this oauth token before AND we haven’t seen this validated email address before. We have to call that a brand new shooter account. I mentioned we create accounts from the OAuth requests using Diesel – this is where that happens. In this case, we create both the shooter record and the oauth record, linking them together.

// email address not found in shooter table
Err(diesel::NotFound) => {
    let this_shooter =
        create_shooter(&connection, name, None, email,
            &"active".to_string());
    session.shooter_id = Some(this_shooter.shooter_id);
    create_oauth(&connection, &vendor, token, this_shooter.shooter_id);
    true
}

Using Diesel to Insert Records

As we fall back out of the stack of functions we’ve called, because we return true here the session will get updated with the shooter_id – they are now logged in. Also, the shooter and oauth records are saved, so if they come back, they can just validate and be logged into their same account again. Here are the two methods that create those records:

<src/shooter.rs>

...
pub fn create_shooter<'a>(
    connection: &PgConnection,
    name: &'a String,
    password: Option<&'a String>,
    email: &'a String,
    status: &'a String,
) -> Shooter {
    use crate::schema::shooter::dsl::*;

    let new_shooter = NewShooter {
        shooter_name: name.to_string(),
        shooter_password: match password {
            Some(p) => p.to_string(),
            None => thread_rng()
                .sample_iter(&Alphanumeric)
                .take(64)
                .collect::<String>(),
        },
        shooter_status: status.to_string(),
        shooter_email: email.to_string(),
        shooter_real_name: name.to_string(),
    };

    diesel::insert_into(shooter)
        .values(&new_shooter)
        .get_result(connection)
        .expect("Error saving new Shooter")
}
<src/oauth.rs>

...
pub fn create_oauth<'a>(
    connection: &PgConnection,
    vendor: &'a String,
    user_id: &'a String,
    shooterid: i32,
) -> Oauth {
    use crate::schema::oauth::dsl::*;

    let new_oauth = NewOauth {
        oauth_vendor: vendor.to_string(),
        oauth_user: user_id.to_string(),
        shooter_id: shooterid,
    };

    diesel::insert_into(oauth)
        .values(&new_oauth)
        .get_result(connection)
        .expect("Error saving new Oauth")
}

As far as writing these new records to PgSQL – in both cases, we have NewShooter and NewOauth structs that allow us to set the bare minimum of fields without having to worry about the fields that PgSQL will default for us (like the create_date fields). We set up the appropriate struct and pass it to insert_into(). Adding .get_result() will return the newly created record to us, so we have access to the brand new shooter_id or oauth_id.
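
For context, here is my sketch of the SQL this amounts to on Postgres – .get_result() works because the insert carries a RETURNING clause that hands the full new row back:

```sql
INSERT INTO shooter (shooter_name, shooter_password, shooter_status,
                     shooter_email, shooter_real_name)
VALUES ('Jane', '...', 'active', 'jane@example.com', 'Jane')
RETURNING *;  -- the returned row includes the db-assigned shooter_id
```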

Complexity

If a user comes to the site, signs in with one OAuth (which creates their shooter record and attaches that oauth token) and then signs in with the other, this logic figures out they are validated to be the same person, so creates just a single shooter record with two oauth records, and both point to the one user. If they come back, they can authenticate via either third-party and are allowed back in.

Ok, more to come as I figure out other problems. I haven’t gone through that logic tightly enough to make sure I don’t have any holes – and it wouldn’t surprise me to find some. It doesn’t really matter – this is certainly teaching me Rust! Give it a try at PinpointShooting.com – but don’t be surprised if your shooter account gets deleted, constantly.

Rust Functions, Modules, Packages, Crates, and You

Wooden pallets stacked one on top another
I know the code is in here… somewhere.

Come to find out, I’m learning Rust from old documentation. Both of the printed Rust books I have are for the pre-“2018 edition” and I think that’s contributing to some confusion I have about functions, modules, packages, and crates. A new version of the official book is coming out in the next month or so – I have a link to it through Amazon in the right sidebar. If you’ve been reading the online documentation, you’re ok – it is updated for the “2018 edition”. I’ve looked at some of these parts of Rust before, but I recently found another new resource, the Edition Guide, which clears up some of my issues. Of special interest here is the section on Path Clarity, which was heavily influenced by RFC 2126 and improved this part of Rust.

I learned some of the history (and excitement) of RFC 2126 while listening to the Request for Explanation podcast, episode 10. Anyway, let’s go back to basics and have a look at Rust functions, modules, packages and crates as the language sits in mid-2019. I’ll present some examples from my web application we’ve been looking at. I’m going to cut out unnecessary bits to simplify things, so a “…” means there was more there in order for this to compile. You can always see whatever state it happens to be in, here.

Crates and Packages

A Rust crate (like Rocket or Diesel) is a binary or library of compiled code. A binary crate is runnable while a library crate is used for its functionality by being linked with another binary. A package (like my web app) ties together one or more crates with a single Cargo.toml file. The toml file configures the package‘s dependencies and some minimal information about compiling the source. A binary crate will have a src/main.rs with a main() function which directs how the binary runs. A library crate will have a src/lib.rs which is the top layer of the library. This top layer directs which pieces inside are available to users of the library.

Rust Functions

Functions are easy – subroutines in your source code. A function starts with fn, possibly receives some parameters, and might return a value. Also, a function may be scoped as public or kept private. The main() function inside src/main.rs is a special function that runs when the binary is called from the command line. It dictates the start of your program and you take control from there. You may create other functions, just avoid reserved words (or use the r# prefix to indicate you mean YOUR function, not the reserved word – for instance, r#match if you want to name a function “match”). Very similar to functions are methods and traits, which we’ve looked at before.
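
A quick, runnable sketch of that r# prefix (my example, not from the app):

```rust
// "match" is a reserved word, but the r# prefix lets us use it as a
// function name anyway.
fn r#match(needle: &str, haystack: &str) -> bool {
    haystack.contains(needle)
}

fn main() {
    // The call site needs the r# prefix too.
    assert!(r#match("foo", "foobar"));
    assert!(!r#match("baz", "foobar"));
    println!("r#match works");
}
```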

<src/lib.rs>

...
use diesel::prelude::*;
...
pub fn setup_db() -> PgConnection {
    PgConnection::establish(&CONFIG.database_url)
        .expect("Error connecting to db")
}

setup_db() is a fairly simple function – it accepts no incoming parameters and returns a database connection struct called PgConnection. It has pub before fn to indicate it is a “public” function. Without that, my web application src/bin/pps.rs could not call this function – it would not be in scope. Without pub, setup_db() would only be callable from within src/lib.rs. Since I am designing my application as a library crate, I choose to put setup_db() in the main src/lib.rs file. My binary that I will use to “run” my web application is in src/bin/pps.rs and contains a main() function.

Let’s look at the return type, PgConnection. This is a struct defined by the database ORM library crate, Diesel. The only way I could write a function that returns this particular type of struct is because I have use diesel::prelude::*; at the top (and it’s in the toml file as well). The Diesel library crate provides prelude as a simple way to bring in all Diesel has to offer my package. Diesel provides the PgConnection struct as public (or what good would the crate be), so I can now use that struct in my code. This also gives me the (method or trait, how can you tell?) establish(). Just like you’d call String::new() for a new string, I’m calling PgConnection::establish() for a new database connection and then returning it (see, no trailing ; on the line).




Rust Modules

Functions (and other things) can be grouped together into a Module. For instance, setup_logging() is also in src/lib.rs. However, I could have wrapped it inside a named module, like so:

<src/lib.rs>

...
pub mod setting_up {
    ...
    use logging::LOGGING;
    use settings::CONFIG;

    pub fn setup_logging() {
        let applogger = &LOGGING.logger;

        let run_level = &CONFIG.server.run_level;
        warn!(applogger, "Service starting"; "run_level" => run_level);
    }
}

Now it is part of my setting_up module. Here also, the module needs to be pub so that my application can use it and the public functions inside it. Now all of the enums and structs and functions inside the module setting_up are contained together. As long as they are public, I can still get to them in my application.
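
A minimal, self-contained sketch of that visibility rule (my example, not from the app):

```rust
// A named module: public items are reachable from outside, private
// ones are not.
pub mod setting_up {
    pub fn banner(run_level: &str) -> String {
        format!("Service starting; run_level {}", run_level)
    }

    #[allow(dead_code)]
    fn private_helper() {} // invisible outside setting_up
}

fn main() {
    // Outside code reaches the function through its module path.
    let msg = setting_up::banner("production");
    assert_eq!(msg, "Service starting; run_level production");
    println!("{}", msg);
}
```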

Notice I use logging::LOGGING; and use settings::CONFIG; These bring in those two structs so I can use the global statics that are built when the application starts. I included pub mod logging; and pub mod settings; at the top level, in src/lib.rs, so they are available anyplace deeper in my app. I just need to use them since I reference them in this module’s code.

Splitting firewood with an axe

Split, for Clarity

On the other hand, instead of defining a module, or multiple modules, inside a single file like above, you can use a different file to signify a module. This helps split out and separate your code, making it easier to take in a bit at a time. I did that here, with logging.rs:

<src/logging.rs>

...
use slog::{FnValue, *};

pub struct Logging {
    pub logger: slog::Logger,
}

pub static LOGGING: Lazy<Logging> = Lazy::new(|| {
    let logconfig = &CONFIG.logconfig;

    let logfile = &logconfig.applog_path;
    let file = OpenOptions::new()
        .create(true)
        .write(true)
        .truncate(true)
        .open(logfile)
        .unwrap();

    let applogger = slog::Logger::root(
        Mutex::new(slog_bunyan::default(file)).fuse(),
        o!("location" => FnValue(move |info| {
        format!("{}:{} {}", info.file(), info.line(), info.module(), )
                })
        ),
    );

    Logging { logger: applogger }
});

I have a struct and a static instance of it, both of them public, defined in logging.rs. logging.rs becomes a module of my library crate when I specify it. At the top of src/lib.rs I have pub mod logging; which indicates my library crate uses that module file logging.rs and “exports” what it gets from that module as public (so my src/bin/pps.rs application can use what it provides).

In this case, you also see use slog::{FnValue, *}; which is like use slog::FnValue; (which I need for the FnValue struct) combined with use slog::*; (which gives me the fuse() method and the o! macro). I was able to combine those into a single use statement to get just what I needed from that external crate.

The old books I have been referencing have you declaring the third-party crates you want to use in your application in your Cargo.toml file (which is still required), but also you’d have to bring each one in with an extern crate each_crate; at the top of main.rs or lib.rs. Thankfully, that’s no longer needed… 99% of the time. In fact, I had a long list of those myself – I am surprised cargo build didn’t warn me it was unneeded. Actually, I do have one crate I am using which still needs this “2015-edition” requirement: Diesel. Apparently, it is doing some fancy macro work and/or hasn’t been upgraded (yet?) for the “2018-edition” of Rust, so at the top of src/lib.rs, I have:

#[macro_use]
extern crate diesel;

A Few Standards and TOMLs

The Rust crate std is the standard library, and is included automatically. The primitive data types and a healthy list of macros and keywords are all included. But, if you need filesystem tools: use std::fs; and if you need a HashMap variable, you’ll need to use std::collections::HashMap; And yes, all external crates you depend on inside your source will need to be listed in Cargo.toml. This configuration helps you though – it updates crates automatically as minor versions become available, but does NOT update if a major version is released. You will need to do that manually, so you can test to see if the major release broke anything you depended on in your code. Here is a piece of my ever-growing Cargo.toml file for the web application so far:

...
[dependencies]
slog = "2.5.0"
slog-bunyan = "2.1.0"
base64 = "0.10.1"
rand = "0.7.0"
rand_core = "0.5.0"
rust-crypto = "0.2.36"
config = "0.9.3"
serde = "1.0.94"
serde_derive = "1.0.94"
serde_json = "1.0.40"
once_cell = "0.2.2"
dotenv = "0.14.1"
chrono = "0.4.7"
rocket = "0.4.2"
rocket-slog = "0.4.0"

[dependencies.diesel]
version = "1.4.2"
features = ["postgres","chrono"]

[dependencies.rocket_contrib]
version = "0.4.2"
default-features = false
features = ["serve","handlebars_templates","helmet","json"]
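
One note on those version strings (my annotation, not part of the real file): a bare version is shorthand for a caret requirement, which is what gives the auto-update-within-major behavior described earlier:

```toml
[dependencies]
# "2.5.0" means ^2.5.0: cargo update may move to 2.5.1 or 2.6.0,
# but never to 3.0.0
slog = "2.5.0"
# an exact pin, if ever needed, looks like this:
# slog = "=2.5.0"
```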