Mounting An Encrypted USB Drive From the Command Line

Some Background

I have a headless server that I use as a sort of remote heavy-lifter for my code, and attached to it is a USB drive that I use for data files. Since USB drives are portable, I decided to encrypt it with LUKS. That's easy enough to use on the Ubuntu desktop (the "Files" GUI prompts you for the password and handles everything after that), but since I use the server headless I have to be able to mount the drive from the command line. If you search for it, there's a Stack Overflow thread that mostly tells you how to do it, but:

  • I didn't know the /dev file to use
  • Like many Stack Overflow threads there's a lot of noise that isn't relevant to me
  • I want to be able to remember how to do this without searching and clicking through links to figure out which one has the right information for me

So, here's the subset of steps that I did to mount the drive.

Middle

Find the USB Device Name

The first thing to do is make sure that the operating system recognizes the USB device.

lsusb

Which produced a lot of listings, the most relevant one being:

Bus 001 Device 002: ID 1058:0748 Western Digital Technologies, Inc. My Passport (WDBKXH, WDBY8L)

Which is the drive I wanted to decrypt and mount. The next thing is to find the device file name (in this case I know the name of the device - "My Passport" - so I used grep; otherwise I'd page through the output with less).

sudo fdisk -l | grep "My Passport" -B 1

Which currently gives this:

Partition 2 does not start on physical sector boundary.
Disk /dev/sdb: 931.49 GiB, 1000170586112 bytes, 1953458176 sectors
Disk model: My Passport 0748

It might have looked a little different when I originally ran it, since the drive is already mounted now, but the device file in that second line (/dev/sdb) is what we want.

That gives us the name of the disk, but we're going to mount a partition, so you need to know the partition name. lsblk will show it to us (the -e7 excludes loop devices, which have major device number 7).

lsblk -e7

Which gave me the output:

NAME                        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                           8:0    0 931.5G  0 disk  
├─sda1                        8:1    0     1M  0 part  
├─sda2                        8:2    0     1G  0 part  /boot
└─sda3                        8:3    0 930.5G  0 part  
  └─dm_crypt-0              253:0    0 930.5G  0 crypt 
    └─ubuntu--vg-ubuntu--lv 253:1    0   200G  0 lvm   /
sdb                           8:16   0 931.5G  0 disk  
└─sdb1                        8:17   0 931.5G  0 part

Now you can see that the partition for our disk is sdb1 (the last row, where it's shown as a child of sdb with TYPE part).

Unlock the Drive

Note: This works, but there's an alternative way to do it with cryptsetup that I find a little easier (though not by much). I documented that command, as if continuing from this point, in this post.

Next, unlock the drive. Doing this will create a file in /dev/mapper/ that you'll need later, so it's a good idea to see what's already there before you run it.

ls /dev/mapper/

And then do the decrypting, remembering that the partition is sdb1 and that, like the disk, its file lives in the /dev directory.

udisksctl unlock -b /dev/sdb1

This will bring up two prompts, which are (confusingly) "Passphrase:" and "Password:". The first ("Passphrase:") is the LUKS passphrase you entered when the disk was encrypted - whatever you normally enter to decrypt the disk. The second ("Password:") is your admin password, so that the program can run as root (assuming you have the right privileges).

Mount the Drive

If the last command went okay, you now need to mount the drive. There will be a file in /dev/mapper that you need to know. When I did it there was only one new file (luks-3eea956c-e684-4bcb-a640-97d0c8c5a700), so I didn't have to do anything special to find it.

udisksctl mount -b /dev/mapper/luks-3eea956c-e684-4bcb-a640-97d0c8c5a700

If you run lsblk -e7 again, it will show a tree with the /dev/mapper/ file mapped to the mount point where you can access the drive.

sdb                                             8:16   0 931.5G  0 disk  
└─sdb1                                          8:17   0 931.5G  0 part  
  └─luks-3eea956c-e684-4bcb-a640-97d0c8c5a700 253:3    0 931.5G  0 crypt /media/hades/WDData

So in this case the drive is accessible at /media/hades/WDData (it's always mounted to the same place, but I wanted to document the lsblk -e7 check).

End

So, for my future self: if you need to mount an encrypted USB drive without a GUI, there you go. The two main steps are to find the device file for the USB drive and then run udisksctl.

sudo fdisk -l
udisksctl unlock -b /dev/sdb1
udisksctl mount -b /dev/mapper/luks-3eea956c-e684-4bcb-a640-97d0c8c5a700

Sources

  • sourcedigit.com - "How To List USB Devices On Ubuntu – Find USB Device Name On Linux Ubuntu"
  • Stack Overflow - "Mount encrypted volumes from command line?"

A Mind For Numbers

Citation

  • Oakley BA. A mind for numbers: how to excel at math and science (even if you flunked algebra). New York: Jeremy P. Tarcher/Penguin; 2014. 316 p.

Notes

This is a tour of ideas to help students learn how to learn, with an emphasis on math and science. Oakley covers topics ranging from how to study, to procrastination, to test-taking. Her ten rules are (paraphrased):

  1. Recall things early and often to make them stick
  2. Use spaced repetition when recalling
  3. Interleave different types of problems so you don't overlearn one type
  4. Build ideas into chunks of larger concepts
  5. Explain it as if to a five-year-old and use analogies to learn abstract ideas
  6. Test yourself
  7. Focus on one thing at a time
  8. Take breaks
  9. Eat frogs first (do the important but unpleasant things first)
  10. Use mental contrasting (comparing where you are now to where you expect to be) to motivate yourself

There's more in her book than those ten things, but they're the main points, and they touch on her other ideas even when not stated outright. She particularly emphasizes chunking and switching between "focused" and "diffuse" modes of thinking, and many of her suggestions address how to use both modes while learning.

Precision, Recall, and the F-Measure

Beginning

When we're looking at how well a model (or a person) is doing, it's often best to have a numeric value we can calculate, to make comparisons easy. The first thing many people reach for is accuracy, but this isn't always the best metric. Unbalanced data sets can distort it, for instance: if 90% of the data is spam, then a model that always guesses that an email is spam will have decent accuracy, but it really won't be all that useful (except for pointing out that you have too much spam). To remedy this and other problems I'll look at some alternative metrics (precision, recall, and the F-measure) which are useful for judging how well classification models are doing.

The Metrics

Positive and Negative

First some terminology. We're going to assume that we want to label data as either being something or not being that thing. e.g. guilty or not guilty, duck or not a duck, etc. The label for things that are the thing is called Positive and the label for things that aren't the thing is Negative.

Term            Acronym  Description
True Positive   TP       We labeled it positive and it was positive
False Positive  FP       We labeled it positive and it was negative
True Negative   TN       We labeled it negative and it was negative
False Negative  FN       We labeled it negative and it was positive

This is sometimes represented using a matrix.

                    Actually Positive  Actually Negative
Predicted Positive  True Positive      False Positive
Predicted Negative  False Negative     True Negative

Accuracy

Okay, I said we aren't going to use accuracy, but just to be complete… accuracy asks what fraction of the answers you got correct.

\[ \textrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \]

This is probably what most of us are familiar with from being graded in school.
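To make the spam example from earlier concrete, here's a small sketch (the counts are hypothetical, just for illustration):

```python
# A model that always predicts "spam" on a 90%-spam dataset.
# Hypothetical counts: 90 spam emails, 10 legitimate ones.
TP = 90  # spam labeled spam
FP = 10  # legitimate labeled spam
TN = 0   # legitimate labeled legitimate (never happens)
FN = 0   # spam labeled legitimate (never happens)

accuracy = (TP + TN) / (TP + TN + FP + FN)
print(accuracy)  # 0.9 - looks decent, but the model is useless
```

Ninety percent accuracy without ever really looking at an email, which is why the metrics below are often more informative.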

Precision

How much of what was predicted positive was really positive?

\[ \textrm{Precision} = \frac{TP}{TP+FP} \]

Since we have the count of false positives in the denominator, your score goes down the more negatives you label positive (e.g. the more innocent people you convict, the lower your score).

Recall

How many of the positives did you catch?

\[ \textrm{Recall} = \frac{TP}{TP + FN} \]

Here your score goes down the more positives you miss (the more guilty people you find innocent).

F-Measure

So, in some cases you might want to favor Precision over Recall and vice-versa, but what if you don't really want one over the other? The F-Measure allows us to combine them into one metric.

\[ F_{\beta} = \frac{(\beta^2 + 1) \textit{Precision} \times \textit{Recall}}{\beta^2 \textit{Precision} + \textit{Recall}} \]

To make it simpler I'll just use P for precision and R for recall from here on.

The \(\beta\) in the equation is a parameter we can tune to favor precision or recall. If you'll notice, in the numerator \(\beta^2\) scales precision and recall equally, while in the denominator it scales only precision; so the larger \(\beta\) is, the less precision matters relative to recall (in the limit, \(F_{\beta}\) approaches recall).

\begin{align} \beta > 1 &: \textit{Favor Recall}\\ \beta < 1 &: \textit{Favor Precision}\\ \end{align}

F1 Measure

If you look at the inequalities for the effects of \(\beta\) on the F-Measure you might notice that they don't include 1. That's because when \(\beta\) is 1 it doesn't favor either precision or recall: \(F_1\) is just the harmonic mean of the two, treating them equally.

\[ F_1 = \frac{2PR}{P + R} \]
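Here's a minimal sketch of these metrics in Python (the function names and the spam-filter counts are my own, just for illustration):

```python
def precision(tp, fp):
    """What fraction of our positive labels were really positive?"""
    return tp / (tp + fp)

def recall(tp, fn):
    """What fraction of the actual positives did we catch?"""
    return tp / (tp + fn)

def f_beta(p, r, beta=1.0):
    """Combine precision and recall; beta > 1 favors recall, beta < 1 favors precision."""
    return (beta**2 + 1) * p * r / (beta**2 * p + r)

# Hypothetical spam-filter counts: 90 spam caught, 10 false alarms, 5 spam missed.
p = precision(90, 10)  # 0.9
r = recall(90, 5)      # ~0.947

f1 = f_beta(p, r)                # ~0.923, treats both equally
f2 = f_beta(p, r, beta=2)        # ~0.938, pulled toward recall
f_half = f_beta(p, r, beta=0.5)  # ~0.909, pulled toward precision
```

Since recall (~0.947) is higher than precision (0.9) in this example, weighting recall more (β = 2) raises the score and weighting precision more (β = 0.5) lowers it.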

Neural correlates of maintaining one's political beliefs in the face of counterevidence

Citation

  • Kaplan JT, Gimbel SI, Harris S. Neural correlates of maintaining one’s political beliefs in the face of counterevidence. Scientific Reports. 2016 Dec 23;6:39589.

Abstract

Context

People often discount evidence that contradicts their firmly held beliefs.

Conflict

However, little is known about the neural mechanisms that govern this behavior.

Consequence

Method

We used neuroimaging to investigate the neural systems involved in maintaining belief in the face of counterevidence, presenting 40 liberals with arguments that contradicted their strongly held political and non-political views.

Results

Challenges to political beliefs produced increased activity in the default mode network—a set of interconnected structures associated with self-representation and disengagement from the external world. Trials with greater belief resistance showed increased response in the dorsomedial prefrontal cortex and decreased activity in the orbitofrontal cortex. We also found that participants who changed their minds more showed less signal in the insula and the amygdala when evaluating counterevidence. These results highlight the role of emotion in belief-change resistance and offer insight into the neural systems involved in belief maintenance, motivated reasoning, and related phenomena.

The Peculiar Blindness of Experts

Citation

Commentary

The Bet

Paul R. Ehrlich predicted that population growth would cause scarcity and starvation, and pushed for regulations to protect the environment and resources. Julian Simon predicted that technological innovation would solve the scarcity problem and clean up the environment. They placed a bet on the price of five metals, which they used as a proxy for resource scarcity. Ehrlich lost the bet but doubled down on his predictions, saying the timing was off. Later studies showed that the price of metals didn't reflect scarcity, and that Simon had just had the luck of the market. Ehrlich was wrong on scarcity but right on the need for regulation to protect the environment; Simon was right that technology would prevent the catastrophe Ehrlich predicted, but couldn't concede that it was regulation, not technology, that helped the environment.

Tetlock's Study

Philip Tetlock ran a study in which he collected political-science experts' predictions for twenty years. At the end he found that they were generally horrible at forecasting, and that even when shown their failed predictions, the experts wouldn't concede that they were wrong. The experts tended to predict along the lines of their political identities, but one group took ideas from multiple camps and was more successful at making predictions. Tetlock named the two types of experts Hedgehogs and Foxes.

Hedgehog          Fox
specialized       integrative
know "one thing"  know "many things"
narrow            broad

The more experienced Hedgehogs became, the more they tended to use their added knowledge to bend reality to fit their view.

See this Wikipedia page - The Hedgehog and the Fox - for the source of those terms.

Multa novit vulpes, verum echinus unum magnum - "a fox knows many things, but a hedgehog one important thing".

The Prediction Tournament

IARPA saw Tetlock's study and set up a prediction tournament to test teams of experts. Tetlock entered with a team of volunteers identified as Foxes and won. The Foxes tended to see teammates as sources of information to learn from, while Hedgehogs saw teammates as adversaries who needed to be convinced of their opinions. Both Foxes and Hedgehogs saw successes as reinforcing their beliefs, but when conflicting information came up the Foxes updated their positions while the Hedgehogs doubled down on their prior beliefs instead.

Peter Elbow's Believing Game

Description

Background

In his paper on The Believing Game, Peter Elbow proposes that in addition to the traditional "Doubting Game" (skeptical thinking) we should also engage in the "Believing Game", which involves understanding an argument by accepting it as true. He argues that while the skeptical, scientific method (searching for flaws) is valuable, it needs to be augmented by an accepting method (searching for virtues) as well. The Doubting Game has dominated because of its usefulness, but it can lead us to nurture blind spots that are protected by our skepticism. By accepting the arguments of someone whose position we dislike, we can potentially find flaws in our own thinking.

An important point that Elbow makes is that there are two levels at which we have to look at arguments - the logic behind the arguments and the actual thing that is being argued for or against. It is possible to make a flawed argument for a good idea and a sound argument for a bad idea. The Doubting Game is a search for flaws in the argument while the Believing Game is a way to examine the underlying position that the argument is trying to make.

Doubting Game  Believing Game
Propositions   Experience
Analyze        Understand
Detach         Jump In

Within the sciences you can see many cases where adopting a believing mindset ultimately proved useful, leading to paradigm shifts - the switch from an Earth-centered cosmos to an Earth orbiting the Sun, miasma theory giving way to germ theory, and the acceptance of plate tectonics, among other examples. But Elbow says it is also helpful to play the Believing Game even if you ultimately don't accept the underlying premise, because without the suspension of disbelief you won't fully understand what's being proposed, and ultimately "They may seem wrong or crazy–they may be wrong or crazy–but nevertheless they may still be able to see something that none of us can see."

The Game

Graff & Birkenstein propose a concrete version of the Believing Game as an exercise:

To get a feel for Peter Elbow's "believing game," write a summary of some belief that you strongly disagree with. Then write a summary of the position that you actually hold on this topic. Give both summaries to a classmate or two, and see if they can tell which position you endorse. If you've succeeded, they won't be able to tell.

Hedgehogs and Foxes

Besides paradigm shifts, this also makes me think of an article in the Atlantic about how experts tend to make horrible predictions. Those who hold to a specific view and bend conflicting information to fit it (Hedgehogs) predict worse than cross-disciplinary generalists who take in information from other experts and update their prior beliefs when conflicting information comes in (Foxes).

Einstellung and Shoshin

Now I'm straying way into left field, but thinking of the Hedgehogs and Foxes puts me in mind of the Einstellung effect, wherein people get stuck trying to apply the same solution even when it is no longer applicable, while those lacking experience with the problem can sometimes see possibilities that are less obvious to the experienced. The point of reading Elbow's paper was to get an idea of how to write effective summaries of other people's work, but instead I think I've talked myself into believing it's a way to keep a "beginner's mind".