Apt-Fast In Sparky Linux

Background

I have an old Eee PC netbook that I thought I'd revive by loading Sparky Linux onto it. One of the things I set up on it was apt-fast, which the README in the GitHub repository describes like this:

apt-fast is a shellscript wrapper for apt-get and aptitude that can drastically improve apt download times by downloading packages in parallel, with multiple connections per package.

I've used it for a while on Ubuntu, but Sparky Linux didn't have it in its repositories. The apt-fast documentation has instructions for installing it on Debian (and derivatives), and since Sparky Linux is based on Debian (the current version, SparkyLinux 6.7 (Po-Tolo), is based on Debian bullseye), I decided to try that. Ultimately I got it working but, as is often the case, it wasn't as straightforward as I would have liked.

Unusual Ingredients List:

  • SparkyLinux 6.7 (Po-Tolo)
  • fish shell

The Instructions

Although the PPA system is built for Ubuntu, the recommendation from apt-fast is to use it with Debian-based systems too (apt-fast is just a shell script that runs aria2 and apt (or apt-get, etc.), so there aren't a lot of dependencies that might conflict).
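Since that's all it is, the core idea is small enough to sketch. This is not apt-fast's actual source, just a rough illustration of what a wrapper like it does (some-package is a stand-in for whatever you're installing): ask apt-get for the download URIs without fetching anything, hand them to aria2c to download in parallel into apt's cache, then let apt-get install from the cache.

# ask apt-get what it would download, without downloading it
apt-get --print-uris -y install some-package \
    | grep "^'" | cut -d "'" -f 2 > /tmp/uris.txt
# fetch everything in parallel, several connections per file
aria2c --input-file=/tmp/uris.txt --split=8 \
    --max-connection-per-server=8 --dir=/var/cache/apt/archives
# apt-get now finds the packages already in its cache and just installs
apt-get -y install some-package

Anyway, this is what they say to do.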

Create An Apt Entry

First I created a file for the sources at /etc/apt/sources.list.d/apt-fast.list and put these lines in it.

deb http://ppa.launchpad.net/apt-fast/stable/ubuntu bionic main 
deb-src http://ppa.launchpad.net/apt-fast/stable/ubuntu bionic main

Bionic came out in 2018, so maybe they haven't updated the instructions in a while.

Add the Keyring and Install

Once the file was in place I ran these commands.

apt-key adv --keyserver keyserver.ubuntu.com --recv-keys A2166B8DE8BDC3367D1901C11EE2FF37CA8DA16B
apt-get update
apt-get install apt-fast

The first output I saw was a warning:

Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).

This is something I've seen on ubuntu as well so fixing it seemed like a useful thing to do, especially since at the end of the regular output I got an error.

Between the warning and the final error there was the usual output that I've seen:

Executing: /tmp/apt-key-gpghome.YL04bWmGAF/gpg.1.sh --keyserver keyserver.ubuntu.com --recv-keys A2166B8DE8BDC3367D1901C11EE2FF37CA8DA16B
gpg: key 1EE2FF37CA8DA16B: public key "Launchpad PPA for apt-fast" imported
gpg: Total number processed: 1
gpg:               imported: 1

Then came this, the error I mentioned, which is not what I usually see:

gpg: no writable keyring found: Not found
gpg: error reading '[stdin]': General error
gpg: import from '[stdin]' failed: General error
gpg: Total number processed: 0

I don't know if this is a Debian problem or a Sparky Linux problem, but since you're not supposed to be using this method anyway, I went looking for a different solution.

The apt-key Solution

The Start of the Solution

The first part of the solution was pointed to by this Stack Overflow answer. There was a problem with it, though: the person asking the question was using a URL that pointed to a gpg file, so the answers all assumed you could download the key with curl, wget, etc. (all the answers that I could understand, anyway). So now that I had an answer, I had a new problem - how do you get the file from the keyserver?

Getting the GPG File

Once again to the web; this time, this answer from SuperUser seemed to work.

First, I made a temporary directory and pointed GNUPGHOME to it so that I wasn't adding anything to my actual gpg setup (this is fish-shell syntax).

set -x GNUPGHOME $(mktemp -d)

echo $GNUPGHOME showed that this created a directory at /tmp/tmp.dUDUEgFQ0x (not that I actually needed to know that; I'm just mentioning it).

Taking the --recv-keys argument from the instructions above (apt-key adv --keyserver keyserver.ubuntu.com --recv-keys A2166B8DE8BDC3367D1901C11EE2FF37CA8DA16B) I added the key.

gpg --keyserver keyserver.ubuntu.com --recv-keys A2166B8DE8BDC3367D1901C11EE2FF37CA8DA16B

Next I made a place to put the gpg file.

sudo mkdir /etc/apt/keyrings

Then I exported the key to a file in my home directory (the GNUPGHOME environment variable is only set for my user, so I put the file somewhere I didn't need to be root to write, i.e. my home directory).

gpg -o A2166B8DE8BDC3367D1901C11EE2FF37CA8DA16B.gpg --export A2166B8DE8BDC3367D1901C11EE2FF37CA8DA16B

Then I moved the file into the directory I created for it.

sudo mv A2166B8DE8BDC3367D1901C11EE2FF37CA8DA16B.gpg /etc/apt/keyrings/

The SuperUser answer I linked to used gpg -ao, but the a option makes it an "armored" file, and part of the Stack Overflow answer for setting up the key is about de-armoring it, so I just left that option out.
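In hindsight, the whole dance collapses into a short pipeline, since gpg can export to stdout and sudo tee can write the root-owned file. I didn't re-run it this way, but each piece is just the steps above glued together:

set -x GNUPGHOME $(mktemp -d)
gpg --keyserver keyserver.ubuntu.com --recv-keys A2166B8DE8BDC3367D1901C11EE2FF37CA8DA16B
gpg --export A2166B8DE8BDC3367D1901C11EE2FF37CA8DA16B | sudo tee /etc/apt/keyrings/A2166B8DE8BDC3367D1901C11EE2FF37CA8DA16B.gpg > /dev/null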

Now Back to Setting it Up

So now that we have the keyring we need to edit the /etc/apt/sources.list.d/apt-fast.list file that we created at the beginning of this.

This is what I started with.

deb http://ppa.launchpad.net/apt-fast/stable/ubuntu bionic main 
deb-src http://ppa.launchpad.net/apt-fast/stable/ubuntu bionic main

And I changed it to refer to the gpg file that I created.

deb [signed-by=/etc/apt/keyrings/A2166B8DE8BDC3367D1901C11EE2FF37CA8DA16B.gpg] http://ppa.launchpad.net/apt-fast/stable/ubuntu bionic main
deb-src [signed-by=/etc/apt/keyrings/A2166B8DE8BDC3367D1901C11EE2FF37CA8DA16B.gpg] http://ppa.launchpad.net/apt-fast/stable/ubuntu bionic main

Update and Install

So then I updated apt and installed it.

sudo apt update
sudo apt install apt-fast

And it worked.

And Now, Another Problem

This got me to a working apt-fast installation, but the fact that I was using bionic seemed off to me, so I decided to update apt-fast.list. Under the instructions for adding the PPA is this note.

Note that the PPA version bionic might need to be updated with the recent Ubuntu LTS codename to stay up-to-date.

So I looked up the Ubuntu Release Cycle, saw that "jammy" is the most recent LTS, and updated the apt-fast.list file to match.

deb [signed-by=/etc/apt/keyrings/A2166B8DE8BDC3367D1901C11EE2FF37CA8DA16B.gpg] http://ppa.launchpad.net/apt-fast/stable/ubuntu jammy main
deb-src [signed-by=/etc/apt/keyrings/A2166B8DE8BDC3367D1901C11EE2FF37CA8DA16B.gpg] http://ppa.launchpad.net/apt-fast/stable/ubuntu jammy main

And then I installed the newer version.

sudo apt update
sudo apt install apt-fast

And I got a nice long error message, at the bottom of which was this:

dpkg-deb: error: archive '/var/cache/apt/archives/apt-fast_1.9.12-1~ubuntu22.04.1_all.deb' uses unknown compression for member 'control.tar.zst', giving up
dpkg: error processing archive /var/cache/apt/archives/apt-fast_1.9.12-1~ubuntu22.04.1_all.deb (--unpack):
 dpkg-deb --control subprocess returned error exit status 2
Errors were encountered while processing:
 /var/cache/apt/archives/apt-fast_1.9.12-1~ubuntu22.04.1_all.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

The part of it that seemed like it might matter the most was the fragment:

uses unknown compression for member 'control.tar.zst', giving up

What is a zst file? According to Wikipedia it's a "Zstandard" file, and Debian and Ubuntu added support for using it to compress deb packages back in 2018. There's a package in apt called zstd that says it supports zst compression, so I installed it, but the error remained.
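Since a .deb is just an ar archive, you can also check which compression a package uses by listing its members (ar comes with binutils). For the package in the error above, I'd expect output something like this, with the .zst suffixes showing the Zstandard compression:

ar t /var/cache/apt/archives/apt-fast_1.9.12-1~ubuntu22.04.1_all.deb
debian-binary
control.tar.zst
data.tar.zst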

Once again, someone ran into this and asked about it on Stack Exchange. One of the answers said:

Debian’s dpkg package didn’t support zstd compression prior to version 1.21.18. Support was added just in time for Debian 12.

Since the SparkyLinux install is based on Debian 11, that seemed like it might be the problem. I checked the dpkg version:

dpkg --version

And got back:

Debian 'dpkg' package management program version 1.20.12 (i386).

So that seemed like the likely culprit. The ubuntu release dates page noted that there was an LTS version between "bionic" and "jammy" called "focal", so I edited the apt-fast.list file again, replacing "jammy" with "focal", and re-ran the installation. So far… it works.

What Have We Learned Today, Children?

Further down in the installation instructions it says that you can just download the files and install them along with the aria2 package, so going through this whole thing was kind of unnecessary. But getting around the apt-key problem was something I'd wondered about before, so it might be useful in the future, if PPA creators keep using apt-key and an automatic fix never appears.

I guess the main thing I learned is that I should have read to the end of the instructions and picked the easy way out instead of trying to force the old familiar way to work.


Fish, Mocha, Chai - A Local Global Installation In Ubuntu

What This Is About

I've been getting back into p5.js lately and thought I should add some testing, so I went to their site and found a tutorial page called "Unit Testing and Test Driven Development", which I decided to follow along with to get re-acquainted with testing javascript. But then I ran into a problem running mocha - or, more specifically, mocha crashed because it couldn't find chai, even though I'd followed the instructions to install it. So here's what I did to fix it.

This is another case where you can basically find the answer online if you look at the right page - but there seem to be more pages with unhelpful answers than helpful ones, and since I use the fish shell and Ubuntu my setup is a little different from the stuff I found that did work.

The Tutorial's Installation

This is how they tell you to install mocha and chai.

First, update npm (assuming you've already installed it somehow).

sudo npm install npm@latest -g

Then install mocha and chai using npm.

npm install mocha -g
npm install chai -g

This right here is actually where the trouble starts. If you try to install things globally you need to run npm as root - thus the sudo when updating npm. But their instructions for installing mocha and chai don't say to use sudo, which results in a permission error. So did they forget to run it as root, or did they not mean to install globally? I decided to re-run their instructions as root.

sudo npm install mocha -g
sudo npm install chai -g

This seemed to work, but when I ran mocha on the folder where I put the code given in the tutorial:

mocha color_unit_tests/

It gave me an error.

Error: Cannot find module 'chai'
Require stack:
- /home/hades/projects/ape-iron/p5tests/color_unit_tests/test.js
    at Module._resolveFilename (node:internal/modules/cjs/loader:1097:15)
    at Module._load (node:internal/modules/cjs/loader:942:27)
    at Module.require (node:internal/modules/cjs/loader:1163:19)
    at require (node:internal/modules/cjs/helpers:110:18)
    at Object.<anonymous> (/home/hades/projects/ape-iron/p5tests/color_unit_tests/test.js:5:16
)
    at Module._compile (node:internal/modules/cjs/loader:1276:14)
    at Module._extensions..js (node:internal/modules/cjs/loader:1330:10)
    at Module.load (node:internal/modules/cjs/loader:1139:32)
    at Module._load (node:internal/modules/cjs/loader:980:12)
    at ModuleWrap.<anonymous> (node:internal/modules/esm/translators:169:29)
    at ModuleJob.run (node:internal/modules/esm/module_job:194:25)

So, maybe that wasn't the answer.

This Might Be the Wrong Way

I found a Stack Overflow question that described the exact problem I had, but one of the comments had this to say:

Mocha can be installed either locally or globally, but Chai can only be installed locally. Has to do with the way it is applied (i.e., to the specific app instance). – Steve Carey May 30, 2020 at 21:11

I don't know who "Steve Carey" is or whether what he's saying is true, but the chai installation instructions do tell you to install it locally rather than globally. When you do this for every project, though, you end up with node_modules and package.json files all over the place. I suppose there's a reason for this - maybe to couple the version of chai you're using to the project - but I decided to try another way.
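For reference, the local installation that the chai instructions recommend is just the un-flagged install run inside the project directory (my-project here is a stand-in), which is exactly what scatters node_modules and package.json around:

cd my-project
npm install chai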

The Local Global

This answer on Stack Overflow describes how to use a directory in your home directory as npm's global package directory. It assumes you're using bash, though, so I had to change it up a little bit.

Make a local package directory

First I made a local package directory.

mkdir ~/.npm-packages

Then I created a file called ~/.npmrc that had one line in it.

prefix = /home/hades/.npm-packages

With /home/hades/ being my home-directory.

Edit the Fish Configuration

At the bottom of the ~/.config/fish/config.fish file I added these lines.

set -x NPM_PACKAGES $HOME/.npm-packages

This is where npm will install things when you tell it to install them globally, once we're done. The folder can be named anything, I imagine, but it needs to match what's in the .npmrc file.

When npm installs packages some of them will be executable commands (like mocha) and so I had to update the fish PATH.

fish_add_path $HOME/.npm-packages/bin

Although this will make mocha available, chai isn't an executable, so you have to set the NODE_PATH variable so that node will know where to look for modules to import.

set --export NODE_PATH $NPM_PACKAGES/lib/node_modules

I was originally appending the current contents of NODE_PATH to the end, like you would with a regular path variable ($NPM_PACKAGES/lib/node_modules:$NODE_PATH), but for some reason this breaks something and the variable doesn't get set - or at least it was always empty when I tried to run mocha. So the solution for me was to just clobber the entire path (the variable was empty before I started using it anyway).
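My guess at the reason, based on how fish expands variables: a token that contains an empty list variable expands to nothing at all, so when NODE_PATH is empty the whole $NPM_PACKAGES/lib/node_modules:$NODE_PATH word disappears and set gets no value to assign. Passing the old value as a separate argument avoids that (an empty list just contributes nothing), and since fish joins variables whose names end in PATH with colons when exporting, node still sees a normal colon-separated value:

# safe append: $NODE_PATH is its own argument, so an empty list is harmless
set --export NODE_PATH $NPM_PACKAGES/lib/node_modules $NODE_PATH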

And Now

Running the tests again:

mocha ../../ape-iron/p5tests/color_unit_tests/


these are my first tests for p5js
  ✔ should be a string
  ✔ should be equal to awesome


2 passing (5ms)

The path is different because I'm writing this post in a different repository, but, anyway, it looks like it works.


Ubuntu 22.04, Python 3.11 and the "emacs-jupyter Symbol's variable is void" Error

What This Is About

Ubuntu 22.04 no longer lets you install python packages globally using pip by default (you can override it, but they warn you not to). This has caused a cascade of broken parts on my system, since I use python so much. This particular case started with me trying to start the jupyter kernel so that I could run some python code in org-mode and getting what looked like an error. Fixing it uncovered the fact that working with the new pip policy had broken my emacs setup a little too, so this is a dump of how I got it back up and running again. I recorded it as I was fixing things, so there might be a better way, but this is the first pass I took.

The Jupyter Kernel Warning

This is what happened when I tried to start the jupyter kernel.

(Ape-Iron) hades@erebus ~> jupyter kernel
[KernelApp] Starting kernel 'python3'
0.00s - Debugger warning: It seems that frozen modules are being used, which may
0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off
0.00s - to python to disable frozen modules.
0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.
[KernelApp] Connection file: /home/hades/.local/share/jupyter/runtime/kernel-a57a8231-bfea-4680-9f8b-6bf1b1e3a7ac.json
[KernelApp] To connect a client: --existing kernel-a57a8231-bfea-4680-9f8b-6bf1b1e3a7ac.json
0.00s - Debugger warning: It seems that frozen modules are being used, which may
0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off
0.00s - to python to disable frozen modules.
0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.

According to this Stack Overflow post the output, though scary-looking, is only a warning, and you should be able to ignore it. It happens because python 3.11 uses "frozen" versions of some of the built-in modules that get loaded when the interpreter starts up - their code objects are pre-allocated to reduce load time (i.e. python starts faster) - and this means the debugger might not work correctly. But since I'm not using the debugger, it shouldn't matter.
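A quick way to see what "frozen" means here, assuming your python3 is 3.11: ask a startup module where it was imported from, with and without the feature. The first command should print frozen, and the second, with frozen modules turned off, should print the path to the actual os.py:

python3 -c 'import os; print(os.__spec__.origin)'
python3 -X frozen_modules=off -c 'import os; print(os.__spec__.origin)'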

Ah, but there's always a problem lurking behind the advice to ignore "harmless warnings". Even with the kernel running, I couldn't get python/jupyter to work in my org-babel source blocks, so there was more to do.

Getting emacs-jupyter Working

The Problem

The first clue as to what might be happening was this line in emacs' startup messages.

Symbol’s function definition is void: org-babel-execute:jupyter-python

It looked like emacs-jupyter wasn't loading properly. There was also this message in the output:

Error retrieving kernelspecs: (json-number-format 5)

Searching for that error message brought up this bug report on github, wherein the author mentions that emacs-jupyter fails to load because it tries to parse the output of jupyter, and the warnings I was seeing make that parse fail (the bug report references a different jupyter command, but the problematic output is the same).

Testing Turning Off the Warning

The first thing I tried was to follow the directions in the output and suppress the warnings by setting an environment variable.

set --universal --export PYDEVD_DISABLE_FILE_VALIDATION 1

Note: This is fish-shell syntax.

I restarted the jupyter kernel and the warnings had gone away, so this much worked.

Really Turning Off the Warning

Setting the environment variable at the command line changes the environment for my user, but I'm running emacs as a daemon, so I needed to edit the systemd unit file for my emacs service. I opened the ~/.config/systemd/user/emacs.service file in emacs and added the line to set the environment variable for the emacs daemon.

[Service]
Environment="PYDEVD_DISABLE_FILE_VALIDATION=1"

Then I restarted the service.

systemctl restart --user emacs

Which gave me a warning that my changes to the configuration had to be reloaded before restarting the service.

Warning: The unit file, source configuration file or drop-ins of emacs.service changed on disk. Run 'systemctl --user daemon-reload' to reload units.

Oops.

systemctl --user daemon-reload
systemctl restart --user emacs

This time the emacs startup messages didn't have the jupyter errors so it looked like things were fixed.
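For next time: instead of editing the unit file by hand and forgetting the reload, systemctl can write this kind of override as a drop-in and reload the daemon itself when the editor closes. Something like this should open an editor on an override file where the same [Service] / Environment lines go:

systemctl --user edit emacs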

Swapping a Virtual Environment For pipx

Suppressing the warnings pretty much solved the problem, but while I was getting this fixed I was also trying to set up a USB Windows installer using WoeUSB and found that pipx couldn't install it because of a dependency error. pipx is good at installing some standalone python commands, but it won't install things that are just libraries, and it sometimes seems to have problems installing dependencies for the commands that it will install. This has come up for me before, and the old solution was to install the dependencies separately using pip before installing whatever it was with pipx. Now, though, since Ubuntu is trying to keep you from installing python modules globally, installing the dependencies means they either have to be available through apt or you have to set up a virtual environment and install them there (when I say have to, I mean that's the way I know how to do it - not that there aren't other ways that I just don't know about).

Doing it this way is easy enough, since I use python virtual environments a lot anyway, but then I ran into another problem: once I got the virtual environment set up, I found out I had to run WoeUSB as root, which bypasses the whole virtual environment setup. The solution was to pass the full path to the virtual environment's WoeUSB launcher to sudo, but it took enough time experimenting with other approaches before I got to that step that I decided I should minimize how much I use pipx - and in particular avoid using it with my emacs setup, since emacs will sometimes just quietly fail if there's a python-based error, and it's only when things don't work that I'll realize there's a problem. So I decided to go with a dedicated virtual environment instead of installing jupyter with pipx.

This, once again, was not a big deal in hindsight, but it took enough experimenting with other options before coming to the conclusion that this was the way to go that I thought I should make a note to my future self about it. To get jupyter working with emacs-jupyter:

  • create a virtual environment (python3 -m venv emacs-environment) in the .virtualenvs folder
  • activate it, then use pip to install wheel and jupyter (the commands are sketched below)
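Spelled out, that setup looks something like this (the environment name matches the init.el snippet below; bash users would source bin/activate instead of bin/activate.fish):

mkdir -p ~/.virtualenvs
python3 -m venv ~/.virtualenvs/emacs-environment
source ~/.virtualenvs/emacs-environment/bin/activate.fish
pip install wheel jupyter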

In the ~/.emacs.d/init.el file, activate the virtual environment before you load emacs-jupyter or anything else that needs python:

(require 'pyvenv)
(pyvenv-activate "~/.virtualenvs/emacs-environment")

Then restart emacs. So far this seems to have fixed it.


Kernel Panic At the Lunar Lobster

What Is This About?

I got a notification yesterday (May 3, 2023) that there's a new version of Ubuntu out (Lunar Lobster - 23.04), so I ran do-release-upgrade while watching a video (An Honest Liar) about James Randi. At the end of the upgrade there was some kind of error message about not being able to configure the Linux headers, which in retrospect should have worried me, but I was distracted so I just dismissed it. Then today when I booted up the computer I got an error message saying Kernel Panic - Not syncing: VFS: Unable to mount root fs on unknown-block(0,0). I managed to get it working (eventually), but since Ubuntu's been around long enough that search engines seem choked with outdated answers, I thought I'd document what I did in case it happens on another update.

First Get It Working

The first thing I tried was to follow the answers on this Stack Overflow page. The only thing this did was get my computer up and running again - which is, I suppose, a big thing, not to be minimized - but there's a lot of stuff on that page and the only relevant parts were:

  1. Reboot
  2. At the menu that comes up choose "Advanced Options for Ubuntu"
  3. Pick a prior version of the Linux kernel (5.19.26 in my case) and let it finish starting up.

Not a Solution But A Finger Pointing

The first thing I tried from that page was from this answer:

sudo dpkg --configure -a

This gave me error processing package linux-image-6.2.0-generic (--configure). So I searched some more and tried the suggestion from this askubuntu answer:

sudo dpkg --purge linux-image-6.2.0-20-generic

This gave an error along the lines of "dependency problems, not purging". Oi. So then I tried:

sudo apt autoremove

This gave me another error:

[image: Purge Error]

This time the error message said to check out /var/lib/dkms/nvidia/510.108.03/build/make.log, which turned out not to have any useful information (to me, anyway), but it did tell me that something was going on with my nvidia drivers that was causing the configuration of the new Linux kernel to fail.

Blame It On the Drivers

"Something going on with my nvidia drivers" being a little too vague for me to troubleshoot, I decided to go the brute force way and uninstall the nvidia-drivers. This actually proved a little harder than I thought it would be (which seems to always be the case, maybe I'm too optimistic about this kind of stuff). Every time I tried to run

sudo apt remove nvidia-driver-510

apt would try to configure the Linux kernel, run into the same error I had before, and exit without uninstalling the driver. Some kind of chicken-and-egg thing. So then I tried dpkg first, instead of apt:

sudo dpkg --purge nvidia-driver-510
sudo apt autoremove

dpkg managed to uninstall the driver, and running apt autoremove not only cleaned out the unused packages but also triggered the kernel configuration - and this time… no errors.

From Nouveau To Nvidia

After a reboot it started up okay, and this time uname -r showed that I was using the newer kernel (6.2.0-20-generic). Yay. But now when I tried to re-install the nvidia drivers, neither ubuntu-drivers nor apt seemed to know they existed. It turns out that upgrading the Ubuntu installation removes the proprietary drivers from the apt sources. So I launched the "Software & Updates" GUI.

[image: Software & Updates]

And checked the "Proprietary drivers for devices (restricted)" button.

[image: Proprietary Drivers]

I chose to update the apt listing when I closed the GUI and then installed the drivers at the command line:

sudo ubuntu-drivers install

And then, after another reboot, it worked (I checked with nvidia-smi).

And the End

I haven't run any tests other than using the system, but this seems to have fixed the problem. Well, the problem with the kernel - updating also broke all my python virtual environments, but, oh well, better than a kernel panic, I suppose.

Converting A Date To Day of the Year In Python

This is a quick note on how to take a date and convert it to the day of the year (and back again) using python.

Date To Day of Year

We're going to use python's built-in datetime object to create the date, then convert it to a timetuple using its timetuple method. The timetuple has an attribute tm_yday, which is the day of the year that the date represents.

from datetime import datetime

YEAR, MONTH, DAY = 2023, 2, 7

DAY_OF_YEAR = datetime(YEAR, MONTH, DAY).timetuple().tm_yday
print(DAY_OF_YEAR)
38

So, February 7, 2023 is the 38th day of the year.

Day Of Year To Date

Now to go in the other direction we start with the first day of the year (represented as a datetime object) and add the number of days into the year we want. You can't create a datetime object with day zero so we need to start it on day one and then subtract one day from the number of days that we want.

from datetime import datetime, timedelta

JANUARY = 1

date = datetime(YEAR, JANUARY, 1) + timedelta(DAY_OF_YEAR - 1)

print(date.strftime("%Y-%m-%d"))
2023-02-07
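There's also a one-step route in the other direction using strptime's %j (day of the year) directive, which should produce the same date:

date = datetime.strptime(f"{YEAR} {DAY_OF_YEAR:03d}", "%Y %j")

print(date.strftime("%Y-%m-%d"))
2023-02-07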

Easy-peasy.


PyTorch and the Unknown CUDA Error

The Problem

I recently decided to get back into using neural networks and tried to update my docker container to get fastai up and running, but couldn't get CUDA working. After a while spent trying different configurations of CUDA, pytorch, pip, conda, and on and on, I eventually found out that there's some kind of problem with using CUDA after suspending and then resuming your system (at least with linux/Ubuntu). This is documentation of that particular problem and its fixes (the fastest, but not necessarily best, answer: always shut down or reboot the machine; don't suspend and resume).

The Symptom

This is what happens if I try to use CUDA after waking the machine from a suspend.

import torch

torch.cuda.is_available()
/home/athena/.conda/envs/neurotic-fastai/lib/python3.9/site-packages/torch/cuda/__init__.py:88: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at /opt/conda/conda-bld/pytorch_1666642975993/work/c10/cuda/CUDAFunctions.cpp:109.)
  return torch._C._cuda_getDeviceCount() > 0

As you can see, the error message doesn't really give any useful information about what's wrong - there are a couple of suggestions, but neither seems relevant, or at least neither leads you to the fix.

The Disease and Its Cure

There's a post on the pytorch discussion boards about this error in which "ptrblck" says that he runs into this problem when his machine is put into the suspend state. He also mentions that restarting his machine fixes the problem, but restarting every time seems to defeat the purpose of using suspend (and I'd have to walk to a different room to log in and decrypt the drive after restarting the machine - ugh, so much work).

Luckily, in a later post in the thread the same user mentions that you can also fix it by reloading the nvidia_uvm kernel module by entering these commands in the terminal:

sudo rmmod nvidia_uvm
sudo modprobe nvidia_uvm

Which seems to fix the problem for me right at the moment, without the need to restart the machine.

print(torch.cuda.is_available())
False

Ummm… oops. Well, it did sort of fix one problem - the CUDA unknown error - but now it's saying that CUDA isn't available on this machine. Every fix begets a new problem. Let's try it again after restarting the Jupyter kernel.

import torch
print(torch.cuda.is_available())
True

Okay, that's better, I guess. It feels a little inelegant to have to do this, but at least it seems to work.
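If reloading by hand gets old, systemd runs every executable in its system-sleep directory with "pre" or "post" as the first argument around a suspend, so a hook along these lines (a sketch I haven't deployed; the directory is commonly /usr/lib/systemd/system-sleep/, but it can vary) could do the reload on every resume:

#!/bin/sh
# hypothetical /usr/lib/systemd/system-sleep/reload-nvidia-uvm
# $1 is "pre" going into sleep and "post" coming out of it
if [ "$1" = "post" ]; then
    # rmmod will fail if something still holds the module open
    rmmod nvidia_uvm
    modprobe nvidia_uvm
fi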

Tangling Multiple Org Files

I've been looking, off and on, for ways to combine separate code blocks in org-mode into a single tangled file. I tangle re-usable code out of posts, but if I want to break a post up I currently need to create a separate tangle for each post. I'm hopeful that this method will let me spread one tangle across multiple posts. I've only tried it on toy files, but I want to get some initial documentation for it in place.

The Steps

Let's say that there are two source org-files:

  • one.org: contains the tangle block and a source block
  • two.org: contains another block that we want to tangle with the one in one.org

The steps are:

  1. Put an #+INCLUDE directive to include two.org into one.org
  2. Export one.org to an org file
  3. Open the exported org file (one.org.org)
  4. Tangle it.

Create one.org

The file one.org is going to have the tangle and the first source-block:

#+begin_src python :tangle ~/test.py :exports none
<<block-one>>

<<block-two>>
#+end_src
#+begin_src python :noweb-ref block-one
def one():
    print("One")
#+end_src

We also need to include what's in the second file (two.org). The code we want to include is in a section called Two so we can include just that section by adding a search term at the end.

#+INCLUDE: "./two.org::*Two"

Create two.org

In the other file add the section header to match the INCLUDE search term (*Two) and put a code block with a reference named block-two to match what's in the tangle block above.

* Two
#+begin_src python :noweb-ref block-two
def two():
    print("Two")
#+end_src

Export one.org

Tangling unfortunately ignores the INCLUDE directive, so we have to export one.org to another org file first in order to get the text from two.org into our source file. By default, exporting to org is disabled, so you need to enable it (e.g. starting with M-x customize-option RET org-export-backends).

Once it's enabled you can export one.org to an org-mode file using C-c C-e O v (the default name will be one.org.org).

Tangle one.org.org

The last choice when we exported the file in the previous step (v) saves it to a file and opens it in an emacs buffer. When the buffer is open you can tangle it (C-c C-v C-t), and the output (~/test.py from our tangle block) should contain both of our functions.
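The first source below includes emacs-lisp to run these steps automatically (which I didn't try); a minimal sketch of the same idea, assuming ox-org is already enabled, might look like this:

(defun export-and-tangle ()
  "Export the current buffer to org, then tangle the exported file."
  (interactive)
  ;; org-org-export-to-org returns the name of the output file
  (let ((exported (org-org-export-to-org)))
    (with-current-buffer (find-file-noselect exported)
      (org-babel-tangle))))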

Sources

This is where I got the information on breaking up the files. It includes some emacs-lisp to run the steps automatically (although I didn't try it):

This is the post that mentions that exporting org-files to org-format needs to be enabled (and how to do it):

This is the manual page explaining the search syntax (which is what the #+INCLUDE format uses).

This explains the #+INCLUDE directive options:

CodeWars: Pick Peaks


Beginning

The problem given is to write a function that returns the positions and values of local maxima within a list (array). The inputs will be (possibly empty) lists of integers. The first and last elements can't be peaks, since we don't know what comes before the first element or after the last.

Code

Imports

# pypi
from expects import equal, expect

The Submission

def pick_peaks(array: list) -> dict:
    """Find local maxima

    Args:
     array: list of integers to search

    Returns:
     pos, peaks dict of maxima
    """
    output = dict(pos=[], peaks=[])
    peak = position = None

    for index in range(1, len(array)):
        if array[index - 1] < array[index]:
            position = index
            peak = array[index]
        elif peak is not None and array[index - 1] > array[index]:
            output["pos"].append(position)
            output["peaks"].append(peak)
            peak = position = None
    return output

expect(pick_peaks([1,2,3,6,4,1,2,3,2,1])).to(equal({"pos":[3,7], "peaks":[6,3]}))
expect(pick_peaks([3,2,3,6,4,1,2,3,2,1,2,3])).to(equal({"pos":[3,7], "peaks":[6,3]}))
expect(pick_peaks([3,2,3,6,4,1,2,3,2,1,2,2,2,1])).to(equal({"pos":[3,7,10], "peaks":[6,3,2]}))
expect(pick_peaks([2,1,3,1,2,2,2,2,1])).to(equal({"pos":[2,4], "peaks":[3,2]}))
expect(pick_peaks([2,1,3,1,2,2,2,2])).to(equal({"pos":[2], "peaks":[3]}))
expect(pick_peaks([2,1,3,2,2,2,2,5,6])).to(equal({"pos":[2], "peaks":[3]}))
expect(pick_peaks([2,1,3,2,2,2,2,1])).to(equal({"pos":[2], "peaks":[3]}))
expect(pick_peaks([1,2,5,4,3,2,3,6,4,1,2,3,3,4,5,3,2,1,2,3,5,5,4,3])).to(equal({"pos":[2,7,14,20], "peaks":[5,6,5,5]}))
expect(pick_peaks([18, 18, 10, -3, -4, 15, 15, -1, 13, 17, 11, 4, 18, -4, 19, 4, 18, 10, -4, 8, 13, 9, 16, 18, 6, 7])).to(equal({'pos': [5, 9, 12, 14, 16, 20, 23], 'peaks': [15, 17, 18, 19, 18, 13, 18]}))
expect(pick_peaks([])).to(equal({"pos":[],"peaks":[]}))
expect(pick_peaks([1,1,1,1])).to(equal({"pos":[],"peaks":[]}))

CodeWars: Simple Pig Latin

Description

Move the first letter of each word to the end of it, then add "ay" to the end of the word. Leave punctuation marks untouched.

Code

Imports

# pypi
from expects import equal, expect

Submission

import re

LETTERS = re.compile(r"[a-zA-Z]")
WORD_BOUNDARY = re.compile(r"\b")


def convert(token: str) -> str:
    """Convert a single word to pig-latin

    Args:
     token: string representing a single token

    Returns: 
     pig-latinized word (if appropriate)
    """
    return (f"{token[1:]}{token[0]}ay"
            if token and LETTERS.match(token) else token)


def pig_it(text: str) -> str:
    """Basic pig latin converter

    Moves first letter of words to the end and adds 'ay' to the end

    Args:
     text: string to pig-latinize

    Returns:
     pig-latin version of text
    """
    return "".join(convert(token) for token in WORD_BOUNDARY.split(text))

expect(pig_it('Pig latin is cool')).to(equal('igPay atinlay siay oolcay'))
expect(pig_it('This is my string')).to(equal('hisTay siay ymay tringsay'))
expect(pig_it("Hello World !")).to(equal("elloHay orldWay !"))