Moving BeyondPod Files To the SDCard (Android 9)

Introduction

I don't follow the changes to Android closely enough to know exactly when the options for moving things to the SD Card were disabled, but I've been running out of storage recently even though my SD Card has over 60 GB of free space. When I looked into it, there were three changes that seem to have caused this problem on my Moto X running Android Pie (9):

  • The option to use the SDCard as an extension of the internal storage has disappeared from the storage options.
  • The option to move any of my apps to the SDCard has disappeared from the apps' settings.
  • The last update seems to have broken all the connections between my apps and the SDCard, so none of them are (or were) using the external storage.

There might be a way to get around the first two problems, but I don't really feel like chasing that right now. It turns out that fixing the last problem works for some of my apps, but it isn't as intuitive as I would like. Here's what to do.

Give Your App Storage Permissions

Settings

In your Android settings menu pick "Apps & notifications".

apps_and_notifications.png

Apps & Notifications

Next pick BeyondPod from the list of applications (in this case it showed up under my recently opened applications, but it doesn't always).

apps_list.png

App Info

In the BeyondPod settings make sure that Permissions has Storage listed; if not, tap Permissions to get to that setting.

beyond_pod_settings.png

App Permissions

In the "App permissions" make sure the switch next to "Storage" is turned on.

beyond_pod_storage.png

Figure Out The Path To Your SDCard

Using ADB

I couldn't find a way to get the path to the SD Card from the Android settings themselves. The easiest way (for me) is to set up the Android Debug Bridge (adb) and then list the contents of the storage folder.

hades@erebus ~/d/datasets [1]> adb shell
payton_sprout:/ $ ls /storage/
ls: /storage//193D-4160: Permission denied
56DC-7D9D emulated self 
payton_sprout:/ $ df -h storage/56DC-7D9D/
Filesystem              Size  Used Avail Use% Mounted on
/mnt/media_rw/56DC-7D9D  60G  3.5G   56G   7% /storage/56DC-7D9D

The Wrong Way

If you look at the permissions for the folder you can see that the folder itself has read-write permissions if you're root or part of the sdcard_rw group:

payton_sprout:/ $ ls -l storage/56DC-7D9D/
ls: storage/56DC-7D9D//.android_secure: Permission denied
total 128
drwxrwx--x 3 root sdcard_rw 131072 2019-01-27 14:04 Android

and although there is that Permission denied for the .android_secure file, it let me create folders and files in there, so I figured I would create a folder for downloads and point BeyondPod to it.

It turns out that this doesn't work. I was going to walk through the error, but I've already set things up the right way and I don't want to undo it. The key to figuring out why it kept telling me that my folder didn't exist or was read-only was finding this BeyondPod forums thread. It looks like when you give BeyondPod permission to use the SDCard, Android creates a specific folder that BeyondPod can use, and you have to point it there. The format is:

/storage/<sd card>/Android/data/mobi.beyondpod/files/

So in my case the path is:

/storage/56DC-7D9D/Android/data/mobi.beyondpod/files/

Using "Files" Instead of ADB

Even though the Settings menus don't seem to show you the path to the SD Card, you can use a file browser app if you don't want to use adb. Here's my SD Card's name in the Files app (it's not really a file browser, it's more of a tool that's supposed to help you clean up your storage, but it works for this case).

files_browser.png

Point Beyond Pod To the SDCard

Settings

Open BeyondPod, scroll to the bottom of the feeds list, and tap on the Settings option.

beyond_pod_settings_menu.png

Advanced Settings

Now tap the hamburger menu icon at the top right to open it and then tap Advanced Settings.

beyond_pod_advanced_settings_menu.png

Podcast Storage Location

Scroll all the way down until you reach the Podcast Storage Location section and tap on Episode Download Path to enter the folder path. You should probably also tap Lock to Current Path.

beyond_pod_storage_path.png

Once you change the settings BeyondPod will move the files and restart, and at that point it should be storing everything on the SDCard. Now, on to all the other apps.

pyLDAvis In org-mode With JQuery

Introduction

In my last post I loaded the pyLDAvis widget by dumping the HTML/Javascript right into the org-mode document. The problem with doing this is that the document ends up with a lot of lines of text in it, which slows emacs down a noticeable amount, making it hard to display one widget and pretty much impractical to show more than one. So, since Nikola (or maybe bootstrap or one of the other plugins I'm using) loads JQuery anyway, I'm going to use javascript to pull the HTML in from a file after the page loads.

Imports

Python

datetime is just there to show how long things take. In this case the data set is fairly small so it doesn't take very long, but in other cases building the LDA model can take a very long time, so I like to time it so that I know roughly how long to wait the next time.

from datetime import datetime
from pathlib import Path

From PyPi

from sklearn.datasets import fetch_20newsgroups
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
import pyLDAvis
import pyLDAvis.sklearn

The Data

I'm going to use the Twenty Newsgroups data set, not because of anything significant about it, but because sklearn has a downloader for it, so I figured it would be the easiest to get.

path = Path("~/datasets/newsgroups/").expanduser()
newsgroups = fetch_20newsgroups(data_home=path, subset="train")
print(path)
/home/brunhilde/datasets/newsgroups

The newsgroups.data attribute is a list, so it doesn't have a shape attribute like it would if it were a numpy array.

print("{:,}".format(len(newsgroups.data)))
print("{:.2f}".format(len(newsgroups.data)/18000))
11,314
0.63

The documentation for the fetch_20newsgroups function says that the full dataset has 18,000 entries, so we have about 63% of the full set.

The Vectorizer

I'm going to use sklearn's CountVectorizer to convert the newsgroups documents to arrays of token counts. This is about the visualization, not making an accurate model, so I'm going to use the built-in tokenizer. The fit method only learns the vocabulary, while the fit_transform method learns it and returns the document-term matrix that we need (each row represents a document, the columns are the tokens, and the cells hold the counts for each token in the document).

started = datetime.now()
vectorizer = CountVectorizer(stop_words="english")
document_term_matrix = vectorizer.fit_transform(newsgroups.data)
print("Elapsed: {}".format(datetime.now() - started))
Elapsed: 0:00:03.033235
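
To make the "rows are documents, columns are tokens" layout concrete, you can check the shape of the matrix against the size of the learned vocabulary. This is a quick aside that wasn't in the original run (and get_feature_names was replaced by get_feature_names_out in newer versions of scikit-learn):

# rows should match the number of documents, columns the vocabulary size
print(document_term_matrix.shape)
print("{:,} tokens in the vocabulary".format(len(vectorizer.get_feature_names())))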

The LDA

Now we'll build the Latent Dirichlet Allocation Model.

start = datetime.now()
topics = len(newsgroups.target_names)
lda = LatentDirichletAllocation(n_components=topics)
lda.fit(document_term_matrix)
print("Elapsed: {}".format(datetime.now() - start))
Elapsed: 0:02:37.479097
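
Just to get a feel for what the model learned, here's a sketch (not from the original post) that prints the ten highest-weighted tokens for the first few topics, assuming the older get_feature_names method is still available in your scikit-learn version:

import numpy

# lda.components_ has one row of token weights per topic
feature_names = numpy.array(vectorizer.get_feature_names())
for topic_number, weights in enumerate(lda.components_[:5]):
    top_tokens = feature_names[weights.argsort()[::-1][:10]]
    print("Topic {}: {}".format(topic_number, " ".join(top_tokens)))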

PyLDAvis

Okay so here's where we try and get pyLDAvis into this thing.

Prepare the Data for the Visualization

The Prepared Data

The first step in using pyLDAvis is to create a PreparedData named-tuple using the prepare function.

start = datetime.now()
prepared_data = pyLDAvis.sklearn.prepare(lda, document_term_matrix, vectorizer)
print("Elapsed: {}".format(datetime.now() - start))
Elapsed: 0:00:34.293668

Build the HTML

Now we can create an HTML fragment using the prepared_data_to_html function. The output is a string of HTML script, style, and div tags. It embeds the entire data set as a javascript object, so the more data you have, the longer the string will be.

div_id = "pyldavis-in-org-mode"
html = pyLDAvis.prepared_data_to_html(prepared_data,
                                      template_type="simple",
                                      visid=div_id)
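
Just to see how much text this adds, you can check the length of the string (a quick aside, not part of the original post; the number depends on your data set):

print("{:,} characters".format(len(html)))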

Export the HTML

Now I'm going to save the html to a file so we can load it later.

slug = "pyldavis-in-org-mode-with-jquery"
posts = Path("../files/posts/")
folder = posts.joinpath(slug)
filename = "pyldavis_fragment.html"
if not folder.is_dir():
    folder.mkdir()

output = folder.joinpath(filename)
output.write_text(html)
assert output.is_file()

So here's where we create the HTML that will be embedded in this post. The JQuery load function puts the contents of our saved file into the div. I added the css call because I have my site's font size set to extra-large (the Goudy Bookstyle font looks too small to me otherwise, and I think nice fonts look better when they're big), which causes the buttons in the pyLDAvis widget to overflow out of the header. Under normal circumstances you wouldn't need to do this, but if you do want to do any one-off styling, here's an example of how to do it. Otherwise an update to the style sheet would probably be better.

The right-hand box is still messed up, but it's good enough for this example.

print('''#+BEGIN_EXPORT html
<div id="{0}"></div>
<script>
$("#{0}").load("{1}")
$("#{0}-top").css("font-size", "large")
</script>
#+END_EXPORT'''.format(div_id, filename))

pyLDAvis in org-mode

Imports

Python

from datetime import datetime
from pathlib import Path

From PyPi

from sklearn.datasets import fetch_20newsgroups
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
import pyLDAvis
import pyLDAvis.sklearn

The Data

path = Path("~/datasets/newsgroups/").expanduser()
newsgroups = fetch_20newsgroups(data_home=path, subset="train")
print(path)
/home/brunhilde/datasets/newsgroups

The newsgroups.data attribute is a list, so it doesn't have a shape attribute like it would if it were a numpy array.

print("{:,}".format(len(newsgroups.data)))
11,314

The documentation for the fetch_20newsgroups function says that the full dataset has 18,000 entries, so we have about 63% of the full set.

The Vectorizer

I'm going to use sklearn's CountVectorizer to convert the newsgroups documents to arrays of token counts. This is about the visualization, not making an accurate model, so I'm going to use the built-in tokenizer. The fit method only learns the vocabulary, while the fit_transform method learns it and returns the document-term matrix that we need (each row represents a document, the columns are the tokens, and the cells hold the counts for each token in the document).

started = datetime.now()
vectorizer = CountVectorizer(stop_words="english")
document_term_matrix = vectorizer.fit_transform(newsgroups.data)
print("Elapsed: {}".format(datetime.now() - started))
Elapsed: 0:00:02.798860

That was pretty fast; I guess this data set is sort of small.

The LDA

Now we'll build the Latent Dirichlet Allocation Model.

start = datetime.now()
topics = len(newsgroups.target_names)
lda = LatentDirichletAllocation(n_components=topics)
lda.fit(document_term_matrix)
print("Elapsed: {}".format(datetime.now() - start))
Elapsed: 0:02:30.557142

PyLDAvis

Okay so here's where we try and get pyLDAvis into this thing.

Prepare the Data for the Visualization

The Prepared Data

start = datetime.now()
prepared_data = pyLDAvis.sklearn.prepare(lda, document_term_matrix, vectorizer)
print("Elapsed: {}".format(datetime.now() - start))

Elapsed: 0:00:33.152028

Build the HTML

The HTML that creates the plot is fairly large. The browser seems to handle it okay, but emacs gets noticeably slower. I'll try the simple template to see if that makes any difference (the default works in both jupyter notebooks and any other HTML, but simple won't work in jupyter notebooks). I'm also going to set the ID because the pyLDAvis CSS doesn't play well with mine, so I'm going to try to override the font size on the header.

div_id = "pyldavis-in-org-mode"
html = pyLDAvis.prepared_data_to_html(prepared_data,
                                      template_type="simple",
                                      visid=div_id)

Embed the HTML

print('''#+BEGIN_EXPORT html
{}
<script>
document.querySelector("div#{}-top").style.fontSize="large"
</script>
#+END_EXPORT'''.format(html, div_id))

Slip Box System Parts List

The Four Parts

A Capture System

This should be paper-based, or at least something that's always there and quick to use.

  • a notebook
  • index cards
  • loose paper
  • napkins…

A Reference System

This is where you put information about your sources. For books and papers Zotero is handy, although, once again, having to fire up a GUI-based program adds a little bit of overhead. The original system was just another box, so I'm going to try something like that. Maybe a sub-folder…

The Slip Box

The original system was a wooden box with A6 paper. I'm using a static site with plain text (org-mode).

Something To Produce a Final Product

The system is aimed at writers, but I'm a computer programmer, and I think it might work with other types of output (like drawing) too, so it's really just about having a way to produce something from your project.

Related Posts

Reference

  • HTTSN - How To Take Smart Notes

Using Your Slip Box

Introduction

This is my re-wording of the Slip Box Method.

The Method

Capture Everything

Write everything down - ideas don't count until they're out of your head and on paper. Writing it down also frees your mind to move on to other things.

Take Notes

Whenever you are taking in someone else's ideas (e.g. reading, listening) take notes.

Make Your Notes Permanent

The initial notes are just temporary inputs; later in the day you need to convert them into a form that has these attributes:

  • They are complete - write them for your future self; don't rely on being able to remember what else is needed to understand the note.
  • They are written in a way that relates to your interests.
  • There is only one idea per note.

Put the Permanent Notes in the Slip Box

  • When you file your note look through the other notes and try and place it behind a related note.
  • Add links to other notes that are related.
  • Add the new note to an entry-point note that holds links to other notes.

Work Bottom-Up

  • Don't try and come up with a project by "thinking", look through the slip box and let it tell you what you're interested in.
  • If you have an idea for what to do but there isn't enough in the box yet, take more notes.
  • Keep the notes in one folder - don't sort them into sub-categories. This way you can make new associations that you didn't have when you first made the note.

Build Projects From Copies

Copy everything that seems relevant to a project folder on your desktop and see what needs to be filled in and what seems redundant (or maybe just wrong).

Translate Your Notes

Take these fragmented notes and convert them into a coherent argument. If you can't then look to take more notes to fill in what's missing.

Revise

Don't accept the first draft, edit, erase, re-do.

Move On

When you're done with the project, start over with a new one.

Implementation Details

The original method used paper and a wooden box. I really like paper and am tempted to try this, but I don't think doing something that makes me even more of a pack-rat is a good idea. The book (How To Take Smart Notes) recommends a computer program written specifically for this system, but I'm a little leery of getting tied to one program, and all these GUI programs are starting to turn me off.

Instead, I'm going to try and use this blog as my slip-box, so, as far as "equipment" goes, this is what I'm going to use:

Flattening out the file-system makes it hard to browse the files, though. I guess less and ls are going to be the main things I use (and maybe ag and deft). We'll see; I only started reading the book yesterday, so I'm still trying to figure this out as I go.

Related Posts

Reference

  • HTTSN - How To Take Smart Notes

Bibliography: How To Take Smart Notes

Description

This book describes the Slip-Box method developed by Niklas Luhmann. Its focus is on research writing, but it seems like a good system for projects in general. It points out that people are generally taught to work in a series of "next steps", but if you are doing something creative (or at least something you haven't done before) then this is an impractical, if not impossible, way to work. Instead, the author proposes that you use a system of note-taking to capture everything and then look for patterns in your notes - a bottom-up approach rather than a top-down one.

Reference

[HTTSN] Ahrens S. How to take smart notes: one simple technique to boost writing, learning and thinking: for students, academics and nonfiction book writers. North Charleston, SC: CreateSpace; 2017. 170 p.

Encrypt Dropbox Folders on Ubuntu With CryFS

Introduction

This is one way to encrypt the contents of cloud-synchronized folders using CryFS. I'm going to illustrate it using the Dropbox folder that the Dropbox client creates.

Encrypt the Folders

Install It

Ubuntu currently (November 10, 2018) has CryFS in its package repositories, so you can install it with apt.

sudo apt install cryfs

Create the Target and Source Folders

The cryfs command will create the two folders and set them up for you. The syntax is cryfs <target> <source>. The target will contain the encrypted files, so it goes inside the Dropbox folder, while the source is where you will work with the unencrypted files.

cryfs Dropbox/encrypted dropbox_source

This is the same command you would use on another computer to set up the existing encrypted folder there. The source folder can be named differently, but the target folder and the password need to be the same ones you used when you created it.

Unmount It

If you need to unmount it you can use fusermount.

fusermount -u dropbox_source

Since you are doing all this within your home directory you don't need root privileges (except to install cryfs with apt).

References

  • I got this from Linux Babe.
  • But the CryFS Tutorial is pretty straightforward; it just skips the part about installing cryfs.

Grep Coroutines

Set Up

Imports

Python

from io import StringIO

PyPi

import requests

Constants

PRIDE_AND_PREJUDICE = "https://www.gutenberg.org/files/1342/1342-0.txt"

Grab the Source

response = requests.get(PRIDE_AND_PREJUDICE)
assert response.ok

Functions

Coroutine

def coroutine(function):
    """Sets up the co-routine

    Args:
     function: coroutine function

    Returns:
     wrapper function that starts the co-routine
    """
    def wrapper(*args, **kwargs):
        co_routine = function(*args, **kwargs)
        next(co_routine)
        return co_routine
    return wrapper

A Cat

def process(lines, receiver, case_insensitive: bool=True):
    """Sends the lines in the text to the receiver

    Args:
     lines: string whose lines will be sent to the receiver
     receiver: coroutine to send the lines to
     case_insensitive: whether to lowercase the lines
    """
    lines = StringIO(lines)
    if case_insensitive:
        processor = lambda line: line.lower()
    else:
        processor = lambda line: line

    for line in lines:
        receiver.send(processor(line))

GREP

@coroutine
def tokens(token, case_insensitive, receiver):
    """count tokens in the line"""
    if case_insensitive:
        token = token.lower()
    while True:
        text = (yield)
        receiver.send(text.count(token))

Count

@coroutine
def count(token):
    """Sums the counts sent to it and prints the total when the coroutine is closed (or garbage-collected)"""
    counter = 0
    try:
        while True:
            counter += (yield)
    except GeneratorExit:
        print(token, counter)
    return
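
As a quick illustration (mine, not from the original post): the counter only prints its total when it gets closed, because close raises GeneratorExit inside the coroutine.

# prime a counter, feed it a couple of counts, then close it
counter = count("example")
counter.send(2)
counter.send(3)
counter.close()  # prints: example 5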

Fanout

@coroutine
def fork(children):
    """Broadcasts each value sent to it to all of the child coroutines"""
    while True:
        data = (yield)
        for child in children:
            child.send(data)
    return

Try It

text = response.content.decode("utf-8")
process(text, tokens("feelings", True, count("feelings")))
feelings 86

text = response.content.decode("utf-8")
process(text, tokens("beauty", True, count("beauty")))
beauty 27

text = response.content.decode("utf-8")
process(text, tokens("cried", True, count("cried")))
cried 91
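
The fork coroutine never got used above, so here's a sketch (my addition, not from the original run) of how it could fan one pass over the text out to more than one counter:

# one pass over the text feeds two separate token counters
text = response.content.decode("utf-8")
process(text, fork([tokens("feelings", True, count("feelings")),
                    tokens("beauty", True, count("beauty"))]))

The totals only print once the coroutines get closed (here that happens when CPython garbage-collects them), so this should report the same 86 "feelings" and 27 "beauty" counts as the separate passes above.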