Fatal Python Error

I was going to make my first nikola post in a few months, but when I tried the nikola new_post command I got the following error.

Could not find platform independent libraries <prefix>
Could not find platform dependent libraries <exec_prefix>
Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
Fatal Python error: Py_Initialize: Unable to get the locale encoding
ImportError: No module named 'encodings'

I had no idea what this meant, so I searched the web for the error. People reported different causes, but the one that pointed the way for me was a bug report for virtualenv where a user found that, on Windows, the symlinks didn't work if the window was opened as an administrator.

I'm not using Windows, but when I changed into the directory for my nikola virtualenv installation, ls -l showed that all my symbolic links were broken. I don't know how it happened… maybe something got moved, but the point of this post is to make a note for myself in case I see this error again: check the symlinks in the virtualenv installation.
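
Something like this would make the check quicker next time (a rough sketch; the virtualenv path is just an example, not where mine actually lives):

import os

# example path - point this at whatever virtualenv is misbehaving
VENV = os.path.expanduser("~/.virtualenvs/nikola")

for root, directories, files in os.walk(VENV):
    for name in directories + files:
        path = os.path.join(root, name)
        # a broken symlink is a link whose target no longer exists
        if os.path.islink(path) and not os.path.exists(path):
            print "broken link:", path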

Testing KaTeX

This is a test to see if KaTeX is working.

\begin{align*}
f(x) = \pi r^2
\end{align*}

The answer is no, but MathJax does seem to work.

Getting it Working

Edit the conf.py file.

  1. Uncomment the second MATHJAX_CONFIG default (the one with actual content instead of an empty string).

  2. Set EXTRA_HEAD_DATA to a script tag that loads MathJax from its CDN.

EXTRA_HEAD_DATA = '''
<script type="text/javascript" async
  src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-MML-AM_CHTML">
</script>
'''

Converting Nikola from a Blog to a Site

These are my notes on converting this site to be a web-site first (with a blog on the side). There is an official page on creating a site instead of a blog, but I had a little bit of a hard time figuring out what was going on so these are the main points in case I need to do it again.

In a nutshell:

  1. Get rid of the sub-folder argument in the PAGES variable in conf.py
  2. Set INDEX_PATH to point to the posts sub-folder
  3. Create an index page for the site.

Updating conf.py

The main things to do are to edit the conf.py file so that the pages you create get copied to the root of the output folder (instead of into a sub-folder called pages) and to move the blog index down into a sub-folder.

Note: The "pages" and "posts" folders have to match the names of the actual folders you use. If you call the folder with your web-site files source, for instance, then instead of "pages/" you would put "source/" in the conf.py settings that we're updating.

Making "pages" the Site

To make the pages that you create the root of the site you need to change the PAGES variable to not have a sub-folder as the target (this is the second entry in the tuple). So if it originally was:

PAGES = (
    ("pages/*.rst", "pages", "page.tmpl"),
)

You would change the second value in the tuple to an empty string:

PAGES = (
    ("pages/*.rst", "", "page.tmpl"),
)

Now when you build the site (nikola build) the output folder will have your pages at the top level. This means that when you refer to pages (e.g. in the navigation configuration) you don't add 'pages/' as a prefix anymore.

Note: The page.tmpl used to be called story.tmpl but somewhere along the way it got changed.

Moving the Blog-index

Since Nikola assumes that the blog is your main-page you need to tell it to create the index in a sub-folder by setting the INDEX_PATH to the name of the sub-folder. If, for example, the blog-posts are being put into a folder named posts that's located next to the conf.py file, then the setting would be:

INDEX_PATH = "posts"

Note: This was commented out by default so uncomment it and make the change.

Creating the Home Page

At this point if you build the site and navigate to it you'll find that your home-page is now a directory listing of your output folder. You can navigate to a page by going through the folders, but this is probably not the intended way to get around. The easiest way (that I found) to create the home-page is to create a new page (nikola new_page) and, when prompted for a title, call it index. This will create pages/index.rst (unless you pass in a different format, e.g. -f orgmode), which you can edit to become your home page (make sure to change the title if you don't want the page headline to be 'index').

Note:

Some other things might need to be re-done in conf.py as well, since the folder structure has changed; these are only the basic steps for the switch. The NAVIGATION_LINKS setting in particular may need updating.
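
For example, a navigation entry that pointed at a page under 'pages/' before the change would now drop that prefix. A sketch of what the updated setting might look like (the page names here are made up):

NAVIGATION_LINKS = {
    DEFAULT_LANG: (
        ("/about.html", "About"),        # was "/pages/about.html" before the change
        ("/posts/index.html", "Blog"),   # the blog index now lives under INDEX_PATH
    ),
}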

Some Emacs Notes (mostly hide-show)

I spent most of yesterday troubleshooting someone else's code and I found these commands useful (among others - these are just the ones that I had to look up).

Occur

One problem that I ran across was that there were around two dozen class definitions, all of which were similar, so the bugs that were in one tended to be in all the other classes as well. I found the M-x occur command useful for looking at all instances of certain lines, so I could search for things that were common among all the class definitions (list-matching-lines is an alias for occur).

At one point, for instance, I thought that two class names looked the same, so I guessed that whoever wrote the code probably lost track of the classes that they had defined and duplicated (at least) one of them. To check I entered:

M-x occur class

This brought up a list of lines that had the word 'class' in them and showed that the class definition had indeed been duplicated. The argument 'class' is a regular expression, so if there were other places where the word 'class' was used (e.g. in a nested class or a comment), you could add a start-of-line character:

 M-x occur ^class
 
This would only bring up lines that started with the word 'class'.

Hide-show

This is an emacs mode that I keep rediscovering so I figured I should write it down. This mode enables code-folding (since I program in Python this means hiding code that is indented).

If, for example, you had these class definitions:


class Test(object):
    def __init__(self, x):
        self.x = x


class TestTest(Test):
    def __init__(self, y, *args, **kwargs):
        super(TestTest, self).__init__(*args, **kwargs)
        self.y = y

After you folded it you would see:


class Test(object):

class TestTest(Test):



To enable hide-show for the buffer you're in:

M-x hs-minor-mode

But in my case I want it to always be on when I'm editing a python file so I added this to my ~/.emacs.d/init.el file:

(defun turn-on-hideshow () (hs-minor-mode 1))
(add-hook 'python-mode-hook 'turn-on-hideshow)


I don't know why, but emacs doesn't seem to have a way to automatically associate minor-modes with file extensions, so the work-around is to first define a function that turns on the minor mode (called 'turn-on-hideshow' in this case) and then add it to the hook of the major-mode ('python-mode-hook' in this case). Once you have hideshow working you have a few options available (see the wiki), but I only use three of them most of the time.

The main trigger for 'hs-minor-mode' is C-c @ which you follow with the actual command. For instance, to hide (fold) everything that isn't flush-left:

C-c @ C-M-h

which translates to control-c, @, control-alt-h ('h' is for hide).

The opposite (show all the hidden text) is:

C-c @ C-M-s

The only other command that I use a lot (so far) is toggle block:

C-c @ C-c

What this does is toggle the block where your cursor is currently located. A block is a line that is flush left plus all the indented lines that follow it. So, if you have multiple classes and functions defined, each one of them would be a block. If the block you're on isn't folded, entering C-c @ C-c will hide the indented lines, and if they're already hidden it will un-hide them.

Paragraph Navigation 

This is one of those things that you learn when you first go through the emacs tutorial, but somehow I always forget it - emacs will jump between paragraphs (it uses newlines so it doesn't work quite right for code, but works for expository text). 

To jump to the previous paragraph:

M-{

To go to the next paragraph:

M-}

How I Used It

I don't want to share the code (since it wasn't mine), but in a nutshell what I faced was a file with a couple dozen class definitions (this was a django-factory-boy module, so each model (database table) had two classes: a straight factory and a fuzzy version). The first thing I did was fold all the classes to make it easier to get a high-level view:

C-c @ C-M-h

Then I noticed the duplicate class names so I listed all the classes that had that name:

M-x occur class <class name>

Occur gives you the line numbers so you can jump right to the line (although if the line is within a folded block you have to un-fold it first) so I created two windows:

C-x 2

Then I jumped to the starting line of the first definition  in the first window (line 5 in this example) and un-hid it:

M-g g 5
C-c @ C-c

Then navigated to the other window and jumped to the starting line of  the second definition (line 50 in this example) and un-hid it:

C-x o
M-g g 50
C-c @ C-c

Inspecting the definitions revealed that they had the same attributes but were assigned different values, so I had no way of knowing which class definition to keep, and the person who wrote them was on vacation, so I decided not to fix it (I needed a different class in the module, so this didn't directly affect me).

Ubuntu 14.10 and the Brother HL-2140

I have a Brother HL-2140 laser printer which was working previously, but for some reason it gave me a CUPS error when I tried to print today (I think this was the first time I'd tried to print since upgrading to Ubuntu 14.10). I searched for the error on the web and found this forum post that didn't address my problem directly, but did address a problem I had when I set up the printer before (on Ubuntu 14.04 - I'm using 14.10 now), where the printer would churn out blank pages instead of printing what I wanted, so I decided to give it a try again. There are a few suggestions on the page, but the two I tried both involved using a different driver.

The first suggestion I tried was to use the HL-2170 driver instead of the HL-2140. This worked when I printed the first test page, but after that it just silently failed no matter what I tried to print.

The next suggestion I tried was the Brother HL-2140 Foomatic/hpijs-pcl5e driver. This didn't print any pages for me and gave an 'Idle - filter failed' error in the printer properties dialogue box. I don't know what the message means, but since I fixed it without knowing, I guess I don't need to know.

Sandwiched between the HL-2170 and the HL-2140 models in the list of available drivers was the HL-2142 model. Since it seemed close enough to the 2140 (only off by two) I decided to try it, and for whatever reason it worked. I'm pretty sure I used one of the drivers the forum post suggested when I was using 14.04 (and the default HL-2140 driver before that), but something seems to have changed again.

There are two lessons here:

  1. Use the HL-2142 driver for the Brother HL-2140 printer on Ubuntu 14.10
  2. Try drivers for similar models if you upgrade your Ubuntu installation and the printer stops working

Using pudb with Behave and Fish

What is this about?

behave is a behavior-driven-development (BDD) tool for Python that tests whether you have properly implemented the features you have defined in your feature file(s). The tutorial shows how to set it up so that it drops into ipdb (the IPython debugger) when a test fails, but I use pudb and the fish shell (not bash), so this documents what I had to do to get it to work.

How do you do it then?

The first thing to do is create a file named environment.py in the same folder as the features file. Inside of it put the following:

from distutils.util import strtobool as _bool
import os

BEHAVE_DEBUG_ON_ERROR = _bool(os.environ.get("BEHAVE_DEBUG_ON_ERROR",
                                              "no"))


def after_step(context, step):
    if BEHAVE_DEBUG_ON_ERROR and step.status == 'failed':
        import pudb
        pudb.post_mortem(tb=step.exc_traceback,
                         e_type=None,
                         e_value=None)
    return

This is more-or-less exactly what was in the tutorial, except I swapped in pudb for ipdb. This code tells behave to run pudb.post_mortem after a step is finished (a step corresponds to one of the functions you define to implement the tests) if the step failed and your shell has an environment variable named BEHAVE_DEBUG_ON_ERROR set to something that strtobool recognizes as True. This is from the docstring for strtobool:

distutils.util.strtobool(val)

Convert a string representation of truth to true (1) or false (0).

  • True values are y, yes, t, true, on and 1
  • False values are n, no, f, false, off and 0
  • Raises ValueError if val is anything else.

The 'no' in the os.environ.get function means that it won't execute by default. To have it run you need to set the environment variable to one of the 'true' values. In fish this would be:

set -x BEHAVE_DEBUG_ON_ERROR yes

Now when you run behave it will drop into pudb when a test fails.

So, what then?

Using this has so far been less useful than I thought it would be, since it tends to drop me into the pyhamcrest call that failed, and although I've managed to step through to the behave code I haven't managed to figure out how to get to my own code. It is still useful, though: behave does not stop when it encounters a failed test, so this makes it easier to figure out what has failed.

Even though the pudb-behave combination is less exciting than I thought it would be, there were several things I learned that I want to document here for later.

Setting an environment variable in fish

To set a fish environment variable:

set -x <variable> <value>

And then to unset it:

set -e <variable>

I've done this before to set my PATH variable, but for some reason when I tried to search for it this time I got some false starts.

Python's String to Boolean

I also learned that python has a built in way to translate strings to booleans. This isn't really a hard thing to do on your own, but it was an interesting discovery. I don't think I would have looked in distutils for it.
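
A quick check of what it does (my own example):

from distutils.util import strtobool

print strtobool("yes")      # 1
print strtobool("off")      # 0
print bool(strtobool("t"))  # True
# strtobool("maybe") would raise a ValueError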

pudb's post_mortem function

Another interesting thing to find out was that pudb has a post_mortem function. I like pudb but it doesn't seem to be well documented. The readme does say that it displays the same interface as python's pdb so I suppose I could just read their documentation, but it seems like one of those things where you have to know what you don't know to know to look for it. In this case I figured out how to call it by looking at the code (it's defined in pudb.__init__.py).
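
Based on the keyword arguments used in the environment.py above, calling it by hand looks something like this (a sketch I pieced together from that call, not from any official documentation):

import sys
import pudb

try:
    1 / 0
except ZeroDivisionError:
    # hand the exception information to pudb's post-mortem debugger
    e_type, e_value, traceback = sys.exc_info()
    pudb.post_mortem(tb=traceback, e_type=e_type, e_value=e_value)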

Using environment variables for debugging

Probably the most interesting thing was the way they used os.environ to change the behavior of the code. I normally use command-line options to enable debugging, but this might be a better pattern since it pulls it out of the user interface. This means that it won't be as obvious to the user, but I suppose if they're going to debug my code they had better read the documentation and not just rely on --help anyway. I think I'll probably get rid of it in the environment.py file, though, since I want it to run pretty much all the time, but it's an interesting idea anyway.
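
The pattern itself is easy to reuse outside of behave. A minimal sketch of what it might look like in an ordinary script (the variable name and the use of logging are just examples of mine):

from distutils.util import strtobool
import logging
import os

# debugging is switched on by the environment instead of a --debug flag
DEBUG = strtobool(os.environ.get("MY_SCRIPT_DEBUG", "no"))

logging.basicConfig(level=logging.DEBUG if DEBUG else logging.INFO)
log = logging.getLogger(__name__)

log.debug("this only shows up when MY_SCRIPT_DEBUG is set to a true value")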

Conclusion

This was a translation of how to set up a post-mortem debugger for behave using pudb instead of ipdb and fish instead of bash. It is primarily meant to be a record for me to look at in the future, since I don't set up my behave environment on a regular basis and tend to have a hard time re-searching for things (possibly because I use DuckDuckGo so my search history isn't being used). I think the most valuable thing I got out of it was the pattern for setting up debuggers that I think I'll steal (use) for my own code.

Installing a Python Package for a Single User

Normally when I install a package that I'm working on I use a virtualenv, so it's installed within that environment only, but this time I wanted to test part of my code that was using ssh to run a command, and I didn't want to install that command system-wide. Creating a virtualenv for the test user and activating it before running the command via ssh seemed excessive (and maybe not possible - I didn't try), but it turns out that you can install packages at the user level using the setup.py file.

In this case I wanted the setup.py to create a command-line command called 'rotate' and install it in the user's ~/bin folder so I could run it with something like this:

ssh test@localhost rotate 90
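
The rotate command itself comes out of the package's setup.py. I'm not going to show the real file, but a console script declared through setuptools entry_points looks roughly like this (the package and function names are made up for illustration):

from setuptools import setup, find_packages

setup(
    name="imagetools",    # hypothetical package name
    version="0.1.0",
    packages=find_packages(),
    entry_points={
        # installs an executable named 'rotate' that calls imagetools.main:rotate
        "console_scripts": ["rotate = imagetools.main:rotate"],
    },
)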

First I changed the .bashrc to add the bin folder to the PATH:

PATH=$HOME/bin:$PATH

This has to be added at the top of the .bashrc file, above the conditional that's there by default, which returns early when the shell isn't running interactively (as is the case when you run a command over ssh) and so skips everything below it:

# If not running interactively, don't do anything
case $- in
    *i*) ;;
      *) return;;

esac

Next I changed into the directory where the package's setup.py file was and installed the package:

python setup.py install --install-scripts=$HOME/bin --user

The --user option is what tells python to install the package for the local user (under ~/.local) instead of system-wide, and the --install-scripts option tells it where to put the commands it creates. Without the --install-scripts option the commands would end up in ~/.local/bin, so another option would be to point the PATH variable there instead:

PATH=$HOME/.local/bin:$PATH

But I use ~/bin for other commands anyway so it seemed to make more sense to put it there.

OS: Ubuntu 14.04.1 LTS
Python: 2.7.6

Mocking Print

Background

I wanted to check to make sure I was sending the right output to the screen so I thought I would use mock to catch what I was sending to it. These are some notes about what happened.

Mocking print

If you try to patch print with MagicMock, here is what you get.


from mock import MagicMock, patch, call

mock_print = MagicMock()
try:
    with patch('print', mock_print):
        print 'test'
except TypeError as error:
    print error

Need a valid target to patch. You supplied: 'print'

So it looks like 'print' is not the right thing to patch. Maybe you need to refer to it as a built-in function:


with patch('__builtin__.print', mock_print):
    print 'test'

print mock_print.mock_calls

test
[]

What if you call it as a function?


with patch('__builtin__.print', mock_print):
    print('test')

print mock_print.mock_calls

test
[]

So neither of those raises an error, but they also do not manage to mock print. Well, if you look at the description for print, it turns out that the print function has this signature:


print(*objects, sep=' ', end='\n', file=sys.stdout)

It also says that the function form is not normally available unless you import it from __future__ (from __future__ import print_function).
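
Presumably, then, once the function form is imported from __future__ the __builtin__.print patch would catch it. A small sketch of that idea (my assumption - I didn't test this as part of the original experiment):

from __future__ import print_function

from mock import MagicMock, patch

mock_print = MagicMock()
with patch('__builtin__.print', mock_print):
    # with the future import this is a real function call, so the patch sees it
    print('test')

# nothing was written to the screen; the call was recorded on the mock instead
print(mock_print.mock_calls)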

But, presuming that the print statement works the same way as the function call, what happens if you mock sys.stdout?

Mocking sys.stdout

This doesn't answer the question of why I can't mock print directly, but maybe mocking sys.stdout will work instead.


with patch('sys.stdout', mock_print):
    print 'test'

expected = [call('test')]
actual = mock_print.mock_calls

try:
    assert actual == expected, "Expected: {0} Actual: {1}".format(expected,
                                                                  actual)
except AssertionError as error:
    print error

Expected: [call('test')] Actual: [call.write('test'), call.write('\n')]

It looks like the mock works this time, but it did not return what I was expecting -- print makes two calls to stdout.write, the first with the string you pass it, followed by a second with a newline character. Given this slightly more complete understanding of print:


# create some output to send to print
lines = "a b c".split()

# reset the mock so the previous calls are gone
mock_print.reset_mock()
with patch('sys.stdout', mock_print):
    for line in lines:
        print line

expected = []
for line in lines:
    expected.append(call.write(line))
    expected.append(call.write('\n'))

actual = mock_print.mock_calls

try:
    assert actual == expected, "Expected: {0} Actual: {1}".format(expected,
                                                                  actual)
except AssertionError as error:
    print error

Conclusion

Not earth-shattering, but I thought it was interesting that even after using Python for years, something as basic as print can yield something new if looked at more closely. It was also useful to see how mock can be used to discover the calls that are being made on an object, not just to test that expected calls are being made.

argparse and the Argument Parser

Mocking the argparse ArgumentParser might not seem like a necessary thing, since you can pass a list of strings to parse_args to fake the command-line arguments, but I ran into trouble trying to figure out how to test it embedded in one of my classes, so I thought I would explore it anyway, out of curiosity if nothing else. I am primarily interested in mocking sys.argv to see how it gets used.
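
Just for reference, the non-mocking approach mentioned above looks something like this (a quick sketch, separate from the experiments below):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--debug', action='store_true')

# pass the arguments in directly instead of letting parse_args read sys.argv
print parser.parse_args(['--debug'])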

sys.argv Calls

Using the mock_calls list from mock can be useful in figuring out how an object is being used.


import argparse

from mock import MagicMock, patch

parser = argparse.ArgumentParser()
parser.add_argument('--debug', action='store_true')
parser.add_argument('-d')

sys = MagicMock()
with patch('sys.argv', sys):
    args = parser.parse_args()

for item in sys.mock_calls:
    print item
print args

call.__getitem__(slice(1, None, None))
call.__getitem__().__iter__()
call.__getitem__().__getitem__(slice(0, None, None))
call.__getitem__().__getitem__().__iter__()
call.__getitem__().__getitem__().__len__()
Namespace(d=None, debug=False)

The getitem and slice

The first thing to note is the __getitem__ calls. According to the documentation it is:

Called to implement evaluation of self[key]. For sequence types, the accepted keys should be integers and slice objects.

So it looks like it is first using the built-in slice function to take a subset of the arguments. According to the documentation the arguments to slice are the same as for the range function (start, stop, step).

So it looks like it is doing the equivalent of [1:] in the first slice:


test = [0,1,2]

# what does it do?
print test.__getitem__(slice(1, None, None))

# are they the same?
print test[1:] == test.__getitem__(slice(1, None, None))

[1, 2]
True

One thing to note is that slice(1) is not the same thing as slice(1, None, None):


print slice(1)
print slice(1, None, None)

slice(None, 1, None)
slice(1, None, None)

Trying a lambda

So, if I give __getitem__ a function to return the arguments I want, will this work?


sys.__getitem__ = lambda x, y: ['--debug']
with patch('sys.argv', sys):
    args = parser.parse_args()

for item in sys.mock_calls:
    print item

print args

call.__getitem__(slice(1, None, None))
call.__getitem__().__iter__()
call.__getitem__().__getitem__(slice(0, None, None))
call.__getitem__().__getitem__().__iter__()
call.__getitem__().__getitem__().__len__()
Namespace(d=None, debug=True)

It looks like it does, but would it be better to just make argv a list?

argv as a list


args = ['--debug']

def getitem(index):
    return args[index]

# make a new mock since I set __getitem__ to a lambda function
sys = MagicMock()
sys.__getitem__.side_effect = getitem

with patch('sys.argv', sys):
    parsed_args = parser.parse_args()

for item in sys.mock_calls:
    print item
print parsed_args

call.__getitem__(slice(1, None, None))
Namespace(d=None, debug=False)

It now does not make the other calls and it also does not set the debug to True, so it did not work.

But I seem to have forgotten my earlier slice check -- it is starting at the second item. I think that normally the name of the program is the first thing passed in so maybe there needs to be an extra (first) entry to simulate the command name.

Adding a Command Name


args = 'commandname --debug'.split()

def getitem(index):
    return args[index]

sys.__getitem__.side_effect = getitem

with patch('sys.argv', sys):
    parsed_args = parser.parse_args()

for item in sys.mock_calls:
    print item
print parsed_args

call.__getitem__(slice(1, None, None))
call.__getitem__(slice(1, None, None))
Namespace(d=None, debug=True)

It looks like it worked, and all but the first two calls went away, so perhaps the extra calls were a result of me using the mock, not a normal part of the way parse_args works.

The Whole Thing

Okay, but what about the option -d?


args = 'commandname -d cow --debug'.split()

def getitem(index):
    return args[index]

sys.__getitem__.side_effect = getitem

with patch('sys.argv', sys):
    try:
        parsed_args = parser.parse_args()
    except Exception as error:
        print error

for item in sys.mock_calls:
    print item
print parsed_args

call.__getitem__(slice(1, None, None))
call.__getitem__(slice(1, None, None))
call.__getitem__(slice(1, None, None))
Namespace(d='cow', debug=True)

Well, that was kind of painful. On the one hand I got it to work; on the other hand, I do not really know what the slice is doing, since it seems to slice the same items over and over. Looking at the first set of calls, I think that after the initial slice it manipulates the sliced copy, and since I am now passing a real list instead of a mock, those calls are hidden.

Looking at the Code

I downloaded the python 2.7 code and looked in argparse.py and found this:


def parse_args(self, args=None, namespace=None):
    args, argv = self.parse_known_args(args, namespace)

There is more to that function, but since it is calling parse_known_args I jumped to it:


def parse_known_args(self, args=None, namespace=None):
    if args is None:
        # args default to the system args
        args = _sys.argv[1:]

Once again there is more code after that, but this explains the slice that is seen in the calls.

Later on it calls:


namespace, args = self._parse_known_args(args, namespace)

So jumping to _parse_known_args:


arg_strings_iter = iter(arg_strings)
for i, arg_string in enumerate(arg_strings_iter):

which I think explains the __iter__ call in the first set of calls. I tried stepping through the code with pudb but could only find one slice, so I am not sure what the other calls were for. I suppose it would have been smarter to look at the source code first, but this is about figuring out how to use mock, so I think it was helpful to try it empirically first. No fair peeking in the back of the book until you have tried at least once.

A Test Of Sphinx Cut and Paste

This is a test of dumping a cut-and-paste of body text from a sphinx-generated html page.
This is a puzzle from [RTNS].

The Puzzle

A Spanish treasure fleet of three ships was sunk off the coast of Mexico:
  • One had a trunk of gold forward and a trunk of gold aft
  • One had a trunk of gold forward and a trunk of silver aft
  • One had a trunk of silver forward and a trunk of silver aft
Divers just found one of the ships and a trunk of silver in it.
  • What is the probability that the other trunk has silver?

A Reasoning

The way to think of this is to not think of each category (silver vs gold) but to identify each trunk and how it is paired with another trunk. For example, we have six trunks:
G_1, G_2, G_3, S_1, S_2, S_3
In the three ships they were paired up:
  • Ship_1 = {G_1, G_2}
  • Ship_2 = {G_3, S_1}
  • Ship_3 = {S_2, S_3}
The trunk found had silver so the ship was either Ship_2 or Ship_3 and the trunk was one of S_1, S_2, or S_3. Call the trunk found T_f.
  • Case 1: T_f = S_1 then the other trunk will be G_3
  • Case 2: T_f = S_2 then the other trunk will be S_3
  • Case 3: T_f = S_3 then the other trunk will be S_2
In 2 out of 3 cases the trunk will be silver and in 1 out of 3 cases the trunk will be gold. So the probability that the next trunk pulled up (from the same ship) will be silver will be 2/3.
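
A quick brute-force check of that reasoning (my own addition, not from the book): enumerate every equally likely (ship, found-trunk) pair, keep the ones where the found trunk is silver, and count how often the other trunk on the same ship is silver too.

# each ship is a pair of trunks; 'G' is gold, 'S' is silver
ships = [('G', 'G'), ('G', 'S'), ('S', 'S')]

# every equally likely outcome: pick a ship, then pick one of its two trunks
outcomes = [(ship, index) for ship in ships for index in (0, 1)]

# keep only the outcomes where the trunk that was found is silver
silver_found = [(ship, index) for ship, index in outcomes if ship[index] == 'S']

# count how often the other trunk on the same ship is also silver
both_silver = sum(1 for ship, index in silver_found if ship[1 - index] == 'S')

print "P(other trunk is silver) = {0}/{1}".format(both_silver, len(silver_found))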

Simulation

This is the method the book gives:
  1. Create three urns: {7,7}, {7,8}, and {8,8}
  2. Choose an urn at random and a random element from the chosen urn
  3. If the element chosen was an 8 (gold), do nothing; if it was a 7, continue
  4. Record the other element in the chosen urn
  5. Calculate the proportion of 7s recorded to 8s
This seems confusing at first – we know that the trunk found was silver, so the all-gold ship (the {8,8} urn) was not the one found, so why include it in the simulation? My guess is that we do not actually need that urn, since we discard all the cases where it is chosen and we are not using the total number of trials to find the probabilities. I think the given method might be a clearer simulation if we were trying to recreate what happened, in that it reflects the entire story, but it does not really reflect the puzzle at the point it takes up – after the first trunk is found – so it adds unnecessary computation (well, I guess the whole random-choice thing is probably doing that anyway).
Try this:
  1. Create two urns: {0,1}, {1,1}
  2. Pick a random urn and an element from it
  3. If the element was a 0, go back a step
  4. Record the remaining element in the urn
  5. Calculate the ratio of 1's to 0's
This is a little confusing: the book says to find the odds, but 2/3 is a probability. The odds of finding silver should be 2:1 (the book also flip-flops between saying they found silver and gold in the first trunk, but that is another problem – and the book was free, so what the heck).

import random

GOLD = 0
SILVER = 1
ship_2 = (GOLD, SILVER)
ship_3 = (SILVER, SILVER)
fleet = (ship_2, ship_3)
trials = 10**5

# ships is a list of random ships from the fleet
ships = [random.choice(fleet) for trial in xrange(trials)]

# found_trunks is a list of trunk-indices chosen for each ship (the trunk found by the diver)
# although the values are the same as gold and silver (0 and 1)
# in this case they are tuple indices for the trunk-tuples in the ships
found_trunks = [random.randint(0,1) for ship in ships]

# next_trunks is the type of trunk not chosen for found_trunks if found_trunks wasn't gold
# because it's filtered, its length is the count of all cases where the first trunk was silver
next_trunks = [ships[index][(found_trunks[index] + 1) % 2] for index in range(len(ships))
if ships[index][found_trunks[index]] != GOLD]

# silvers is a count of the next_trunks that were silver
silvers = sum(1 for trunk in next_trunks if trunk == SILVER)

print "Probability next trunk is silver: {0:.2f}".format(float(silvers)/len(next_trunks))
print "(Compare to 2/3 = {0:.2f}).".format(2./3)
 
Probability next trunk is silver: 0.67
(Compare to 2/3 = 0.67).