Winning at scrabble with Python and Raspberry Pi

September 11th, 2020 Written by Wayne Covell

During lockdown, it’s been a case of playing lots of board games with the girlfriend; however, after losing one too many games of Scrabble, I came up with a Raspberry Pi–powered solution to help me win. Using a Raspberry Pi High Quality Camera and a bit of Python, you can quickly figure out the highest-scoring word your available Scrabble tiles allow you to play!

The project gained quite a bit of attention, with the Raspberry Pi Foundation even blogging about it. Videos of this and some of my other projects are on my YouTube channel, Devscover, and I write about them on my software engineering blog.

Guide

Hardware

  • Raspberry Pi 3B, 3B+ or 4
  • Raspberry Pi compatible touchscreen
  • Raspberry Pi High Quality Camera
  • Power supply for the touchscreen and Raspberry Pi
  • Scrabble board

For the hardware, you need a Raspberry Pi model that has both display and camera ports. I also chose to use an Official Raspberry Pi Touch Display because it can power the Pi, but any screen that can talk to your Raspberry Pi should be fine.

Running the software

Running the project begins with a single command:

python3 scrabbleWordFinder.py

Firstly, the build takes a photo of your Scrabble tiles using raspistill.
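
As a rough sketch, the raspistill call can be driven from Python with subprocess. The output path and resolution below are illustrative, not taken from the project, and raspistill itself only exists on a Raspberry Pi, so the actual invocation is left commented out:

```python
import subprocess

def build_capture_cmd(output_path, width=1920, height=1080, timeout_ms=2000):
    """Assemble a raspistill invocation: -o output file, -w/-h resolution,
    -t milliseconds to wait before capturing."""
    return ["raspistill", "-o", output_path,
            "-w", str(width), "-h", str(height), "-t", str(timeout_ms)]

cmd = build_capture_cmd("/home/pi/tiles.jpg")
print(" ".join(cmd))
# On the Pi you would then actually take the photo with:
# subprocess.run(cmd, check=True)
```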

Next, a Python script processes the image of your tiles and then relays the highest-scoring word you can play to your touchscreen.

The key bit of code here is twl, a Python script that contains every possible word you can play in Scrabble.
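
The word search itself is simple to sketch: for each dictionary word, check that the rack contains enough of each letter, then score it with the standard letter values and keep the best. This is a minimal stand-in, with a toy word list in place of twl and no handling of blank tiles or board bonuses:

```python
from collections import Counter

# Standard English Scrabble letter values.
SCORES = {**dict.fromkeys("AEILNORSTU", 1), **dict.fromkeys("DG", 2),
          **dict.fromkeys("BCMP", 3), **dict.fromkeys("FHVWY", 4),
          "K": 5, **dict.fromkeys("JX", 8), **dict.fromkeys("QZ", 10)}

def word_score(word):
    return sum(SCORES[c] for c in word.upper())

def playable(word, rack):
    """A word is playable if the rack has at least as many of each letter."""
    need, have = Counter(word.upper()), Counter(rack.upper())
    return all(have[c] >= n for c, n in need.items())

def best_word(rack, dictionary):
    candidates = [w for w in dictionary if playable(w, rack)]
    return max(candidates, key=word_score, default=None)

# Toy dictionary standing in for the twl word list:
words = ["cat", "zap", "taze", "act", "pat"]
print(best_word("ZAPTEC", words))  # "zap" (14 points) beats the others
```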

From the 4:00 mark in the build video, I walk through what each bit of code does, but I’ve also covered it here (see the Techie instructions section below).

Techie instructions

If you’re a techie and want to actually build this yourself, here’s what you need to do:

Installation was complicated. The default way to install tesseract would be:

sudo apt-get install tesseract-ocr

However, at the time of writing, that installs version 4 of tesseract, which does not work with a character whitelist.

Many attempts were made to use --oem 0 with tesseract 4 to enable whitelisting, but then more errors sprang up about needing dictionaries.
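
For reference, this is roughly what the whitelist configuration looks like when passed through pytesseract. The flag and variable names (--oem, --psm, tessedit_char_whitelist) are genuine tesseract options, but the single-character page-segmentation mode is my assumption for reading one tile at a time, so treat this as a sketch rather than the project's exact call:

```python
# Restrict recognition to capital letters, as found on Scrabble tiles.
WHITELIST = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
# --oem 0 selects the legacy engine; --psm 10 treats the image as one character.
config = "--oem 0 --psm 10 -c tessedit_char_whitelist=" + WHITELIST
print(config)

# On the Pi this would be handed to pytesseract like so:
# import pytesseract
# letter = pytesseract.image_to_string(tile_img, config=config)
```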

Therefore the solution was to compile the library from source using these commands:

git clone --depth 1 https://github.com/tesseract-ocr/tesseract.git

cd tesseract/

./autogen.sh

./configure

make

sudo make install

sudo ldconfig

tesseract -v

Then you must download the English trained data from: https://github.com/tesseract-ocr/tessdata/blob/master/eng.traineddata

Then you need to set an environment variable to the location of that English trained data.

You can set it in your CLI (replacing the path with the location of the folder you downloaded the traineddata into):

export TESSDATA_PREFIX=/home/pi/tesseract/tessdata

Add the same line to your ~/.bashrc to ensure it is set on startup.

You may or may not need to install the following dependencies, which allow Optical Character Recognition:

sudo apt install libsm6 libxrender1 libfontconfig1 libatlas3-base libqtgui4 libqtcore4 libqt4-test libwebp6 libtiff5 libjasper1 libilmbase12 libopenexr22 libgstreamer1.0-0 libavcodec57 libavformat57 libavutil55 libswscale4

You may also need to run sudo apt update first.

You also need to use pip3 to install a few Python libraries:

pip3 install opencv-python==3.4.6.27

pip3 install pytesseract

pip3 install PySimpleGUI

pip3 install Pillow

How good is it?

As it works now, it’s actually very good if you have the right lighting. But the character recognition has a very fine tolerance: moving the board, or even the light changing in the room, can mean the contrast options need tweaking.
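
One way to make the recognition more robust to lighting is to normalise the image before it reaches the OCR step. As a toy illustration of the idea, here is a contrast stretch in pure Python on a list of grayscale values, standing in for the real OpenCV preprocessing:

```python
def stretch_contrast(pixels, lo, hi):
    """Linearly rescale grayscale values so [lo, hi] maps to [0, 255],
    clamping anything outside that range."""
    scale = 255.0 / max(hi - lo, 1)
    return [min(255, max(0, int((p - lo) * scale))) for p in pixels]

# Dim tile letters spread across the full range after stretching:
print(stretch_contrast([40, 120, 200], 40, 200))  # [0, 127, 255]
```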

Also, it’s obviously not very discreet. If you wanted to make this work in a real game, you’d probably have to hide the camera somewhere and have the Pi send the information to something like a smartwatch.

But… it is great fun, and a fantastic learning experience from both hardware and software perspectives: from basic Python to character recognition, to thinking about best practices for making the code as fast as possible.

I’d love to see others take this on and remix it a bit, for example taking into consideration blank tiles or even parsing the board to find which words are playable.

Do check this out on YouTube, and you can read the Raspberry Pi Foundation’s post about it here.