Block Occupancy Detector and Position Sensors for the "Smaller Railways"

by davidgoddard9 in Circuits > Sensors



Screenshot 2021-01-21 at 08.07.02.png

For many people with model railways on small boards, the cost and complexity of adding block-occupancy and location-detection sensors (cutting tracks, modifying rolling stock and so on) may seem out of reach. However, if you can get a Raspberry Pi up and running, have a web camera and use JMRI, this instructable may be a fun starting point. It shows a way to provide both block-occupancy and location sensor inputs to the JMRI application without cutting any tracks, attaching resistors to rolling stock, or buying and wiring up lots of sensors.

The instructions given here are at a high level and assume you have the working knowledge to get a Raspberry Pi up and running and to install software such as Python and OpenCV - or can at least follow instructions found on the web!

What's presented here can be added to any model railway. The Raspberry Pi broadcasts messages as sensors change state. You can subscribe to these messages from your own code or microcontrollers, or you can configure JMRI to use them.
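If you want to consume these messages from your own code, a short Python sketch using the paho-mqtt client library might look like the following. The topic layout and ACTIVE/INACTIVE payloads here are assumptions based on JMRI's MQTT defaults; check the Track Monitor source for the exact names.

```python
def parse_sensor_message(topic, payload):
    """Return (sensor_id, is_active) from an assumed '/trains/track/sensor/<n>' topic."""
    sensor_id = int(topic.rsplit("/", 1)[-1])
    return sensor_id, payload == "ACTIVE"

def run_listener(broker="raspberrypi.local"):
    # paho-mqtt is imported here so the parser above also works without it installed
    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        sensor_id, active = parse_sensor_message(msg.topic, msg.payload.decode())
        print(f"sensor {sensor_id} -> {'occupied' if active else 'clear'}")

    client = mqtt.Client()
    client.on_message = on_message
    client.connect(broker, 1883, 60)           # the Pi running Mosquitto
    client.subscribe("/trains/track/sensor/#")  # assumed topic layout
    client.loop_forever()
```

Calling `run_listener()` then prints a line every time the detector reports a sensor change.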

The whole solution costs no more than a Raspberry Pi 3A (about 25 UK pounds) with its power supply plus a camera (another 15 UK pounds), and needs no additional wiring to the layout.

Pros:

  • Relatively cheap
  • No layout wiring
  • No drilling holes for sensors
  • No cutting tracks
  • No need to add resistors to rolling stock
  • Detects occupied blocks - even a derailed or uncoupled truck or coach, or a stalled locomotive
  • Add as many location sensors as you wish using a simple web page
  • Easily adjust where a sensor is located by dragging it with the mouse on screen
  • Add as many blocks as you need using the simple web page and drawing with the mouse
  • Multiple devices can be created and cover different areas of the layout if required

Cons:

  • If the layout moves with respect to the camera, you may need to adjust your sensors - not suitable if your layout will be moved regularly
  • Requires reference images of the layout without rolling stock in all lighting conditions
  • Need to be able to position a camera over the area of the layout to be monitored
  • Strong shadows moving over the layout or reaching over the layout may be detected
  • Only produces MQTT messages of sensor and block state; this works fine with JMRI but I do not know about other software

This is offered as a working solution, but it could be taken much further as there is a lot that could be done with it (such as automatically re-aligning should the layout or camera be moved). What is offered here is a starting point which I hope forms the basis of a new idea or an improved solution.

The software provides a web page to allow the adding/editing/removing of sensor positions and block positions based on the user viewing an image from the camera and clicking/dragging the mouse to define the positions. Once saved, the remaining software uses the defined sensor points and sends out messages when something makes those sensor points change how they look to the camera.

The Raspberry Pi rapidly takes images from a camera, detects which sensors are triggered and notifies JMRI (or any other configured software) via messages which go via an MQTT broker. The following instructions describe installing an MQTT broker onto the Raspberry Pi if you are not already using one. (Note that if you wish to set up multiple cameras and Raspberry Pis, only one of them needs to have the MQTT broker installed and running!)

Supplies

The basic parts are:

  • A WiFi network (It should work with Ethernet too but I've not tried it)
  • A webcam - I have used Raspberry Pi cameras and USB connected webcams - both fine
  • A Raspberry Pi (3 or 4)
  • Mosquitto - Open source MQTT Broker (Optional - not needed if you already use one)
  • JMRI running on a Mac/PC configured for use with MQTT

The software is available via GitHub

Track Monitor code on GitHub

How It Works - Detecting

Track Monitor and JMRI

This does not use machine-learned object detection or anything else fancy - I tried YOLOv4 but could not get it to run fast enough. Instead, it uses an adaptation of the more common solution of "background subtraction" from images. By taking a picture of the layout with no rolling stock on the track, then a new picture once rolling stock has been added, the difference between the new and the reference pictures will be the added rolling stock. By defining some parts of the picture to be a 'sensor' or 'block' it is possible to tell if the rolling stock is over a sensor or in a block.

A common approach to this is to create the reference images using a few images taken over the last few seconds and averaging them. This creates a very good way to handle subtle changes in light but if a locomotive becomes stationary over time it becomes part of the averaged background and is no longer detected.

The solution here is to allow the user to take a number of different reference images where there is no rolling stock on the track under all expected lighting conditions (and more can be added over time). Then as a new picture is taken it is quickly compared to all the references to find the closest match. It then uses this image as the background which it then compares to the new image. When the images are similar, the sensor or block is not triggered but when the images differ, presumably because some rolling stock has entered the scene, the corresponding sensor or block is 'triggered'.

In short - the software plays "spot the difference" using one image of the layout with no rolling stock in it and one of the layout in normal use.
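The core of that idea can be sketched in a few lines of numpy. This is an illustrative skeleton, not the project's actual code: it assumes greyscale frames as 2-D arrays, picks the stored no-stock image closest to the live frame (to cope with lighting changes), then flags a sensor when its small patch differs too much from the same patch in that reference. The thresholds are made up for the example.

```python
import numpy as np

def pick_reference(frame, references):
    """Return the reference image with the smallest mean absolute difference."""
    scores = [np.mean(np.abs(frame.astype(float) - ref.astype(float)))
              for ref in references]
    return references[int(np.argmin(scores))]

def patch_changed(frame, reference, x, y, radius=5, threshold=20.0):
    """True if the sensor patch around (x, y) differs from the reference patch."""
    fr = frame[y - radius:y + radius, x - radius:x + radius].astype(float)
    rf = reference[y - radius:y + radius, x - radius:x + radius].astype(float)
    return np.mean(np.abs(fr - rf)) > threshold
```

With a dark and a bright reference stored, a frame taken in dark conditions matches the dark one, and a bright "locomotive" painted over a sensor patch makes `patch_changed` return True.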

The software uses a background thread to fetch images from the camera, leaving the main thread free to process those images. A third thread sends the resulting messages to the MQTT broker.
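That threading layout can be sketched roughly as follows. This is an illustrative stdlib-only skeleton (not the project's code) with the camera and broker stubbed out, so only the structure is shown: a capture thread feeds a frame queue, the main loop consumes it, and a publisher thread drains an outgoing message queue.

```python
import queue
import threading

frames = queue.Queue(maxsize=8)   # camera frames awaiting processing
outgoing = queue.Queue()          # sensor-change messages awaiting publishing

def capture_loop(grab_frame, stop):
    """Background thread: fetch frames as fast as the camera supplies them."""
    while not stop.is_set():
        frame = grab_frame()
        if frame is None:         # camera closed
            break
        frames.put(frame)

def publish_loop(send, stop):
    """Background thread: forward queued state changes to the MQTT broker."""
    while not stop.is_set() or not outgoing.empty():
        try:
            send(outgoing.get(timeout=0.1))
        except queue.Empty:
            continue
```

The main thread would then sit in a loop calling `frames.get()`, scoring each frame against the references, and putting any sensor-state changes onto `outgoing`.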

There is a lag, caused by the camera, between something moving in front of the camera and a photo of it being ready to process. In my setup this is only a fraction of a second and not enough to cause problems. But it does mean there is a short delay between a train passing a point and JMRI being notified about it; depending upon how fast the train is going this may be an issue - but it is unlikely!

On a Raspberry Pi 3a, with 10 reference images and approximately 1000 points of interest - some as single point sensors and the others marking out where a 'block' is, the system processes around 100 frames per second. However, the camera will not provide new images at anything near that rate, perhaps only at 25 frames per second. On top of this is the very short delay as the messages bounce off the message broker and get decoded by the controlling app, JMRI in this case. But the result is pretty much instant.

For basic testing I used a standard Raspberry Pi camera over the middle of the board with a simple oval of second-radius track. A locomotive going flat out (ridiculously fast) completed one lap in about 4 seconds, so each 90-degree second-radius curve (two pieces of track) took approximately 1 second. At this high speed the train moves approximately 10 to 20 cm past a sensor before it is registered. At far more realistic speeds, the system will detect and trigger a sensor before the locomotive has moved more than a centimetre or two. Note that not even a short locomotive such as a shunter can pass a sensor without being detected, even at full speed!
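Those lag figures can be sanity-checked with some back-of-the-envelope arithmetic. The numbers below are illustrative assumptions: a "second radius" sectional curve of roughly 438 mm radius, the oval treated as a full circle, and a camera-to-detection lag somewhere between 0.15 and 0.3 seconds.

```python
import math

radius_m = 0.438                      # assumed second-radius curve, in metres
lap_m = 2 * math.pi * radius_m        # ~2.75 m for a full circle
lap_time_s = 4.0                      # flat-out lap time observed above
speed_m_s = lap_m / lap_time_s        # ~0.69 m/s at "ridiculous" speed

for lag_s in (0.15, 0.30):            # assumed camera-to-detection lag
    print(f"lag {lag_s:.2f} s -> train moves {100 * speed_m_s * lag_s:.0f} cm")
```

This lands in the 10 to 20 cm range quoted above; at a scale-realistic speed ten times slower, the overshoot drops to a centimetre or two.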

Raspberry Pi Basic Setup

You will need a Raspberry Pi. The software has been tried on both a 3 and a 4, and the per-frame processing time only differs by about 2 ms, so if you have a Pi 3 sitting around I would use it: it is plenty fast enough, it does not need the heatsink or fan that a Pi 4 does, and Pi 3s are cheaper.

To setup the Raspberry Pi you will need:

  • Raspberry Pi plus power supply
  • microSD card with the Buster operating system installed
  • Camera, SSH and VNC enabled
  • WiFi enabled and configured for your network
  • Python libraries - Python3.7, OpenCV and paho-mqtt
  • NodeJS and NPM
  • If you do not currently have a MQTT broker running, install Mosquitto
  • If you will have more than one Raspberry Pi sensor device on your layout, change the 'hostname' via raspi-config - the hostname can then be used (e.g. "raspberrypi.local") in place of the IP address for accessing web pages etc., which makes things much simpler!

Please note - these are guides and the required instructions to install 3rd party software may change over time - do not simply copy / paste and expect it to work!

Step 1 - Set up the Pi

Create SD card from the current Buster image from the Raspberry Pi site with the desktop.

Setting up your Raspberry Pi

Before ejecting (or after remounting)

create an empty file named "ssh" in the boot partition, using 'touch' or otherwise.

create wpa_supplicant.conf

country=GB # Your 2-digit country code
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
network={
    ssid="YOUR WIFI NETWORK NAME"
    psk="YOUR WIFI PASSWORD"
    key_mgmt=WPA-PSK
}

Boot up

By default the Pi will have a hostname of "raspberrypi", so you should be able to SSH to it using "raspberrypi.local" wherever you would otherwise use the IP address. If not, locate the IP address of the Pi on your network - I use the LanScan app.

Open a SSH window and login as pi / raspberry

sudo apt update
sudo apt full-upgrade
sudo apt clean
sudo raspi-config

Make sure you have

  • Boot to desktop as pi auto login - can change later if you wish
  • Camera if you will use the Pi one - not needed for USB camera
  • VNC enabled
  • Expand disk space
  • changed the hostname if you may have more than one of these Pis.

Save and exit, which will reboot the Pi.

Step 2 - Install more 3rd party software

Install Python

https://www.pyimagesearch.com/2019/09/16/install-o...

Follow ALL instructions. It takes about 10-15 minutes to go through the downloads depending upon your network and which Pi.

Then install some additional bits. For the MQTT client library for Python:

pip install paho-mqtt

For MQTT Broker follow:

https://www.instructables.com/Installing-MQTT-Bro...

In addition:

pip install scipy
pip install scikit-image

Install NodeJS

sudo apt-get install nodejs
sudo apt-get install npm

Install Netatalk if you use a Mac, as this allows you to connect Finder windows to the Raspberry Pi, which makes downloading and editing files easier.

https://pimylifeup.com/raspberry-pi-afp/

sudo apt install netatalk

Recommended - Install VNC on your PC. You can then locate your Pi's IP address and connect to the desktop.

You may wish to go into the raspi-config and set the display size to the maximum size you can handle on your PC

Optional - Install Mosquitto - MQTT Broker

You only need these instructions if you do not have a MQTT broker already installed anywhere on your network.

I installed Mosquitto MQTT Broker on the Raspberry Pi but you can place it on any PC including the one you use for JMRI, but if not on the Pi, you will need to modify the Python code to define the URL to the broker so that it can send messages to it.

Please follow the instructions to install and setup.

Mosquitto Home Page

Once installed, locate the configuration file and edit it, finding the lines below and changing them to the values shown. To edit:

sudo nano /etc/mosquitto/mosquitto.conf

You view the Mosquitto log files using

sudo tail -f /var/log/mosquitto/mosquitto.log

Edit the configuration file here /etc/mosquitto/mosquitto.conf as follows:

# Place your local configuration in /etc/mosquitto/conf.d/
#
# A full description of the configuration file is at
# /usr/share/doc/mosquitto/examples/mosquitto.conf.example

pid_file /var/run/mosquitto.pid

persistence true
persistence_location /var/lib/mosquitto/

log_dest file /var/log/mosquitto/mosquitto.log

listener 1883
allow_anonymous true
listener 9001
protocol websockets

include_dir /etc/mosquitto/conf.d

then restart it

sudo systemctl restart mosquitto.service

Raspberry Pi - Installing the Track Monitor Code

Screenshot 2021-01-30 at 12.48.45.png

The track monitor code is split into two parts:

  • A Python application which takes the camera images and identifies which sensors are active or not
  • A NodeJS server application to host a web page allowing a simple web app to setup the sensors and blocks as well as getting the reference images.

Download all the code onto the Raspberry Pi, into the Pi user's home directory and the folders as shown. Note that there is a folder and a shell script. Once the following instructions have been followed, the shell script will start the code each time the computer boots; it is also useful if you wish to change anything and re-run it.

Track Monitor code on GitHub

Setting Up the Camera and Setting Up to Run on "boot"

The tutorials for installing OpenCV will have created a virtual environment. Open a terminal window, then activate the virtual environment ready to work with Python and OpenCV:

workon cv

You can check that the camera is working and correctly positioned using the 'test.py' file. Open a terminal window on the Pi and "cd" to the files:

cd /home/pi/ModelRailway/TrackMonitor
python test.py

This will produce a live view in a window of the image as the application will see it. Once you are done, use Control-C to stop the program.

To enable all programs to run when the Pi boots up, edit the "/etc/rc.local" file

sudo nano /etc/rc.local

Before the final "exit" line, insert the following:

/sbin/runuser pi -s /bin/bash -c "/home/pi/startup.sh" > /home/pi/boot.log 2>&1

Next - install the web app and then use it to create the reference images and sensors.

Installing the Web Server

The web server requires a few NodeJS libraries.

cd ModelRailway/SensorEditor
npm install

You can verify the install by then starting the server manually.

sudo node server.js

Note - The 'sudo' is required because by default the server will use port 80 which requires system privileges to use. If you wish, you can change the port number and then you may no longer require the "sudo".

Next, open a web browser on your PC and, if you have not changed your Raspberry Pi's hostname, enter "http://raspberrypi.local". If you have changed the hostname, replace 'raspberrypi' with your new hostname.

(Note the hostname does not need to change unless you wish to run several of these Raspberry Pis on the same network!)

The page will open with a big red cross visible. This is a placeholder reference image - simply navigate using the "Manage Reference Images" button, where you will be able to add new images of your layout, and you can then remove the placeholder, which is there just to make sure the code has something to work with.

Configure JMRI to Work With MQTT

Screenshot 2021-01-18 at 15.21.49.png

If you are using JMRI, Open Panel Pro and go into 'preferences'. Add a connection of type 'MQTT' as shown.

The URL for the broker/server is "raspberrypi.local" or its IP address if this does not work on your network.

Once saved JMRI will restart. Note it will complain if it cannot connect to the MQTT Broker, so wait until the end!

Adding Sensors Into JMRI

Screenshot 2021-01-18 at 15.31.40.png
Screenshot 2021-01-18 at 15.31.50.png

Now when you go to add a sensor, there will be a tab for MQTT and you simply choose this and add the sensors you want. The detector will automatically number the sensors starting at 1 but there may be gaps if you remove some later.

Note that you can create many all in one go and JMRI will number them appropriately.

Using the Web App

Screenshot 2021-01-23 at 14.01.53.png

The web app is running on a simple NodeJS server.

The code is in

/home/pi/ModelRailway/SensorEditor

Once you have edited the /etc/rc.local file as described above and re-started, the server should be running. On your PC, open a web browser and enter the name "http://raspberrypi.local" or the IP address of the Pi if this does not work (Does not always work on MS-Windows).


If you need to, you can change the hostname of the Pi, for example if you wish to have more than one of these devices in use - i.e. you have a large layout and need several cameras to watch over it.

sudo raspi-config

The web page will initially show a blank area and a few buttons. Start by taking a reference image of the layout. Clear the track of all rolling stock, then click on the button to manage the reference images. On that page, click to take a new reference image.

Sensors and Blocks

Screenshot 2021-01-23 at 13.34.18.png

Using the buttons you can add sensors and blocks. Simply press the button of the required one and then move the mouse over the image and click.

If you add a sensor and click a second time, it will move. Alternatively, you can drag it to reposition it.

For blocks, click and keep pressed while you drag the mouse along the path of the track for the entire block.

The above images show the way it may look. Note that as you move the mouse, the system will automatically add more circles to the path - you do not need to keep clicking to add them!

Note too that you can change the size of the sensitive area (the circle shown) - see later for adjustments.

The Details Table

Screenshot 2021-01-30 at 13.05.06.png

Each sensor or block is added to this table. When you re-open this page, the current data will be used to fill it. The data lives in a file called POI.json which acts as the database for the application.

Each row is one sensor or block.

The ID is auto-allocated and is the number which this sensor will be known as by this system. You need to match these numbers up to those of the same sensor within JMRI. I.e. sensor 2 here is sensor 2 in JMRI.

A sensor is just a small area of the image marked by a circle of a given radius. You can change the size to suit. Note that a locomotive or truck must cause this circle's image to change with respect to the reference images taken when the train was not there. Therefore if you make the area too large, the difference when the train is present may not be enough to trigger it. Likewise, if you make the area too small, camera noise (the constantly changing colour of the pixels) may cause the sensor to trigger too easily, without a train or truck being present. A 5-pixel radius is the default.

As mentioned, each area is tested against the reference images. It is scored using PSNR (peak signal-to-noise ratio), which is measured in dB (decibels). You can vary the sensitivity of a sensor by changing the value from the default of 22. For a good match of images in good lighting, when nothing obstructs the sensor's view, the score will be around 30 or more, but as soon as a locomotive enters the area it will drop to around 10. This depends on the lighting and on the colour of the background with respect to that of the locomotive or truck, and so may need to be adjusted for best results.
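The scoring can be sketched as follows. This is a numpy approximation of PSNR for 8-bit image patches (scikit-image's `peak_signal_noise_ratio` computes the same quantity); the exact details of the project's computation are an assumption, but the shape of the decision is as described above: score high means no obstruction, score below the sensitivity threshold means triggered.

```python
import numpy as np

def psnr(patch, reference, peak=255.0):
    """Peak signal-to-noise ratio in dB between two 8-bit image patches."""
    mse = np.mean((patch.astype(float) - reference.astype(float)) ** 2)
    if mse == 0:
        return float("inf")      # identical patches
    return 10.0 * np.log10(peak ** 2 / mse)

def triggered(patch, reference, sensitivity=22.0):
    """A sensor fires when its patch scores below the sensitivity threshold."""
    return psnr(patch, reference) < sensitivity
```

A patch only slightly off the reference scores around 40+ dB and does not trigger, while a patch covered by a dark locomotive against a light background scores under 10 dB and does.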

Once you have changed values or added sensors, click the save button to send these to the application.

You can watch the sensors being triggered (but not with a live camera view - it can only show a static reference image) by clicking on the 'overlay sensor activity (live)' button. The web page is constantly monitoring the live sensor data and updating its own state model. Thus it can show the current state at any time by highlighting the sensors that are triggered.

If you need to remove a sensor or block definition, click on the 'delete' button on its row in the table.

To edit, click on the sensor or block and it will be highlighted in the table. Then either drag a sensor to a new position or add more points to the block. Once you have clicked 'edit', that sensor or block is in scope. If you then click 'undo', the last point added will be removed, even if you added it ages ago. So if you trace a block region badly, you can click undo repeatedly until you are back to a good point and then start dragging the mouse to add new positions.

Allowing for Different Lighting Conditions

Screenshot 2021-01-23 at 14.02.58.png

If you have different ways in which the layout may be lit, avoiding very dark or very bright extremes, then you can click on the "take reference image" button which will store the current camera view as a new reference image.

Make sure to remove ALL rolling stock before taking a new reference image! The new image will be shown in the web page when it is ready allowing you to start adding sensors to it.

Hints When Tuning Sensor Sensitivity

Screenshot 2021-01-23 at 14.06.01.png

Each sensor and block can have its own sensitivity adjusted to help get a reliable trigger.

Click on the "Monitor Live" button for the sensor and after a few seconds a graph will appear showing samples as they are gathered. The graph updates every few samples and shows a maximum of 1000 giving a few seconds within the graph display.

Adjust the sensor sensitivity value to position it between the active and inactive values as you see them in the graph. Once done, click 'save' and the software will restart and use the new values.

Note that with some camera angles it may not be possible to see the track clearly when another locomotive is on a nearer track - in this case just move the sensor position so that it sees the top of the train rather than the track. If you cannot resolve the issue this way, consider repositioning the camera.