Node-RED Console for Teaching Physical Computing / AI / IoT

by brendonhatcher

the-console.jpg

Through Code Club I have come to appreciate the value of visual, no-code approaches to teaching computational thinking. I wanted to expand upon the physical computing education possible with a Micro:Bit, and also to introduce IoT and AI to Code Club audiences (9- to 13-year-olds in schools, libraries, makerspaces and youth clubs).

I stumbled on TJBot, which was my first exposure to Node.js. I was intrigued by the API-based method of interacting with Watson using a Raspberry Pi, but I considered Node.js too complex for my intended audience, and TJBot's sensors and actuators too limited for my purpose.

At the same time, my employer was throwing away an old 3G router, and I wondered if I could squeeze a Raspberry Pi and other bits inside. Gradually the idea of repurposing the router enclosure to create a physical computing / IoT / AI console took shape.

My current build includes:

  • A Raspberry Pi
  • A webcam
  • A microphone (in the webcam)
  • An ultrasonic sensor
  • A capacitive touch sensor
  • A momentary switch
  • A DHT11 or DHT22 temperature / humidity sensor
  • 4 LED outputs (red, green, white, and an RGB LED)
  • An SG90 servo motor
  • A Bluetooth speaker

Combined in a multitude of ways, these components allow the user to explore various aspects of physical computing, IoT and AI.

All of the interactions - inputs, outputs, AI calls and so on - are handled using Node-RED, in keeping with my goal of employing visual / no-code programming (although, strictly speaking, Node-RED is a low-code platform).

This instructable covers:

  • The build, and lessons learnt
  • Dealing with wireless connections in various venues (I’ll be using the console in many workshop locations)
  • How I’ll be using the console in educational and maker contexts
  • Plans for the next iteration of the build

NOTE: All hyperlinks to products are Amazon UK affiliate links

Updates:

5 Jan 2021 - added Fritzing diagram file

Component Selection

component-selection.jpg

Initially I ran Node-RED on my PC.

However, this didn’t give me the physical computing elements that I wanted to include. It did, however, confirm that Node-RED was a good platform choice for my educational goals.

I turned to the Raspberry Pi, because I had one spare and had always wanted to learn how to use the GPIO pins. In retrospect, this was a good choice, because the graphical user interface reduced the build complexity, the setup effort, and the end-user learning curve.

I thought that I would run everything on the Raspberry Pi, but soon realised that my “console” ceased to be a compact, console-ish device by the time I added a screen, keyboard and mouse. Doing everything on what would effectively be a Raspberry Pi desktop replacement also drove up the cost. I want to keep the cost down so that I can make lots of these for use in workshops for small groups.

A headless Raspberry Pi keeps the build low cost and compact. In a workshop setting, several devices can log in to the same console at once, expanding user access (provided the participants can manage who is editing and who is viewing the flows).

Later on I discovered that running the Node-RED interface on my laptop, over an HTTP connection to the Raspberry Pi, performed much better than running Node-RED in Chromium on the Raspberry Pi itself - so again, a good choice.

I then turned to the other components.

I knew from my research into TJBot that I definitely wanted to include all of the voice assistant elements of Watson, along with Watson’s image recognition and audio analysis capabilities (tone analysis, language detection and translation). So I needed a webcam, microphone and speaker.

I tried a USB microphone (dongle form factor), but found the audio pickup too poor (especially once the microphone was inside the enclosure). I also had a cabled microphone, but this went against my goal of having an integrated console without peripherals. I was about to break it apart to just use the components, when I realised that my webcam had a perfectly acceptable microphone.

I wanted user input to break out of the virtual world to the physical, so I added a momentary switch, capacitive touch sensor and ultrasonic distance sensor.

Turning to IoT, I wanted to include some environmental sensors, starting with temperature and humidity. I discovered that the temperature module I had was analogue, and the Raspberry Pi has no hardware analogue-to-digital converter. Fortunately, there are two digital temperature / humidity modules readily available (the DHT11 and DHT22); Adafruit explains the differences between them. I had one of these, wired it in and promptly burnt it out because the pins were incorrectly labelled (that’s my story, and I’m sticking to it!).

Update: I got a new DHT11 module and it is working correctly.

The 3G router enclosure had light pipes for a series of LED indicators (e.g. wifi connection status). I used these to include a red, a green, a white and an RGB LED.

In keeping with the physical computing theme, and also to introduce a bit of robotics, I added an SG90 servo. At the moment it just has a standard horn on it, but I plan to make a dial for it using an Inkscape plugin.

Finally, I added a Bluetooth speaker for audio output. I had intended to connect a speaker via the audio jack, but the audio jack and GPIO-driven LEDs both use the same PWM controller, so the LEDs flicker when audio plays! I could have used a USB-to-audio adapter, but it wouldn’t fit in the enclosure.

The Bluetooth speaker came with its own challenges.

The one I bought had to have the battery attached in order to work. I was hoping to simply disconnect the battery and power the device using 5V off the Raspberry Pi. I didn’t want to leave the battery connected and permanently charging from the 5V pin, as I couldn’t trust that the speaker had a smart charging circuit. Eventually I left the battery attached, ran a line from the Pi to the speaker charging port, and then interrupted the line with a latching switch accessible from outside the enclosure. When the battery runs flat, I flip the switch and the battery is recharged from the Pi. Of course, the battery will eventually run out of charging cycles, so a better solution would be welcome.

The other problem is that the speaker board has a momentary switch for powering on and pairing. Once inside the enclosure, I had no access to the button. Again, I wired in my own momentary switch accessible from outside the enclosure.
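
If you ever need to re-pair the speaker on the headless Pi, bluetoothctl over SSH does the job. A sketch - the MAC address is a placeholder you pick up from the scan output:

  bluetoothctl
  # then, at the interactive [bluetooth]# prompt:
  #   power on
  #   scan on                     # wait for the speaker's MAC address to appear
  #   pair XX:XX:XX:XX:XX:XX
  #   trust XX:XX:XX:XX:XX:XX     # trusted devices reconnect automatically
  #   connect XX:XX:XX:XX:XX:XX
  #   quit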

Layout

layout svg.png

Now that I had assembled all the pieces, it was time to see if they could all fit appropriately in the 3G router enclosure. Initially I thought that there was plenty of room. However, the enclosure had some internal elements that limited the space, and some components had connections that logically determined their location and orientation. It turned out to be a really tight fit.

I used Inkscape to mock up the layout, using my digital calipers to measure the enclosure internals. This took way longer than I anticipated. I downloaded a vector file of a Raspberry Pi and then added rectangles next to the ports I would be using so that I could account for the plugs when positioning the Pi. USB plugs are ridiculously huge when viewed from this perspective.

Wiring

fritzing-breadboard.png
fritzing-perfboard.png

Bearing in mind that this was my first attempt to use the Raspberry Pi GPIO pins, I had a bit of a learning curve ahead of me. I paid close attention to this learning, because I’ll need to replicate at least some of it in my educational programs for contexts in which GPIO is new to participants.

  1. Pins can be referenced by their physical position on the header (BOARD numbering, which proceeds logically) or by GPIO number (BCM numbering, which looks scattered at random).
  2. Some pins simply provide power (live whenever the Pi is on), some are digital high/low pins, and some have specialised functions (I2C, SPI, UART).
  3. There are multiple ground pins (I still haven’t worked out if the ground pins in one row are any different from those in the other row).

This website has a useful interactive tool for researching the pins.
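
If gpiozero is installed (it comes with the standard Raspberry Pi OS desktop image), the Pi can also print its own pin reference at the terminal:

  pinout   # prints a labelled diagram of the GPIO header for your Pi model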

I used Fritzing to plan my wiring, and this helped me to iterate quickly as my ideas took shape.

I then modelled and tested my wiring on a breadboard, before considering the final build.

Initially, I planned on wiring all the components directly to the GPIO pins. However, every component requires a ground connection - more grounds than the Pi has pins for. In addition to a signal pin, some components also require 3.3V or 5V - again, more than the number of available power pins on the Pi.

Perfboard to the rescue! I soldered header pins to the perfboard and joined up the appropriate pins to create 3.3V, 5V and ground rails. This meant that all the components could share a single 3.3V, 5V and ground pin on the Pi, connecting via the perfboard.

By the time I had planned the perfboard layout, I realised that having wires going from each component to the perfboard and directly to the signal pins on the Pi was confusing, and would make positioning the components more complex.

I therefore added (otherwise superfluous) signal pins to the perfboard as well. All wires from a component terminate on the perfboard (so they are all the same length), and a signal wire then connects from the perfboard to the Pi.

I used Dupont cables to connect everything.

Strictly speaking the momentary push button should have been wired with a resistor to protect the GPIO pin, but the circumstances under which the pin could get damaged are rare, and probably not possible in this particular setup. If I’m wrong, please tell me why in the comments!
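
On a related note, the Pi GPIO input node can enable the Pi’s internal pull-up or pull-down resistor from its edit dialog, so no external resistor is needed just to hold the input at a defined level (the protective series resistor above is a separate issue). The same setting can be checked from the terminal; GPIO 17 here is just an example pin:

  raspi-gpio get 17      # show the current state of GPIO 17
  raspi-gpio set 17 pu   # enable the internal pull-up (pd = pull-down, pn = none)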


Software Setup

update_sudo_rpi.png

Raspberry Pi

  1. I installed the standard Raspberry Pi OS (Raspbian Buster)
  2. Changed the default password
  3. Ran all updates
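
Steps 2 and 3, for reference, are standard terminal commands:

  passwd                                    # change the default password for the pi user
  sudo apt update && sudo apt full-upgrade  # run all updates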

Node-RED and dependencies

  1. I installed these using the instructions here.
  2. Set to run as a service
  3. Set to autostart on boot
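
At the time of writing, those instructions boil down to the official install script plus enabling the systemd service:

  # Official Node-RED install script for Debian-based systems, including Raspberry Pi OS
  bash <(curl -sL https://raw.githubusercontent.com/node-red/linux-installers/master/deb/update-nodejs-and-nodered)

  # Run as a service and autostart on boot
  sudo systemctl enable nodered.service
  sudo systemctl start nodered.service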

Node-RED security and https

I am still reviewing all the security options (in particular, the security requirements for IoT, AI and workshop scenarios). For now, the official documentation on securing Node-RED is a good starting point.

Node-RED nodes

Node-RED has a modular architecture, with a library of nodes (modules) to meet specific needs. I installed some as needed for each component and function.

Just a couple of notes on this:

  • The Raspberry Pi nodes are installed when you follow the Node-RED installation instructions above.
  • I cover specific nodes for specific functions below.
  • As with all user-contributed plugins, some are abandoned and no longer work. In some cases there are several nodes for the same function, and you’ll need to compare them.
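
Nodes can be installed from the editor (menu > Manage palette) or from the terminal. node-red-dashboard below is just an example - substitute whichever node you need:

  cd ~/.node-red
  npm install node-red-dashboard
  node-red-restart   # convenience command installed by the Pi install script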

Service registration and API keys

Interacting with Watson and other services requires user registration, followed by generating API keys (and this process can be quite complex in itself).

Although it would be useful for participants to go through this process, time constraints, the complexity of Watson setup, and the need to store and retain passwords for future sessions all suggest that it is better for the instructor to register and provide keys for participants to use.

Multiple Wifi Environments

vnc.png

I'll be taking one or more of these consoles to various locations to run workshops. I'll need to connect each console to the local wifi. There are three challenges:

  1. Connecting a headless Raspberry Pi to a new wifi network.
  2. Interacting with the headless Raspberry Pi GUI.
  3. Connecting to Node-RED on the Pi from a separate device.

Connecting a headless Raspberry Pi to a new wifi network

Most tutorials on connecting a headless Pi to a new wifi involve creating a wpa_supplicant.conf file. However, this involves taking out the SD card, which is a hassle because I have to open the whole console to get at it. Imagine setting up for a workshop in a new venue and having to pull SD cards out of multiple consoles!
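
For completeness, the wpa_supplicant.conf approach is to drop a file like the following (the network details are placeholders) into the SD card’s boot partition, where it is picked up on the next boot:

  country=GB
  ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
  update_config=1

  network={
      ssid="VenueWifiName"
      psk="VenueWifiPassword"
  }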

Another option is a direct Ethernet connection, using an Ethernet crossover cable and Internet Connection Sharing from the laptop. Once connected, overcome challenge 2 (below), and then connect to the wifi from the wifi icon at the top right.

I guess I could use the workshop leader’s PC to do this for each console, but I wouldn’t want the extra work of getting each participant to do it. I also want to avoid a cable running from the laptop to the console, since I want to model IoT implementations where the sensors and actuators are remote from the control interface.

The most useful approach is to turn the Raspberry Pi into a temporary Wireless Access Point with a known SSID and access credentials. Once connected to the Pi-as-AP, select the actual wifi SSID and enter the access credentials. The Pi reboots and uses the wifi access details to connect to the Internet. Neat! I found 2 projects that do this: raspberry-wifi-conf and RaspiWiFi. I think the second is easier to configure.

If the router is accessible, using the WPS push button may be an option.

Interacting with the headless Raspberry Pi GUI

So far, I can see no reason why workshop participants will need to do this, as all their activities will be done in the Node-RED web interface from a laptop.

From a technical point of view, I needed access to the GUI to:

  • Test the microphone and webcam
  • Use terminal to update the stack (yes, I could have used SSH)
  • Review, reorganise, rename, and delete saved flows (these are in /home/pi/.node-red/lib/flows)
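
The microphone test, at least, doesn’t really need the GUI - the ALSA tools over SSH are enough. The card/device numbers below are assumptions; check the arecord -l output for your webcam:

  arecord -l                                  # list capture devices; note the webcam mic's card number
  arecord -D plughw:1,0 -d 5 -f cd test.wav   # record 5 seconds from card 1, device 0
  aplay test.wav                              # play it back to check the recording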

I found the simplest way was to use VNC. I registered a RealVNC cloud account so that I can always detect the console no matter what wifi it is connected to. The cloud account is free for personal use, so it will be fine to use for free workshops.
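
The VNC server is enabled under Interface Options in raspi-config; it can also be scripted, which helps when preparing several consoles:

  sudo raspi-config nonint do_vnc 0   # 0 enables the VNC server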

Connecting to Node-RED on the Pi from a separate device

I changed the hostname of each Pi to a unique name to simplify running a workshop with multiple consoles (I need to be able to distinguish them).
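
Either edit /etc/hostname and /etc/hosts directly, or let raspi-config handle both. A sketch, with console-01 as an example name:

  sudo raspi-config nonint do_hostname console-01
  sudo reboot   # the new hostname takes effect after a reboot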

Accessing Node-RED via a device (e.g. laptop) on the same network is as simple as browsing to http://uniquehostname.local:1880/.

Assembly

breadboard.jpg
internals.jpg

I first assembled and tested everything on a breadboard, then did the perfboard assembly.

I removed everything from the inside of the 3G router enclosure.

Using a Dremel I cut holes for the webcam cable, ultrasonic sensor, servo and momentary switch, and trimmed away some of the internal enclosure struts and reinforcing to make space for my components.

I found it really hard to position the Dremel and cut away only the necessary material. To be honest, the result is embarrassingly shoddy. I would have used a laser cutter, but the Covid-19 lockdown prevented me from accessing the London Hackspace. Next time...

I held everything in place using a glue gun since I didn’t have stand-offs to properly mount the boards.

I discovered some wiring and component conflicts. For example, a wire ran across the back of the capacitive touch sensor and prevented it from working. I had to add some foam padding to prevent this.

The webcam’s USB cable has a ferrite bead. I coiled the cable and shoved it into the corner of the enclosure. The recording quality has dropped, and I need to check whether this is the reason.

The Master Flow and Dashboard

master-flow.PNG
master-dashboard.PNG

I created a flow (at http://uniquehostname.local:1880/) and dashboard (at http://uniquehostname.local:1880/ui/) containing simple examples for all of the components.

Each example forms one or more flows using a combination of:

  1. The default nodes available in a standard Node-RED installation.
  2. Raspberry Pi-specific nodes installed with Node-RED.
  3. Additional nodes required for the specific component.

These masters are used for the initial activities, and also as a reference for future activities and tinkering that participants might want to do.

Component Configuration

node-library.png

This is a high-level summary of the nodes installed and configured to enable each of the components. Where appropriate, I have explained why I chose a specific node, flow or configuration to suit the context of this project.

Ultrasonic sensor

Nodes required: node-red-node-pisrf
Master flow: Ultrasonic sensor. Outputs current value to the debug window every x seconds.
Master dashboard: node-red-dashboard text node

Capacitive touch sensor

Nodes required: node-red-node-pi-gpio
Master flow: Capacitive touch. Output 1/0 to the debug window only when the state changes.
Master dashboard: node-red-dashboard switch node

Momentary switch

Nodes required: node-red-node-pi-gpio
Master flow: Momentary push. Output 1/0 to the debug window only when the state changes.
Master dashboard: node-red-dashboard switch node

Single color LEDs

Nodes required: inject, node-red-node-pi-gpio
Master flow: Any of the Red, Green or White LEDs. Inject to trigger the LED on or off. Set the GPIO state to turn the LED on or off.
Master dashboard: node-red-contrib-ui-led

RGB LED

Nodes required: inject, node-red-node-pi-gpio
Master flow: Uses a KY-009 RGB LED. Inject to trigger the individual R, G, B LED elements on or off. Set the GPIO state to turn the individual R, G, B LED elements on or off.
Master dashboard: node-red-contrib-ui-led

I need to add a diffuser to blend the outputs from the 3 LEDs, as they are visible as discrete bulbs.

SG90 servo motor

Nodes required: node-red-node-pi-gpiod
Master flow: Servo control. Inject nodes to trigger rotation. GPIOd passes instructions to the servo.
Master dashboard: node-red-dashboard gauge node

Use GPIOd because the software PWM from the standard GPIO node causes the servo to jitter a lot.

The node has options to tweak rotation and alignment, so fiddle with the range to influence alignment.

Note the various ways to define angle.

In addition to installing the node, you'll need to follow these instructions to enable pigpiod and set it to autostart.
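
On Raspberry Pi OS, pigpiod ships as a systemd unit, so enabling it and setting it to autostart is two commands:

  sudo systemctl enable pigpiod   # autostart on boot
  sudo systemctl start pigpiod    # start it now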

Webcam

Nodes required: node-red-contrib-rpi-imagecapture
Master flow: Inject to take a shot, Base64 to encode, custom UI template to display
Master dashboard: Follow these instructions:

  1. Install node-red-contrib-rpi-imagecapture
  2. Install node-red-node-base64
  3. Install the fswebcam package
  4. Create a fswebcam config file at /home/pi/fswebcam.conf
  5. Insert and adjust the config settings from the example found here
  6. Apply the code fix
  7. Create a flow based on this one, replacing raspistill (which only works with the Raspberry Pi camera module) with imagecapture (which works with a USB webcam)
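
As a sketch of steps 3 to 5 (the settings shown are illustrative, not my exact values):

  sudo apt install fswebcam

  # Example /home/pi/fswebcam.conf - config files use long option names without dashes:
  #   device /dev/video0
  #   resolution 1280x720
  #   no-banner

  # Take a test shot from the terminal using that config
  fswebcam -c /home/pi/fswebcam.conf /home/pi/test.jpg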

Image dimensions are set both in fswebcam.conf AND in the template node’s hard-coded HTML (so I just removed the image dimensions from the template HTML).

I’ll probably need to review and tweak capabilities as I seek additional functionality - e.g. webcam streaming.

I reviewed and excluded the following nodes as unsuitable for my specific context:

  • node-red-node-ui-webcam uses the laptop’s webcam, not the one attached to the Raspberry Pi!
  • node-red-contrib-node-webcam is unsupported and appears broken
  • node-red-contrib-camerapi uses the RPi camera module (not a camera attached via USB)

Bluetooth speaker

Nodes required: node-red-contrib-speakerpi
Master flow: Play sound sample. Inject node to trigger play.
Master dashboard: None working yet

DHT11 temperature / humidity sensor

Nodes required: node-red-contrib-dht-sensor
Master flow: Trigger a sample reading, or trigger a series of readings.
Master dashboard: node-red-contrib-ui-value-trail to show the data.

Follow the installation instructions, starting with those at the end of the doc for running without sudo access.

APIs

Getting started

My initial goal is to register a set of generally-useful APIs that can be used by workshop participants for both the recipe-based activities and the tinkering/freeform activities.

The initial set includes:

  • All of the free Watson features - assistant, language and tone analysis, image recognition and so on
  • A weather service - OpenWeatherMap
  • TfL (Transport for London)
  • Twitter

I’m not going to document the registration and API setup details here, because that would make the Instructable way too long (and those platforms are prone to sweeping UI reorganisations anyway, making detailed instructions obsolete). The Node-RED forum is a good place to start if you are looking for API instructions that match Node-RED requirements.

Managing multi-user access

I could use a single email account, a single platform registration and a single API key for multiple consoles, but many APIs have concurrency or data-fetch limits.

I may need to consider creating a Gmail account per console and then registering for each API using each Gmail account. However, that is a lot of work.

Privacy and Safeguarding

My target audience is minors within Code Club and similar settings. Given the data collection and transfer involved in the webcam, microphone and API calls, there are privacy and safeguarding issues to consider.

One of my Code Club participants showed me the way that these issues should be approached. Whenever he had to fill in a form, he would carefully interrogate each field, wanting to know why the information was needed, and what it would be used for. Then he would pause, consider whether that was acceptable, and then fill in that field if he was happy to do so!

So, this is what I would tell participants and their guardians, and it would form a part of the workshop experience and learning, rather than something to just rush through as a formality.

Loading the Node-RED Master flow and dashboard interfaces

The following statements apply when loading the master screens, but doing nothing further:

  1. No user accounts are required or created for any workshop activities.
  2. Your personal information is not collected.
  3. No cookies are created, stored or accessed for the Master flow.
  4. A single cookie is created for the Dashboard (session only, no personal data).
  5. No third-party sites or code are loaded.
  6. No local data is transferred to a third-party site.

Workshop activities

  1. Depending on the nodes used, data may be collected and transferred.
  2. A privacy statement at the beginning of each activity will define the data collected and transferred, and provide further guidance on appropriate steps to be taken.
  3. Where appropriate, means of mitigating privacy concerns are provided.

Next Build Iteration

Whilst the router enclosure is rugged (important for kids’ workshops), it is a black box (see what I did there?). If I want participants to understand the hardware layer, I’ll need to be more transparent in my build. I think a clear acrylic enclosure like this one is probably the way to go.

I realised that the current wiring would be almost impossible for my target audience to follow. If I want them to engage with the hardware layer, I’ll need to do it differently. This instructable on glass circuit boards looks like it could work well with the clear enclosure idea. At the very least, I need to run the wires in a way that users can visually follow.

My green and red LEDs are from one source, and the white from another. The white is MUCH brighter than the others, and I am bothered by the lack of consistency. In my next build, I'll seek LEDs of matching brightness.

Given the frustrations I am experiencing with Bluetooth speakers, I’d probably opt for a USB-to-audio adapter and forgo Bluetooth audio entirely (I won't be using the router enclosure, so space won't be an issue).

I'm very aware that the whole setup so far has been for just a single console. I am sure that I'll have to modify my approach to API registrations, API keys, managing master flows and example flows etc when I am managing multiple consoles.

Activities

I’m going to divide the activities into Introductory, Intermediate and Advanced, and, to some extent, present them in a specific order so that each builds on previous ones.

Introductory activities - Introduce the console’s capabilities, nodes and flows, privacy issues, themes (physical computing, IoT, AI). Activities are largely recipe-based (step-by-step instructions provided).

Intermediate activities - Deepen understanding of concepts already covered, and introduce new concepts. Activities are still mostly recipe-based.

Advanced activities - These cover important principles and approaches for participants to create their own flows from general principles. Activities provide guidance and suggest areas for further exploration.

I'm not yet sure where and how I'll present these. I'm not going to add the full details here, as that would make this instructable ridiculously long. I'll put a summary here, and the fuller activity descriptions in an attached PDF.

Introductory Activities

1. First encounter
Explore the physical console, open and interact with the Master flow to control the hardware. View changes on the dashboard. Build a very basic flow following a recipe.

2. Blink
Create a new flow. Manually trigger one of the single colour LEDs. Modify the flow.

3. Trigger
Explore the momentary push button. Connect it to an output (e.g. make an LED light up).

4. Sense
Explore the capacitive touch button. Connect it to an output.

5. Move the servo
Wiggle the servo arm. Modify the flow to establish new end points on the dial.

6. Take a photo
Trigger the camera to take a photo. See the photo on the dashboard.

Intermediate Activities

1. Dashboard development
What is the purpose of a dashboard? Explore the master flow dashboard, with an emphasis on the dashboard nodes and how they work. Add a new dashboard node to the master flow.

2. Sense - distance
Watch a video on how these sensors work. Review the code in the example flow. Watch the output in the debug window.

3. Momentary push button revisited
Explore pull-up and pull-down resistors. Revisit the Trigger activity and explain the purpose of the switch function.

4. Sense - Temperature, humidity
Explore obtaining environmental data. Trigger the sensor manually, and expand the resulting payload in the debug window. Modify the sample rate with reference to the datasheet. Add the real time data to a dashboard.

5. Data fetch
Introduce concept of APIs. Explore geocoding approaches for location-specific data. Use an API call for local weather data.

6. Rainbow
Flash and cycle RGB LEDs using GPIO PWM.

7. Startup systems check
Introduce the concept of system checks and the need for them. Look at the example flow. Explore options and elaborations on the concept.

8. Image recognition
Explore the challenges of image recognition. Explore image classification by AI. Test Watson’s image classification.

9. Data push - Twitter
A button push posts a message containing onboard data to Twitter. Refresh the Twitter feed on the dashboard.

10. Voice - translation
Translate voice data in real time using Babelfish.

11. Voice - control
Use your voice to control the servo.

12. Voice - chat
Explore Watson Assistant - like Siri, Google Assistant etc, but programmable.

13. Scheduling
Schedule turning on an LED (to simulate turning on a light in the home).

Advanced Activities

1. Making your own flows
Planning flows, installing nodes, saving and exporting, deploying.

2. Running multiple flows at the same time
Techniques and implications, subflows.

3. Understanding the stack
Understanding the hardware, software and user interaction layers as an aid to building Node-RED applications.

4. Go faster!
Explore the effects of sampling rate on functionality and performance.

5. Beyond GPIO basics
The pins can be used for other forms of communication, such as I2C. Also compare the standard RPi GPIO node’s PWM with GPIOd.

6. Persistent data storage
Building database-connected flows and Node-RED applications.

7. Persistent Node-RED applications
Node-RED inputs, outputs and data storage on a Raspberry Pi stop when the device goes offline. Explore cloud-based Node-RED as an “always on” alternative.