Solve a Sudoku Puzzle With a CNC Machine
by GregF10 in Circuits > Robots
399 Views, 3 Favorites, 0 Comments
Solve a Sudoku Puzzle With a CNC Machine
I created my first Sudoku robot in 2013. It was the first robot I created, as a capstone project for a robotics course at the local community college. I based it on an industrial-class "XYZ table" that was around 6 feet x 4 feet. I used a Raspberry Pi 1 microcomputer as the "brain", with an attached Raspberry Pi camera v1. I used a 24" x 18" whiteboard to present a 4x4 Sudoku puzzle. The Pi took a photo of the whiteboard, ran a simple heuristic to find the Sudoku puzzle, solved the puzzle, and drove the XYZ table to draw the solution on the whiteboard with a dry-erase marker.
After the 2013 project, I spent several years creating a handful of autonomous mobile robots; the last took over two years to finish. I then spent a few years on various projects, mostly unrelated to robotics. Recently, I felt the urge to create another robot.
While pondering what to build, I developed some criteria for shaping the nature of the to-be robot. First, I wanted to scope the project so that it might take closer to two months rather than two years. Second, I wanted it to be fun. Third, I wanted to learn something new. Fourth, to the extent possible, I wanted to reuse components I already owned.
I thought back to that first Sudoku robot. I decided to replace the massive XYZ table with something that would fit on a desktop. I also decided to replace the whiteboard with a capacitive touch tablet running a Sudoku app with 9x9 puzzles.
I examined my parts inventory. I found I was missing primarily the parts needed to replace the XYZ table. A bit of investigation suggested an entry-level CNC machine would recreate the required capabilities at a reasonable cost, and with little work for me. The approach for the Sudoku robot became:
- Take a photo of a Sudoku puzzle on a tablet
- Analyze the photo to identify the numbers and blank cells in the puzzle
- Solve the puzzle
- Drive the CNC machine to render the puzzle solution on the tablet
I concluded this scope could satisfy all the criteria, and I initiated the project to solve a Sudoku puzzle with a CNC machine.
As with any project, I expected surprises on the journey from concept to reality. Indeed, the journey was both cautious, due to my unfamiliarity with CNC machines, and meandering, as I adapted to surprises and improvements. What I'll describe eliminates most of the meandering, but retains most of the caution on the journey.
Supplies
Consistent with the Introduction's generic description of the Sudoku robot, I provide generic component names for my design. In Step 1, I identify the real components that I used in my implementation. If you create a different design, even this generic list could change.
CNC machine
Electronics
- Microcomputer
- Camera
- Tablet with Sudoku app
Camera support frame
- Vertical and horizontal parts
- Camera holder
- Fasteners
Miscellaneous
- Stylus for tapping on the tablet
Tools
- Soldering iron
- Hex wrench
- Screwdrivers
Choose the Components
CNC machine
I had to purchase a CNC machine, and I found lots of options. After learning a bit about CNC machines, I decided on the SainSmart Genmitsu 3018 PROVer V2. The size is quite compatible with my limited space. The specifications are quite good for the very reasonable price.
Electronics
For the microcomputer, I could choose between a Raspberry Pi 3B and a Pi 4B. I used the 4B because it has the better performance.
For the camera, I could choose between a Raspberry Pi camera v1 and a Raspberry Pi camera v2. The v1 would not work well because its focus is fixed at a minimum focal distance of 1 meter, while the v2's lens can be rotated to adjust focus. So, I used the v2. This page describes the Raspberry Pi camera modules and how to attach one to a Pi.
For the tablet, I used an old iPad Pro, for a few reasons. I did not want to purchase a tablet. It fit into the CNC machine's workspace. And if I damaged it, I would not lose much.
For the Sudoku app, I chose Sudoku - No ads which is available from the Apple App Store. It is free. It is true to its name -- no ads. It also has no silly delays after the completion of a row, column, or block. It allows many levels of difficulty. There are a few missing features, e.g., no 'undo', but they don't impact the robot implementation. The first figure shows the New Game screen; it contains a "New game" button. A tap on the button starts a new game. The second figure shows the puzzle screen; a new unsolved puzzle dominates the left 2/3 of the screen; and at bottom right of the screen is the number pad. A tap in the puzzle selects an empty cell; a tap on a number fills that empty cell.
Camera support frame
To enable good photos of the Sudoku puzzle, the camera has to be mounted above the puzzle area on the iPad. So, the camera support frame had to have a vertical part and a horizontal part.
Because I had some, I used Actobotics® channel for the vertical and horizontal parts. You might have trouble finding the channel parts, as I believe they are no longer manufactured.
For a previous robot, I hand-crafted a holder for the Raspberry Pi camera v1 from a piece of wood. I reused it, making some slight modifications for the v2.
Miscellaneous
For the stylus, I purchased a capacitive stylus pen because I had nothing I felt would work as well. I originally hoped I could cut the stylus pen and fit it into a collet for the CNC machine. Alas, no; a surprise. The collet shipped with the machine supports only a 1/8" diameter bit. So I purchased a 1/8" brass rod, and soldered a replacement tip for the stylus pen to the rod to create a custom stylus.
Build the Hardware: Part 1
CNC Machine
The Genmitsu 3018 PROVer V2 ships as a partially assembled kit. It came with reasonably clear assembly instructions. It took me about 2 hours to assemble the machine. The hardest part was wire management!
I did not use the baffles because there was no need for them. I did install the spindle because I concluded I'd need to use it to hold the custom stylus; however, I did not bother to connect the spindle motor to the controller.
NOTE: In my actual journey to reality, I built the CNC machine after developing a majority of the software. I wanted to confirm that I could implement most of the software components before purchasing the CNC machine.
Camera Support Frame
I mounted the camera about 32.5 cm above the iPad, roughly centered over the Sudoku puzzle; that distance allows photos with sufficient focus and resolution. I put the channel pieces together with Actobotics®-compatible fasteners. I attached the complete camera support frame to the side rails of the CNC machine using M3 screws and M3 T Nuts. I attached the frame in a position that allowed minimum movement to get an unobscured photo of the puzzle screen.
Camera holder
I attached the camera to the holder using some M2.5 nuts and bolts. I attached the holder to the horizontal channel using Actobotics®-compatible nuts and bolts.
The camera connects to the Raspberry Pi with a very short ribbon cable. That forced me to attach the Raspberry Pi to the horizontal channel. I used a velcro band; definitely somewhat embarrassing, but the heat-dissipating case for the Pi gave me no choice.
iPad
I felt it important to prevent the iPad from moving on the CNC machine bed to maintain alignment with the camera and ensure accurate taps. I clamped the iPad to the CNC machine bed using the clamps from the machine kit. I kept the iPad in its case, so that the pressure from the clamps was on the case rather than the iPad itself.
Connections
I connected the CNC machine USB cable to the machine and the Raspberry Pi. I connected a 5V power supply to the Pi. I connected the CNC machine power supply. I used some zip ties to minimize cable chaos.
Set Up the Raspberry Pi and a Development Environment
When I develop code for a Raspberry Pi, I prefer to do as much as possible on my MacBook because it offers a superior development environment (better tools, better performance, bigger screen, and so forth). So, I had work to do in the following areas.
Raspberry Pi OS
When I use a Raspberry Pi in a project, I use what is now called the Raspberry Pi OS. My Pi 4B has 8GB of memory, and only the 64-bit OS can leverage all that memory. Since I wanted to do some development on the Pi itself, I decided on the desktop version of the OS, which includes development tools. You can find everything you need to know about installing the proper OS and setting up a Raspberry Pi here.
Remote access
There is no reason for the robot to have a display, keyboard, or mouse. Thus, I needed remote access to the Pi to develop code and control the robot. So, after installing Raspberry Pi OS, I connected the Pi to my WiFi network, and I enabled SSH and VNC. You can find out about remote access for the Raspberry Pi here. I used the RealVNC server and client.
Python
In keeping with the goal of a short project timeframe, I decided to program in Python, even though I consider myself a novice in the language. Python is mainly platform neutral, so I could develop and test a lot of code on my MacBook. Python supports the image processing library OpenCV on many platforms. Python has support on the Raspberry Pi for the camera, and for USB serial communication needed for the CNC machine.
Python development tools
The Raspberry Pi OS desktop provides Thonny for Python development. I find it pretty primitive, but it does offer debugging and decent support for virtual environments. Given that the robot software requires some non-OS provided libraries, using a virtual environment is mandatory on RPi OS.
For macOS, in my opinion, PyCharm is by far the best choice. It has more features than I can use, and certainly supports virtual environments.
Development workflow
Given the above, you can develop Python scripts on Thonny or PyCharm. You can use the file transfer tool in the RealVNC viewer to copy files from one platform to the other. RealVNC also supports copy/paste of text between the RealVNC viewer window and macOS windows.
Enable Puzzle Photos
The robot starts with a photo of a Sudoku puzzle on the iPad. I had to implement the basic ability to take photos, focus the camera, and align the camera and iPad to maximize the quality of a photo. Then I could implement the necessary camera software for the project.
Take photos
A quick search for Raspberry Pi camera support in Python turned up the picamzero library. However, during subsequent searches, Google's AI assistant said "the picamera2 library ... is the recommended library for newer Raspberry Pi models". I also found this helpful document on picamera2. Using it, I was able to create a simple Python script (shown below) to take a photo in a form compatible with OpenCV and save it to a file. I could then look at the photo, and the cropped version, using Image Viewer on RPi OS.
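Since the script itself is attached rather than reproduced, here is a minimal sketch of what it does, assuming the picamera2 and OpenCV (cv2) libraries; the crop-box numbers are illustrative stand-ins, not the project's measured values. The hardware portion runs only on a Raspberry Pi with a camera attached, so it sits under a main guard.

```python
# Minimal sketch: take a photo with picamera2 in a form usable with
# OpenCV, then save the full photo and a cropped copy.

def crop_box(x, y, w, h):
    """Return the (row, column) slices for an x/y/width/height crop."""
    return slice(y, y + h), slice(x, x + w)

if __name__ == "__main__":
    import cv2
    from picamera2 import Picamera2

    cam = Picamera2()
    cam.configure(cam.create_still_configuration())
    cam.start()
    photo = cam.capture_array()   # numpy array, OpenCV-compatible
    cam.stop()
    # channel order may need cv2.cvtColor depending on the configuration

    rows, cols = crop_box(400, 200, 1200, 1200)   # illustrative crop
    cv2.imwrite("photo.jpg", photo)
    cv2.imwrite("cropped.jpg", photo[rows, cols])
```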
I must mention that when running the script above I observed that creating and configuring an instance of the Picamera2 class produces several Warning and Info messages. It took quite a bit of investigation to determine that the messages, while very distracting, are no cause for worry.
Adjust the focus
The Raspberry Pi camera v2 ships with a minimum focal distance of roughly 1m. I repeatedly rotated the lens in small increments, took a photo, and looked at the photo until I achieved a decent (though not perfect) focus at 30-35cm. Rotating the camera lens is not easy; you can find some hints on how to do it here.
Align the camera and iPad
I had to adjust the position of the iPad, using the manual controls on the CNC machine, and the orientation of the camera holder to get the best possible alignment between the camera and the center of the puzzle on the iPad. I used an iterative approach similar to that for focusing.
Figure 1 is an example photo from the script above. Figure 2 shows the cropped image. When I took the photo, the focus and alignment adjustments were completed.
Camera software for the project
The simple script above does not support the needs of the project, so there was more work to do. The final script for camera support is attached. The SCamera class has a few points of interest:
- the class separates configuration from taking a photo to improve performance
- the take_photo method crops to the full puzzle or the left half of the puzzle; the latter supports differentiation between the New Game screen and the puzzle screen (more on this in Step 12); see the figures in Step 1
- the main method offers a better way of assessing camera focus or alignment, since it loops to eliminate the lengthy configuration
Downloads
Find a Sudoku Puzzle
After getting a photo of the Sudoku puzzle, the next action is finding the puzzle in the photo. Fortunately this was something I could implement and test on the MacBook.
I did not expect to discover existing technology to find a Sudoku puzzle in a photo from my environment. But I thought that I could use some form of OCR and adapt it. A bit of searching produced this article which mentioned EasyOCR. It seemed pretty easy to use, so I began testing it.
Testing
I did early testing with a quite pristine puzzle drawing from the internet and a pristine screenshot of a puzzle on the iPad (see figure 2 in Step 1). I learned very quickly that EasyOCR did not function well at all when looking at the entire puzzle photo.
I decided to split the puzzle image into 81 cell images and then use EasyOCR on each cell. Splitting of course involved a fair amount of OpenCV image processing. Even after splitting, I got a mixture of success and failure (e.g., at one point, EasyOCR consistently recognized all digits except '7').
I continued experimenting with forms of OpenCV image processing, and with some EasyOCR parameters. I eventually found a configuration that worked consistently. But, the performance was pretty slow.
Looking at an unsolved puzzle, I realized that the majority of the puzzle cells are blank, and there is nothing to recognize. I derived a simple approach to detecting an empty cell; it worked, but the overall performance was roughly the same. I realized that the image processing and EasyOCR are simply slow.
Next, I took a photo of the iPad puzzle screen. Compared to the crisp screenshot, the photo was somewhat distorted, had slightly blurry numbers and lines, had some noise, and had much less contrast. That said, a test resulted in success!
I was pleased, but decided that the photo, while not as good as the screenshot, might be about the best I could expect from the Pi camera. I added some extra OpenCV image processing to widen the range of acceptable images. I used Gaussian blur, Canny edge detection, and Hough line detection; I had to experiment with the parameters for the Canny and Hough algorithms, but found a good configuration that made finding the puzzle more robust.
During the Canny/Hough tuning, I realized that the edge information produced by the Canny algorithm could support a much more robust empty cell detection. An implementation using edge information for empty cell detection proved me correct.
A surprise
Once I had the puzzle finder working on my M1-based MacBook Pro, I copied it to the Raspberry Pi and tested it in that environment. I was surprised to find the Pi performed the task 60 times slower than the MacBook! Since the Pi took over a minute, I modified the overall approach so that the Pi sends the puzzle photo to the MacBook and the MacBook finds the puzzle and returns it to the Pi. Thus, the MacBook becomes a "puzzle server" for the Pi. The puzzle server might not be required with higher performance microcomputers.
I must admit that my surprise at the performance difference between an Apple M1-based computing system and a Raspberry Pi 4-based computing system lasted about 2 seconds. If anything still surprises me, it is that the factor isn't larger than 60.
Sudoku puzzle finder for the project
The final algorithm for the Sudoku puzzle finder is shown below. The puzzle finder produces a string to describe the puzzle for reasons discussed in Step 6.
- Rotate the incoming image
- Clean the image: convert to gray, Gaussian blur, find Canny edges, find Hough lines; crop to puzzle boundary
- Find the 81 cells in the original image
- Find the 81 cells in the Canny edges image
- For the 81 original cells
- if the edges cell is empty
- Append a '0' to the puzzle string
- else
- Gaussian blur the image cell to improve OCR accuracy
- Use EasyOCR to detect the number in the cell
- Append the number to the puzzle string
- Return the puzzle string
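To illustrate the empty-cell test in the loop above, here is a schematic sketch in which a plain 2D list of 0/255 values stands in for the Canny edge image of one cell; the 2% threshold is an assumption for illustration, not the project's tuned value.

```python
# A cell is treated as empty when almost none of its pixels are edge
# pixels in the Canny edge image.

def cell_is_empty(edge_cell, threshold=0.02):
    """Return True when the fraction of edge pixels is below threshold."""
    total = sum(len(row) for row in edge_cell)
    edges = sum(1 for row in edge_cell for px in row if px > 0)
    return edges / total < threshold

# a blank cell has no edges; a cell with a digit has edge pixels
blank = [[0] * 20 for _ in range(20)]
digit = [[0] * 20 for _ in range(18)] + [[255] * 20 for _ in range(2)]
```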
I've attached the Python script that implements the algorithm. The SudokuReader class separates the configuration of the EasyOCR reader from finding the Sudoku puzzle via the find_puzzle method so that the slow EasyOCR configuration has to be done only once.
Downloads
Implement a Puzzle Server
As described in Step 5, the Raspberry Pi sends a puzzle photo to the MacBook-based puzzle server and the MacBook finds the puzzle and returns it to the Pi. I believed the most efficient communication approach would use sockets.
Send images and puzzle descriptions between Raspberry Pi OS and macOS
I started developing a socket-based approach to transport OpenCV images from RPi OS to macOS, and return a puzzle description to RPi OS. While searching for answers to some forgotten question, I stumbled upon imageZMQ, specifically designed to "transport OpenCV images from one computer to another". While it has some limitations, a bit of testing showed that using it would save a lot of work.
Puzzle server for the project
In the imageZMQ client/server model, the client sends a text string and an OpenCV image to the server. The server returns a text string. So the puzzle server has to get a puzzle image, find the puzzle in the image, and return the puzzle as a string.
It is a good practice for the client to tell the server it is finished. So, the puzzle server also looks for 'end' in the text string, and terminates when found.
The Pi needs to know whether the puzzle server is running. So, the puzzle server also looks for 'test' in the text string, and simply returns 'ok' when found.
The algorithm for the puzzle server is
1. Initialize imageZMQ
2. Initialize a puzzle finder (see Step 5)
3. Wait for a string/image
4. if string equals 'test', Return 'ok'
5. else if string equals 'end', Exit
6. else
 - Find the puzzle in the image
 - Return the puzzle as a string
7. Loop to 3
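The dispatch portion of that loop can be sketched separately from imageZMQ, so it can be exercised without a network. Here, find_puzzle stands in for the Step 5 puzzle finder, and the dummy lambda returning 81 zeros is purely illustrative; the imageZMQ calls under the main guard follow the library's ImageHub API.

```python
def handle_request(text, image, find_puzzle):
    """Return (reply, keep_running) for one client request."""
    if text == "test":
        return "ok", True       # liveness check from the Pi
    if text == "end":
        return "", False        # client says it is finished
    return find_puzzle(image), True

if __name__ == "__main__":
    import imagezmq
    hub = imagezmq.ImageHub()   # binds and waits for clients
    running = True
    while running:
        text, image = hub.recv_image()
        reply, running = handle_request(text, image,
                                        lambda img: "0" * 81)  # stand-in finder
        hub.send_reply(reply.encode("utf-8"))
```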
I've attached the Python script that implements the algorithm.
Downloads
Solve a Sudoku Puzzle
I wanted to find an existing, robust Sudoku puzzle solver written in Python. Naturally I started with an internet search. I quickly found this Wikipedia article that led to another article that led to this web page that led to this Python code.
I admit that I don't really understand how the algorithm works. But I didn't really try to do so, given my desire for rapid progress on the overall project. I confirmed that it worked by testing the solve_sudoku.py script on my MacBook. I copied the script to the Raspberry Pi, and it worked with acceptable performance.
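For readers who want a self-contained picture of what a Sudoku solver does, here is a minimal brute-force backtracking sketch; this is NOT the algorithm from the linked code, just a compact illustration of the problem being solved (0 marks an empty cell).

```python
def valid(grid, r, c, n):
    """Check whether n can legally go at row r, column c."""
    if n in grid[r] or any(grid[i][c] == n for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)          # top-left of the 3x3 block
    return all(grid[br + i][bc + j] != n for i in range(3) for j in range(3))

def solve(grid):
    """Solve a 9x9 grid in place; return True on success."""
    for i in range(81):
        r, c = divmod(i, 9)
        if grid[r][c] == 0:                      # first empty cell
            for n in range(1, 10):
                if valid(grid, r, c, n):
                    grid[r][c] = n
                    if solve(grid):
                        return True
                    grid[r][c] = 0               # backtrack
            return False                         # dead end
    return True                                  # no empty cells left
```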
Explore the CNC Ecosystem
At this point, I had the software needed to:
- Pi: take a photo of the iPad screen and crop it
- Pi: send the cropped photo to the MacBook
- MacBook: receive the photo
- MacBook: find the Sudoku puzzle in the photo
- MacBook: return the puzzle to the Pi
- Pi: solve the puzzle
The remaining, and hardest, part was rendering the puzzle solution using the CNC machine. To achieve the desired goals for the robot, I had to learn a lot about what I'll call the "CNC ecosystem". In this Step, I'll discuss the important concepts and provide references for more detail.
CAD, CAM, and CNC
At a high level there are three major components in the CNC ecosystem:
- Computer Aided Design (CAD)
- Computer Aided Manufacturing (CAM)
- Computer Numerical Control (CNC)
CAD software tools allow a human to visually design a part. The tool produces a digital representation of the part. Examples include Autodesk Fusion and SOLIDWORKS.
CAM software tools use the CAD-produced digital representation of a part to produce the instructions needed to actually manufacture the part. Examples include Autodesk Fusion and SolidCAM. As is obvious from the examples, some tools provide both CAD and CAM capabilities.
CNC machines use the CAM-produced instructions to manufacture the part. An example is the SainSmart Genmitsu 3018 PROVer V2 used in this project.
Implications for the Sudoku-solving robot
There are no existing CAD or CAM tools applicable to the Sudoku-solving robot. To render the solution to a puzzle, a custom "CAD" component has to look at the solution and turn it into a set of movements to render the solution. Then a custom "CAM" component can turn the movements into commands for the CNC machine. The custom "CAM" component required me to learn a great deal more about CNC machines.
CNC Machines, GRBL, and G-code
Many modern hobby-class CNC machines have a GRBL-based controller coordinating the 3D movements necessary for manufacturing parts. GRBL typically runs on an Arduino-class microcontroller. A host system communicates with a GRBL controller using a serial protocol.
A GRBL controller accepts three classes of commands. Two classes of system commands "do things like control machine state, report saved parameters or what Grbl is doing, save or [report] machine settings". The third class of commands, G-code, tells the CNC machine how to move. Many references for G-code are available online.
In this project, I'm not manufacturing a part. I am simply leveraging the 3D movement capabilities of a CNC machine to solve a Sudoku puzzle by tapping on an iPad. I determined that describing the simple linear movements needed for rendering a Sudoku puzzle solution requires only a small subset of GRBL commands. However, it was important to identify and understand how to use those commands properly.
Basic G-code commands required
The first set of commands establishes a working environment. They are modal commands, and thus need to be issued only once per power-on cycle.
- G17 sets the working plane to XY
- G21 sets the measurement units to millimeters
- G90 sets absolute coordinates for movement
- G92 X0 Y0 Z0 sets the current location as the workspace origin; subsequent absolute movements are relative to that origin
I used the XY plane because the iPad is in that plane. I used millimeters (not inches) because most of the world does so. I used absolute coordinates because I felt doing so would make the custom CAM much simpler than using the alternative, which is relative coordinates.
The next set of commands defines actual movement. Movement can be at fast speed or feed speed. Fast speed means as fast as a machine can move safely; it is generally used to move from one location to another when not doing any machining. Fast speed is set by the GRBL configuration, and is usually determined by the CNC machine manufacturer. Feed speed means the speed proper for the current machining. It gets set based on the needs of the type of machining taking place, and it is usually significantly slower than fast speed.
The following examples assume millimeters and absolute coordinates relative to the workspace origin.
- G00 causes movement to the indicated coordinates at fast speed
- G00 X50 moves on the X axis to X=50 mm; the Y and Z coordinates are unchanged
- G00 X50 Y30 moves on the X and Y axes simultaneously to X=50 mm, Y=30 mm; the Z coordinate is unchanged
- G00 X50 Y30 Z3 moves on all 3 axes simultaneously to X=50 mm, Y=30 mm, Z=3 mm
- G01 causes movement to the indicated coordinates at feed speed
- G01 X50 moves on the X axis to X=50 mm; the Y and Z coordinates are unchanged
- G01 X50 Y30 moves on the X and Y axes simultaneously to X=50 mm, Y=30 mm; the Z coordinate is unchanged
- G01 X50 Y30 Z3 moves on all 3 axes simultaneously to X=50 mm, Y=30 mm, Z=3 mm
- F sets the feed speed for movement; it is a modal command, so the feed speed stays the same until another 'F' command is issued
- F200 sets the feed speed to 200 mm/minute
- F1000 sets the feed speed to 1000 mm/minute
I'll discuss some additional GRBL commands required for the project in later steps.
Generate the G-code
As a novice regarding CNC machines, I felt it necessary to do a lot of testing, especially simulation, to learn more about GRBL commands and specifically G-code commands. I did some searching for tools that would help, but because I wanted to use the tools on macOS (cautious journey, remember), the choices were limited.
After looking at tool recommendations from SainSmart, the GRBL WiKi, and other sources, I identified a few tools that I found helpful. G-Code Q'n'dirty is a web-based tool that simulates the results of a set of G-code commands. NC Viewer is a web-based tool that allows you to load G-code files and simulate the result; you can also create a set of commands using the keyboard. OpenBuilds Control is an installable tool that allows you to load G-code files and simulate the result, or connect to a physical machine and send it G-code; it also has a nice "console" feature that allows keyboard entry of individual commands and examination of the returned string.
Design the custom "CAD"
Figure 1 shows the Sudoku app's puzzle screen, with the puzzle on the left and the number pad on the right. To solve the puzzle, you must tap an empty cell in the puzzle and then tap the correct number for that cell in the pad until no empty cells remain. From a CAD perspective, that means:
1. establish a working environment
2. move to the centroid of a puzzle cell
3. tap the cell (move down, then up)
4. move to the centroid of the proper number in the pad
5. tap the number
6. repeat from 2 until no empty cells remain
Figure 2 shows the movements needed to tap the first two empty cells in the unsolved puzzle. Assume the workspace origin is set at the lower left corner of the puzzle. The red dot in the puzzle identifies the centroid of the first empty cell, and the red dot in the pad identifies the centroid of the number (3) to put into the cell. The pink arrows designate the movements from the origin to the first empty cell, and then to the proper number. The green dot in the puzzle identifies the centroid of the second empty cell, and the green dot in the pad identifies the centroid of the number (1) to put into the cell. The light green arrows designate the movements from the previously tapped number to the second empty cell, and then to the proper number. Not shown are the taps that must be made.
This work provides a logical description of the movements, but the "CAM" requires a physical description, based on the puzzle screen geometry. I measured the dimensions of the puzzle on my iPad; it is 129x129 mm. Thus the centroid of any cell in the puzzle is
(14.33 * gx + 7.17, 14.33 * gy + 7.17), where (gx, gy) refers to a cell in the puzzle, from the bottom left of the puzzle
I took similar measurements for the number pad. The bottom left corner is at (139, 1) relative to the puzzle origin. The pad is 53x41 mm. Thus the centroid of any number in the pad is
(17.67 * px + 147.83, 13.67 * py + 7.83), where (px, py) refers to a number in the pad, relative to bottom left of the pad
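The two centroid formulas can be captured directly as small helper functions; the millimeter constants are the ones derived from the measurements above, with zero-based cell indices as in the text.

```python
def cell_centroid(gx, gy):
    """Centroid (mm) of puzzle cell (gx, gy), counted from the bottom
    left of the 129x129 mm puzzle; 14.33 mm = 129/9 per cell."""
    return 14.33 * gx + 7.17, 14.33 * gy + 7.17

def pad_centroid(px, py):
    """Centroid (mm) of number-pad position (px, py), relative to the
    puzzle origin; the 53x41 mm pad has its corner at (139, 1)."""
    return 17.67 * px + 147.83, 13.67 * py + 7.83
```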
Design the custom "CAM"
The custom "CAM" must take the movement coordinates and produce the appropriate G-Code. Using the basic G-code commands identified in the previous step, I manually generated the following G-code to render the solution for the two puzzle cells. It sets the current stylus location to the workspace origin, and so the stylus is at (0, 0, 0) before the execution of the 4th line. The commands move in the X and Y axes at fast speed. Movement in the Z axis up is at fast speed, but down is at feed speed. I felt slower down movements would be less likely to damage the iPad. NOTE: A ';' means the rest of a line is a comment.
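The G-code itself was an attachment, so the fragment below reconstructs its shape using the environment commands from Step 8. The XY coordinates are illustrative stand-ins, not the actual measured centroids, and the Z tap depth is an assumption.

```gcode
; establish the working environment (once per power-on)
G17 G21 G90         ; XY plane, millimeters, absolute coordinates
G92 X0 Y0 Z0        ; current stylus location becomes the workspace origin
F200                ; feed speed used for downward taps

; first empty cell and its number (illustrative coordinates)
G00 X21.50 Y93.15   ; fast move over the cell centroid
G01 Z-2             ; tap down at feed speed
G00 Z0              ; lift at fast speed
G00 X165.50 Y21.50  ; fast move over the number in the pad
G01 Z-2
G00 Z0

; the second cell/number pair follows the same pattern
```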
Figure 3 shows the simulation of this G-code using G-Code Q'n'dirty. The top of the figure shows an isometric view, where it is possible to see movement in all three axes. The bottom of the figure shows the XY plane with a scale that indicates the G-code worked as expected.
In any case, after the theoretical work on the custom "CAD" and "CAM" components, I felt comfortable that
- I understood the CNC ecosystem sufficiently, at least for now
- I could create Python scripts to do programmatically what I'd done manually
Implement CAD/CAM for the project
I decided to combine the CAD and CAM components, since it was easy to incorporate the CAM function into the CAD algorithm.
The implementation of the G-code generator for rendering a Sudoku puzzle solution assumes the following:
- the CNC machine working environment is established prior to using it
- all move coordinates get calculated using the puzzle/pad geometry measured on the iPad
- the stylus is initially positioned so that no possible XY movement produced by the generated G-code will cause the stylus to impact anything, except for taps on the iPad
- the unsolved puzzle (see Step 6) has been given to the generator to differentiate unsolved puzzle cells from the solved puzzle cells
The algorithm for the implementation is:
- if unsolved puzzle copy not present
- Return
- Put the solution into a 2D list, like the unsolved puzzle
- for each row in the unsolved puzzle
- for each cell in the row
- if the unsolved puzzle cell is empty
- Generate the G-code to move to the cell centroid
- Generate the G-code to tap
- Generate the G-code to move to the centroid of the number defined by the solution
- Generate the G-code to tap
- Return the G-code
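A condensed, testable sketch of that loop follows. The centroid arithmetic comes from the Step 9 measurements; the tap depth, the assumption that row 0 of the lists is the top row of the puzzle, and the 3x3 number-pad layout (1-2-3 across the top) are illustrative assumptions, not the project's actual values.

```python
# assumed pad layout: 1-2-3 on the top row, 7-8-9 on the bottom row
PAD_POS = {n: ((n - 1) % 3, 2 - (n - 1) // 3) for n in range(1, 10)}

def cell_centroid(gx, gy):
    # formula from Step 9: 129x129 mm puzzle, 14.33 mm cells
    return 14.33 * gx + 7.17, 14.33 * gy + 7.17

def pad_centroid(px, py):
    # formula from Step 9: pad corner at (139, 1) mm
    return 17.67 * px + 147.83, 13.67 * py + 7.83

def tap(x, y):
    # move over the target at fast speed, tap down at feed speed, lift
    return [f"G00 X{x:.2f} Y{y:.2f}", "G01 Z-2", "G00 Z0"]

def solution_gcode(unsolved, solved):
    """G-code tapping every originally empty cell and its solution digit."""
    gcode = []
    for row in range(9):
        for col in range(9):
            if unsolved[row][col] == 0:
                gy = 8 - row  # assumption: row 0 of the lists is the top row
                gcode += tap(*cell_centroid(col, gy))
                gcode += tap(*pad_centroid(*PAD_POS[solved[row][col]]))
    return gcode
```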
I've attached the final Python script that contains the implementation. The GenerateGcode class separates initialization, which needs to be done only once, from the generate_solution_gcode method, which can be called multiple times. The class has a few other important goodies:
- the copy_puzzle method for copying the unsolved puzzle prior to generating the solution
- the generate_new_game_gcode method for creating the G-code to tap the New game button on the Sudoku app New Game screen
- some GRBL commands necessary for configuration of the CNC machine
- the main function for testing
I tested the implementation on the MacBook using the main function. I then ran a simulation of the generated G-code on OpenBuilds Control. Figure 4 shows a view of the XY plane. It is relatively easy to see the location of the tapped cells in the puzzle and the tapped numbers in the pad. Figure 5 shows an isometric view and you can see the Z axis movement (in a light yellow).
Once I'd verified the script worked, I copied it to the Raspberry Pi.
Downloads
Drive the CNC Machine
Internet searches led to conflicting results regarding whether I'd need to install a CH341SER-compatible device driver in Raspberry Pi OS to drive the SainSmart CNC machine; I definitely had to for macOS. I crossed my fingers and plugged the machine into the Raspberry Pi and turned on the controller; I found /dev/ttyUSB0 in RPi OS. No special driver needed!
While looking at the GRBL Wiki, I found a link to a script that sends G-code files to a CNC machine. I adapted the code to send individual commands rather than a file. The following code snippet shows how to open a port to the GRBL controller, how to send most commands to the controller, and how to close the connection.
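Since the adapted snippet is attached rather than shown, here is a sketch of its shape, assuming the pyserial library and the /dev/ttyUSB0 device found above; the response loop reads lines until GRBL answers 'ok' or 'error'.

```python
# Sketch: open a serial port to the GRBL controller, wake it, send one
# command at a time, and close. Assumes pyserial and /dev/ttyUSB0.
import time

def frame_cmd(cmd):
    """GRBL expects one ASCII command per newline-terminated line."""
    return (cmd.strip() + "\n").encode("ascii")

def send_cmd(port, cmd):
    """Send one command; collect response lines until 'ok' or 'error'."""
    port.write(frame_cmd(cmd))
    lines = []
    while True:
        line = port.readline().decode("ascii").strip()
        lines.append(line)
        if line == "ok" or line.startswith("error"):
            return lines

if __name__ == "__main__":
    import serial
    port = serial.Serial("/dev/ttyUSB0", 115200, timeout=2)
    port.write(b"\r\n\r\n")      # wake up the GRBL controller
    time.sleep(2)
    port.reset_input_buffer()    # discard the startup banner
    print(send_cmd(port, "G21"))
    port.close()
```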
To get a report of the machine state insert the following snippet:
To get a position report insert the following snippet:
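Both report requests revolve around GRBL's real-time '?' status query, which on GRBL 1.1 returns a line like '<Idle|MPos:10.000,5.000,-2.000|FS:0,0>' containing the machine state and position together. A sketch of requesting and parsing it (the serial handling again assumes pyserial):

```python
def parse_status(report):
    """Split a GRBL 1.1 status report into (state, machine_position)."""
    fields = report.strip().strip("<>").split("|")
    state = fields[0]                      # e.g. 'Idle', 'Run'
    mpos = None
    for field in fields[1:]:
        if field.startswith("MPos:"):
            mpos = tuple(float(v) for v in field[5:].split(","))
    return state, mpos

if __name__ == "__main__":
    import serial
    port = serial.Serial("/dev/ttyUSB0", 115200, timeout=2)
    port.write(b"?")                       # real-time query; no newline needed
    report = port.readline().decode("ascii")
    state, mpos = parse_status(report)
    print(state, mpos)
    port.close()
```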
CNC machine software for the project
At this point I felt I understood the requirements well enough to design a Python class to represent the CNC machine. I've attached the final implementation. Some highlights about the CncMachine class:
- the __init__ method opens the Python serial device to communicate with the GRBL controller, and also sends the wakeup command seen in the snippets above
- the send_cmd method is identical to the function in the snippets above; there are multiple variations of send_cmd with varying treatments of the response to the command
- the wait_for_idle method issues position report commands until the machine is no longer moving; this ensures that the machine is stopped for photos
- the configure method issues the commands needed to configure the machine appropriately to render the puzzle solution
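A skeleton of how those highlights fit together; the method bodies here are my illustration, not the attached implementation:

```python
import time

class CncMachine:
    """Skeleton of the class highlights above; bodies are illustrative."""

    def __init__(self, send):
        # 'send' is any callable that transmits a command and returns the
        # reply; the attached implementation wraps the pyserial device and
        # also issues the GRBL wakeup sequence here.
        self.send = send

    def state(self) -> str:
        """Return the machine state from a '?' status report."""
        report = self.send("?")  # e.g. "<Run|MPos:1.000,2.000,0.000|FS:0,0>"
        return report.lstrip("<").split("|", 1)[0]

    def wait_for_idle(self, poll_s=0.2):
        """Issue status queries until the machine reports it is no longer
        moving, so photos are taken with the gantry stopped."""
        while self.state() != "Idle":
            time.sleep(poll_s)
```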
Build the Hardware: Part 2
At this point, I had the software needed to:
- Pi: take a photo of the iPad screen and crop it
- Pi: send the cropped photo to the MacBook
- MacBook: receive the photo
- MacBook: find the Sudoku puzzle in the photo
- MacBook: return the puzzle to the Pi
- Pi: solve the puzzle
- Pi: generate the G-code to render the solution
- Pi: send G-code to the CNC machine to render the solution
In fact, I could already let the CNC machine make all the movements necessary to render a solution: with the custom stylus not yet installed, there was no possibility of a crash, especially into the iPad. It was time to install the custom stylus.
Custom Stylus
I inserted the stylus into the collet. I had to move the carriage to provide enough space to maneuver the stylus/collet into the spindle. I then tightened the collet nut by hand. You can see the custom stylus in figure 1 of Step 2.
Testing revealed that taps on the iPad using the custom stylus worked intermittently. Another surprise. I determined this was at least partly due to inadequate grounding.
Ground wire
I used solid and stranded wire to create a ground path between the custom stylus and the Raspberry Pi. I stripped some solid wire and soldered it to the stylus. I used some stranded hookup wire to connect that solid wire to a ground pin on the Raspberry Pi GPIO header. I had to ensure that the wire was attached to the support frame and spindle so that it did not interfere with movement or taking photos. You can see the ground wire in figure 1 of Step 2.
NOTE: Before the ground wire, the tap failure rate was as high as 25%. The ground wire helped, dropping the failure rate below 2.5%, but it did not eliminate failed taps. The nature of tap failures is still not understood. It could be a grounding problem or electrical noise. It could be mechanical noise, e.g., flexing of the soft touch pad on the stylus; this could produce a double tap that negates the original tap.
Fix Shortcomings
Finally, everything mostly worked. The implementation met the original goals outlined in the Introduction, minus the unanticipated tap failures. But I felt there were actually four shortcomings in the implementation:
- First, and worst, is that tap failures still occurred.
- Second, I relied on manual positioning to prepare the robot for solving a puzzle.
- Third, I wanted to be able to start the robot from either the New Game screen or the puzzle screen.
- Fourth, and least, I hypothesized that occasionally someone might want to solve more than one puzzle without having to start all over.
Tap failures
With no known way to eliminate tap failures, the only fix possible was to make another attempt to solve the puzzle.
When a puzzle gets solved, the Sudoku app displays the New Game screen; when a puzzle remains unsolved, the app continues to display the puzzle screen. I decided that, after the puzzle should have been solved, the robot could take a photo of the screen and examine only the left side of the puzzle to determine whether the puzzle really got solved. A "blue" screen means it got solved, while a "white" screen means it did not. Once the solution state gets determined, the robot can either try again to solve the puzzle or claim victory.
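A sketch of that color check, assuming the cropped photo is available as an H x W x 3 RGB NumPy array (the blue-dominance margin is a guess, not the project's actual threshold):

```python
import numpy as np

def puzzle_solved(photo: np.ndarray) -> bool:
    """Heuristic from the text: after a solve attempt, the app's New Game
    screen is predominantly blue, while the unsolved puzzle screen is
    near-white. Only the left side of the frame is examined."""
    left = photo[:, : photo.shape[1] // 2]       # left half of the frame
    r, g, b = left.reshape(-1, 3).mean(axis=0)   # per-channel mean color
    return bool(b > r + 30 and b > g + 30)       # clearly blue-dominant
```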
Manual positioning
Part of the CNC ecosystem not discussed above is the concept of the CNC machine home, which is "a known and consistent position on a machine". A good reference is here. A homing cycle is a way of finding home without human intervention; a homing cycle requires limit switches. The Genmitsu 3018 PROVer V2 has limit switches! Thus, in theory, I could introduce a homing cycle and eliminate the need for manual positioning. This video gave me some confidence that I could turn the theory into reality; it also happens to be an interesting discussion of G-code coordinate systems.
This offers a bit more detail about the homing cycle. In particular it discusses the homing direction mask, which allows the machine to home to any of the 8 corners of its working volume. After thinking and testing, I decided to go with the top left corner, as it minimizes the chance of collision during the homing cycle, and minimizes movement needed to establish the workspace origin after the homing cycle.
Implementing the top left required setting $23=1. I simply issued that as a GRBL command. The GRBL controller retains the setting, so I only needed to do it once. Starting a homing cycle requires sending the proper command. So, CNC machine configuration has to include the '$H' command.
Not surprisingly, the home cycle leaves the stylus at home, i.e., at the top left corner of the machine working volume. I had to find the movements necessary to move the stylus to the lower left corner of the Sudoku puzzle to create a workspace origin for rendering a puzzle solution. The movements also became part of the CNC machine configuration.
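Putting those configuration pieces together, a sketch of the startup command sequence; the XY offsets are placeholders (the real move from home to the puzzle corner depends on the physical setup), and G92 stands in for whichever work-offset command the attached implementation actually uses:

```python
HOMING_DIRECTION = "$23=1"  # one-time GRBL setting: home toward the top left

def startup_gcode(dx=-250.0, dy=-150.0):
    """Commands issued at startup: run the homing cycle, move the stylus
    from home to the puzzle's lower left corner, and make that point the
    workspace origin. The dx/dy offsets are placeholders."""
    return [
        "$H",               # homing cycle (requires limit switches)
        f"G0 X{dx} Y{dy}",  # rapid move from home toward the puzzle corner
        "G92 X0 Y0 Z0",     # zero the work coordinates at that corner
    ]
```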
Start screen determination
Fortunately, to address this shortcoming, I could leverage part of the tap failure solution. I reused the detection of which screen is showing at the start of puzzle solving to either start solving immediately, or to first tap the New game button.
Multiple puzzles
This was the easiest to solve. I just inserted a loop with a prompt to indicate "start a new puzzle" or "quit".
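That loop can be sketched as follows, where `solve_one` stands in for the whole solve pipeline and the prompt wording is mine:

```python
def run(solve_one, ask=input):
    """Solve puzzles until the user chooses to quit."""
    while True:
        solve_one()
        reply = ask("Start a new puzzle? [y/q] ").strip().lower()
        if reply == "q":
            break
```

Injecting `ask` instead of calling `input` directly keeps the loop testable without a terminal.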
Finish the Robot
With the shortcoming fixes identified, I was able to finish the Sudoku puzzle solution rendering software.
Final software for the Raspberry Pi
This is the final algorithm for solving a Sudoku puzzle with a CNC machine. It reflects the changes required to adapt to the various surprises, the "aha! moments", and fixes identified during development and testing. In other words, it is the end of the cautious, meandering path from concept to reality.
1. Initialize network communication with the MacBook server
2. Initialize serial communication with the CNC machine (see Step 10)
3. Initialize the G-code generator (see Step 9)
4. Initialize the camera (see Step 4)
5. Configure the CNC machine
6. Find the visible screen: New Game or Puzzle
7. Start New Game? If so ...
  a. New Game screen visible? If so ...
    - Generate G-code to tap the New Game button
    - Send the G-code to the CNC machine to tap the New Game button
  b. Move to allow an unobscured photo
  c. Puzzle solved? If not ...
    - Take photo of puzzle; crop photo
    - Send the photo to the MacBook puzzle server and get the puzzle
    - Solve the puzzle
    - Generate the G-code required to render the solution
    - Send the G-code to the CNC machine to render the solution
    - Determine if the puzzle got solved
    - Loop to 7.b
  d. New Game screen is visible
  e. Loop to 7
The script for the final version of the software is attached.
Final software for the MacBook
The Python scripts for the puzzle server running on the MacBook are described in Steps 5 and 6.
Post project analysis
The initial ideas for the project came to mind around the fourth week of September. I finished the project in the first week of December, for a duration of 10 weeks. But I took two trips, each about a week long, during which I did nothing related to the project. That leaves 8 weeks, or about 2 months. I also spent a few days on other projects. So, overall, I definitely met the 2 month goal!
I also had a lot of fun. I certainly learned a great deal. I used some of my existing parts inventory, but I did have to spend a chunk of money on the CNC machine.
All things considered, I call it a win!
Do a Demo
Of course a demonstration of the robot is mandatory. The attached video shows the complete cycle, from starting the sudoku_render.py script to completion of the solution rendering. Good news or bad news, you can see an example of what appears to be a double tap in the video, resulting in a failed rendering on the first try, and the retry that leads to eventual success.