Alpha-Chess: the Ultimate Cheat Code
by ElectraFlame in Design > Software



**Note: The project is currently in the form of a software tool; development of turning it into a wearable, barely recognizable bot is in progress.**
Ever lost to your friend in a chess tournament? Frustrating, I know. What if you wore a cap at the tournament, claiming you have sun sickness (if that's a thing), and the whole time, hidden underneath it, sat the ULTIMATE CHEAT CODE!! Yes, that's the bot, and it is barely noticeable by anyone. And who is here to help you with it? Me, ElectraFlame. Wait, is the bot trained by me? Partly. And how will it recognize moves? Well, we have our own special guest whom I'll introduce to you later on. But hey, a hint won't stop your brain from working out who it might be:
our special guest is a chess board with four squares and a fish on it.
Before we begin, I would like you to take a look at the following website, created by me and my personal AI, which describes the features of my project:
https://elecramadness.github.io/AlphaChess.github.io/
The main website is not fully deployed due to file-size handling issues, but to view the output you can use the video provided above and the code files.
Ready to turn the tables on your chess nemesis? Let's build the future of "totally legitimate" chess strategy together!
Disclaimer: Use responsibly and maybe don't actually cheat in real tournaments. But hey, practicing against friends? Fair game.
Planning the Project
First off, we need to plan our input and output methods, since we have to conceal our bot.
Input-Output planning
- First method: input and output as text. We type our move to the bot and the bot returns its move as text (a simple prompt system, most helpful if it stays a piece of software).
- Second method: input through voice commands. Either train your own RNN model to recognize only your commands (my PC would become a fried potato), OR use a ready-made model such as WHISPER to recognize voice input. THE CATCH: it recognizes literally everything in the background.
- Third and most difficult option: input using a camera. Yes, we shall be using computer vision to make my potato understand which moves are made and where.
The first two methods are easy to take on as software, so we shall explore them first; however, if we need to incorporate them into a wearable as promised, they are not feasible options.
If the provided code has indentation issues, don't worry, I shall provide the code in files at the end!!
Let's get into making the first part: textual conversation!
Executing Textual Conversation


I think now might be the right time to introduce our guest:
STOCKFISH v17.1!
So, since we are doing text-based conversation with our bot (Stockfish v17.1), the approach is pretty simple:
- load Stockfish into a PyCharm project (recommended)
- initialize Stockfish
- take text input
- feed it to Stockfish
- print the move returned by Stockfish
Installing Stockfish
You can install the Stockfish package from PyCharm directly, or download the engine from the website and place it in your project directory:
https://stockfishchess.org/download/
Writing the code
- import the necessary libraries: chess, chess.engine
- declare a variable that holds the path where the Stockfish engine is located, and initialize the engine
- set up the board
- ask the user which side they want to play: black / white (b/w)
- play the game
The code
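Since the full script is provided in the files at the end, here is only a minimal sketch of that flow. It assumes the python-chess package is installed and that the engine binary sits next to the script; the path and the one-second think time are placeholders you should adjust.

```python
import chess
import chess.engine

# Placeholder: point this at wherever your Stockfish binary actually lives
STOCKFISH_PATH = "stockfish.exe"

def main():
    engine = chess.engine.SimpleEngine.popen_uci(STOCKFISH_PATH)
    board = chess.Board()

    side = input("Choose your side (w/b): ").strip().lower()
    user_is_white = side != "b"

    while not board.is_game_over():
        user_turn = (board.turn == chess.WHITE) == user_is_white
        if user_turn:
            move_text = input("Your move (e.g. e2e4): ").strip()
            try:
                board.push_uci(move_text)          # raises ValueError on an illegal move
            except ValueError:
                print("Invalid move, try again.")
                continue
        else:
            # Let Stockfish think for one second and play its move
            result = engine.play(board, chess.engine.Limit(time=1.0))
            board.push(result.move)
            print("Stockfish plays:", result.move.uci())

    print("Game over:", board.result())
    engine.quit()

if __name__ == "__main__":
    main()
```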
Speech Based Approach


The speech-based approach is a bit more difficult, since my PC doesn't meet the hardware requirements for training a model on only my voice. So instead we are going to use a pretrained model, in our case Whisper. One major issue, as highlighted above, is that the system picks up not only our input but also the surroundings, so this approach may only work in clean, noiseless surroundings.
Make sure to install Whisper from the web as well as in PyCharm (the openai-whisper package also needs FFmpeg installed on your system).
To succeed in speech recognition we can follow these steps:
- import the Whisper model along with the other libraries
- download the small model
- use it to identify our speech
- transcribe the speech and feed it to Stockfish
- let the computer speak the move made by Stockfish through the speakers
The code for the same is as follows:
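Again, the full script is in the files at the end; the following is just a minimal sketch of the pipeline. It assumes the openai-whisper package, pyttsx3 for speaking the reply, and a short recording of the spoken move saved as move.wav (recording the audio is left out here); parse_spoken_move is the parsing helper described in the note below.

```python
import chess
import chess.engine
import whisper
import pyttsx3

STOCKFISH_PATH = "stockfish.exe"   # placeholder path to the engine
AUDIO_FILE = "move.wav"            # placeholder: a recording of the spoken move

def speak(text):
    # Say Stockfish's reply out loud through the speakers
    tts = pyttsx3.init()
    tts.say(text)
    tts.runAndWait()

def main():
    model = whisper.load_model("small")              # download/load the small Whisper model
    engine = chess.engine.SimpleEngine.popen_uci(STOCKFISH_PATH)
    board = chess.Board()

    # Transcribe the spoken move, e.g. "echo two echo four"
    spoken = model.transcribe(AUDIO_FILE)["text"]
    move_text = parse_spoken_move(spoken)            # helper shown in the note below
    board.push_uci(move_text)

    # Ask Stockfish for its reply and speak it
    result = engine.play(board, chess.engine.Limit(time=1.0))
    board.push(result.move)
    speak("My move is " + result.move.uci())
    engine.quit()

if __name__ == "__main__":
    main()
```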
**One major issue while writing the code is that the speech model transcribes moves as "e four", not "e4", and Stockfish cannot interpret what move that could be, so it returns an invalid move. To fix this, parsing can be used: declare dictionaries that map each number word to a digit (e.g. {"four": "4"}), check whether the transcribed string contains those words and convert them into a move Stockfish can accept, and use phonetic alphabet letters such as ALPHA for "a" and BRAVO for "b" (the approach I have used).**
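A minimal sketch of that parsing idea (the word tables and the helper name parse_spoken_move are my own; extend them to match whatever your model actually transcribes):

```python
# Map spoken words to the characters a UCI move is built from
PHONETIC_FILES = {
    "alpha": "a", "bravo": "b", "charlie": "c", "delta": "d",
    "echo": "e", "foxtrot": "f", "golf": "g", "hotel": "h",
}
NUMBER_WORDS = {
    "one": "1", "two": "2", "three": "3", "four": "4",
    "five": "5", "six": "6", "seven": "7", "eight": "8",
}

def parse_spoken_move(spoken: str) -> str:
    """Turn e.g. 'Echo two, echo four.' into the UCI move 'e2e4'."""
    move = ""
    for raw in spoken.lower().split():
        word = raw.strip(".,!?")
        if word in PHONETIC_FILES:
            move += PHONETIC_FILES[word]
        elif word in NUMBER_WORDS:
            move += NUMBER_WORDS[word]
        elif len(word) == 1 and word in "abcdefgh12345678":
            move += word          # sometimes the letter or digit comes through directly
    return move                   # expected to be four characters, e.g. 'e2e4'

print(parse_spoken_move("Echo two, echo four."))   # -> e2e4
```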
For the above code to work there need to be clear surroundings and a good microphone. The code uses Whisper to recognize the move and works fine, but the main problem is that the Whisper model sometimes fails to recognize the voice properly, so the move needs to be spoken clearly.
Computer Vision Integration

Considering the third option -
"True chess mastery isn't just about making moves; it's about understanding your opponent and planning strategically—here, that means grasping the computer vision that allows the PC to interpret the game."
- ElectraFlame -
So, is using computer vision to make your PC comprehend the game as easy as writing "sorry" in the air (totally relatable)?
No!! Here comes the actual obstacle: computer vision helps the PC identify the chessboard AS A WHOLE, meaning it won't be able to grasp the moves happening, because it treats the board as one single object.
The best approach I can take:
- get the computer to identify individual chess pieces
- get the computer to identify the board's corners
- make the computer divide the board into 64 squares
- determine the current board state
- identify what move was made by your opponent
- get our special guest to generate a counter move
Again, we can't just turn my potato PC into a fried one by training YOLOv5 locally, so we'll use Roboflow to definitely NOT fork datasets!! Choose a dataset that suits your chessboard, verify its pieces, and check that they match yours. Now that we have the dataset, upload your own images to Roboflow and train a model using the various training options Roboflow provides. A sketch of calling the trained model from Python is shown below.
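A rough sketch of querying your trained Roboflow model with the roboflow Python package; the API key, project ID, version number, and image name are all placeholders for your own values.

```python
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")                        # placeholder API key
project = rf.workspace().project("your-chess-project-id")    # placeholder project ID
model = project.version(1).model                             # placeholder version number

# Run the hosted model on a snapshot of the board and list the detected pieces
predictions = model.predict("board.jpg", confidence=40, overlap=30).json()
for det in predictions["predictions"]:
    print(det["class"], det["x"], det["y"], det["confidence"])
```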
With the training complete, let's move on to letting the PC divide our board into 64 squares. This step sets us up for coding the computer vision part.
WARNING: my PC has camera issues and cannot detect things properly, while everything works well on another device; the problem is that the demonstration was filmed on the device with the camera issues.
Now let's begin with the coding:
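Here is a minimal sketch of the board-splitting idea, assuming you already have the pixel coordinates of the four board corners (from your detector or clicked manually) and a list of piece detections with centre coordinates; OpenCV's perspective warp does the heavy lifting, and the square names assume white is at the bottom of the image.

```python
import cv2
import numpy as np

BOARD_SIZE = 800              # warped board is 800x800 px, so each square is 100 px
SQUARE = BOARD_SIZE // 8

def warp_board(frame, corners):
    """Warp the board to a top-down view.
    corners: four (x, y) points ordered top-left, top-right, bottom-right, bottom-left."""
    src = np.float32(corners)
    dst = np.float32([[0, 0], [BOARD_SIZE, 0],
                      [BOARD_SIZE, BOARD_SIZE], [0, BOARD_SIZE]])
    matrix = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, matrix, (BOARD_SIZE, BOARD_SIZE)), matrix

def point_to_square(x, y):
    """Map a pixel in the warped image to a square name like 'e4'."""
    file_idx = int(x // SQUARE)          # 0..7 -> files a..h
    rank_idx = 7 - int(y // SQUARE)      # top row of the image is rank 8
    return "abcdefgh"[file_idx] + str(rank_idx + 1)

def board_state(detections, matrix):
    """detections: list of (class_name, x, y) piece centres in the original frame."""
    state = {}
    for name, x, y in detections:
        # Transform the detection centre into warped-board coordinates
        px, py = cv2.perspectiveTransform(np.float32([[[x, y]]]), matrix)[0][0]
        state[point_to_square(px, py)] = name
    return state
```

Comparing the board state before and after the opponent's turn tells you which square was emptied and which square gained a piece, and that pair is the move you feed to our special guest.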
Making an App - WINDOWS
EHHH... we can use two ways: Python (tkinter, customtkinter, kivy, kivymd) or HTML (React.js).
I'm going for the HTML approach, served with Flask.
Let's plan the app first:
- integrate all three features into one app
- make it interactive
- idk
By planning the app I mean the layout; that part is up to you. I made mine basic and animated, so it's your choice how you design it.
We are going to make a website that contains all these features, so we shall develop a UI: a simple one, not too complex, yet functional.
We shall use Flask to serve our website locally.
folder structure -
alpha_chess_flask/
├── app.py
├── stockfish.exe
├── static/
│ ├── style.css
│ └── script.js
└── templates/
└── index.html
But remember to use your own Roboflow model API key, project ID, and version, as well as your own Stockfish location.
Since we have such a setup, it is now far easier to hook our three versions up to the JS front end.
You can take help from whichever personal AI tools you prefer to convert the Python code into JS where needed.
Flask comes in handy when we need to load models like Stockfish, Whisper, etc.: we import these in app.py, define functions that handle input and output in app.py itself, and update the JS to call those routes. A minimal example of app.py is sketched below.
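A minimal sketch of what app.py could look like for the text-based version; the /move route name and the one-second think time are my own choices, and the Whisper and Roboflow routes would follow the same pattern.

```python
import chess
import chess.engine
from flask import Flask, render_template, request, jsonify

app = Flask(__name__)

STOCKFISH_PATH = "stockfish.exe"      # placeholder: engine sitting next to app.py
engine = chess.engine.SimpleEngine.popen_uci(STOCKFISH_PATH)
board = chess.Board()

@app.route("/")
def index():
    # Serves templates/index.html, which loads static/style.css and static/script.js
    return render_template("index.html")

@app.route("/move", methods=["POST"])
def move():
    # The JS front end sends {"move": "e2e4"}; we answer with Stockfish's reply
    user_move = request.get_json().get("move", "")
    try:
        board.push_uci(user_move)
    except ValueError:
        return jsonify({"error": "invalid move"}), 400
    result = engine.play(board, chess.engine.Limit(time=1.0))
    board.push(result.move)
    return jsonify({"reply": result.move.uci()})

if __name__ == "__main__":
    app.run(debug=True)
```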
Deploy the App

To deploy our app, we shall use PythonAnywhere as the web hosting service.
First we upload all our files (or pull them from a repository), then open the hosting service's web tab, set up the web app to point at our Flask app, and done!! Our website is deployed at the given URL.
Making the Hardware Part
Well, since this Instructable would become too long and I am still developing the hardware, we are just going to discuss the plans for doing so.
BUT DON'T WORRY, I SHALL BE MAKING A SECOND INSTRUCTABLE FOR THE SAME.
We have the software; what does the hardware require? A proper way to conceal our project. To achieve that, we shall discuss three options:
- using a cap: hide the whole circuit underneath it and place the camera on the side
- making specs: glasses that display the move on an OLED in front (INSPIRED BY DARKFLAME, must see his projects too; also check our collaborated account, Binary_Flame)
- using ear pods: to speak our moves and have the counter moves transmitted to our ears
Stay tuned for part 2. THANK YOU for viewing this Instructable... hope you enjoyed it!