Light Up Bot! Use Object Detection to Turn Your Lights On
by hazal mestci in Circuits > Cameras
This tutorial uses the Viam vision service with your computer’s built-in webcam to detect the presence of a person and turn on a lamp when you sit down at your desk.
You can turn it into a night light for reading books, a security robot that alerts you when a person is close by, or a bathroom light that only activates when people enter; the opportunities are endless.
This project is a great place to start if you are new to building robots because the only hardware it requires in addition to your computer is a smart plug or smart bulb.
Supplies
Hardware requirements
You need the following hardware for this tutorial:
- Computer with a webcam
- This tutorial uses a MacBook Pro, but any computer running macOS or 64-bit Linux will work
- Mobile phone (to download the Kasa Smart app)
- Either a smart plug or bulb:
- Kasa Smart Wi-Fi Plug Mini
- (This is what we used for this tutorial)
- Kasa Smart Light Bulb
- Table Lamp Base or similar
Software requirements
You will use the following software in this tutorial:
- Python 3.8 or newer
- viam-server
- Viam Python SDK
- The Viam Python SDK (software development kit) lets you control your Viam-powered robot by writing custom scripts in the Python programming language. Install the Viam Python SDK by following these instructions.
- Project repo on GitHub
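The Viam Python SDK installs as a standard Python package. Assuming you install it with pip, the command is typically:
pip install viam-sdk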
Install viam-server and Connect to Your Robot
In the Viam app, add a new machine, then follow the setup instructions to install viam-server on your computer and connect to the Viam app.
Configure the Camera Component
Once connected, navigate to your machine’s page in the app and click on the CONFIGURE tab.
First, add your personal computer’s webcam to your robot as a camera by creating a new component with type camera and model webcam:
Click the + icon next to your machine part in the left-hand menu and select Component. Select the camera type, then select the webcam model. Enter cam as the name and click Create.
In the configuration panel, click the video path field. If your robot is connected to the Viam app, you will see a dropdown populated with available camera names.
Select the camera you want to use. If you are unsure which camera to select, select one, save the configuration, and go to the Control tab to confirm you can see the expected video stream. On the Control tab, click on the dropdown menu labeled camera and toggle the feed on. If you want to test your webcam’s image capture, you can click on Export screenshot to capture an image.
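For reference, the raw JSON configuration for the camera component ends up looking roughly like the sketch below. The video_path value shown here is only an assumption; yours will differ depending on your computer and webcam:
{
  "name": "cam",
  "model": "webcam",
  "type": "camera",
  "attributes": {
    "video_path": "video0"
  }
}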
Configure Your Services
This tutorial uses a pre-trained machine learning (ML) model from the Viam registry named EfficientDet-COCO. This model can detect a variety of objects, which you can find in the provided labels.txt file.
If you want to train your own model instead, follow the instructions to train a model.
Stay on your machine's CONFIGURE tab to add the following services.
Configure the ML model service
The ML model service allows you to deploy a machine learning model to your robot.
- Click the + icon next to your machine part in the left-hand menu and select Service.
- Select the ML model type and the TFLite CPU model, then enter people as the name of your ML model service and click Create.
In the new ML Model service panel, configure your service.
Select Deploy model on robot for the Deployment field, then select the viam-labs:EfficientDet-COCO model from the Models dropdown.
Configure an mlmodel detector
The vision service uses the deployed ML model alongside input from a camera to detect people:
- Click the + icon next to your machine part in the left-hand menu and select Service.
- Select the vision type and select ML model as the model, then enter myPeopleDetector as the name of your service and click Create.
- In the new vision service panel, select the people ML model you just deployed from the ML Model dropdown.
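If you check the raw JSON for this vision service, its attributes should end up roughly like the following minimal sketch, which points the detector at the people ML model service:
{
  "mlmodel_name": "people"
}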
Configure the detection camera
To test that the vision service is working, add a transform camera, which will draw bounding boxes and labels around the objects the service detects.
Click the + icon next to your machine part in the left-hand menu and select Component. Select the camera type, then select the transform model. Enter detectionCam as the name and click Create.
In the new transform camera panel, replace the attributes JSON object with the following object, which specifies the camera source that the transform camera will use and defines a pipeline that adds the myPeopleDetector detections:
{
  "source": "cam",
  "pipeline": [
    {
      "type": "detections",
      "attributes": {
        "detector_name": "myPeopleDetector",
        "confidence_threshold": 0.5
      }
    }
  ]
}
Click Save at the top right corner of the screen.
Set Up the Kasa Smart Plug
1. Plug your smart plug into any power outlet and turn it on by pressing the white button on the plug. To connect the plug to your Wi-Fi network, download the Kasa Smart app from the App Store or Google Play to your mobile phone. When you first open the app, you will be prompted to create an account. During registration, you will receive an email with the subject line “TP-Link ID: Activation Required”; follow its instructions to complete your account registration.
2. Follow the steps in Kasa’s setup guide to add your device and connect it to your Wi-Fi network. Once it is connected, you will no longer need the mobile app.
3. Open a terminal on your computer and run the following command to install the smart plug Python API:
pip3 install python-kasa
4. Run the following command to return information about your smart device:
kasa discover
5. This command outputs information about the smart devices on your network, including each device’s host address.
6. Write down or save the host address (for example, 10.1.11.221). You will need to include it in your Python code in a later step.
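Before wiring the plug into the robot code, you can sanity-check that python-kasa can reach it. The following is a minimal sketch, assuming the example host address 10.1.11.221 from above; substitute your own:

import asyncio
from kasa import SmartPlug

PLUG_HOST = "10.1.11.221"  # replace with the host address from `kasa discover`

async def main():
    plug = SmartPlug(PLUG_HOST)
    await plug.update()        # fetch the plug's current state
    print("Plug is on:", plug.is_on)
    await plug.turn_on()       # or: await plug.turn_off()

asyncio.run(main())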
Write Python Code to Control Your Object Detection Robot
Now that you have your machine configured and your Kasa plug set up, you are ready to set up the code for the logic of the robot. The files used in this section can all be found in the GitHub repo for this project.
Create the main script file
On your computer, navigate to the directory where you want to put the code for this project. Create a file there called lightupbot.py. This will be the main script for the machine. Copy the entirety of this file and paste it into your lightupbot.py file. Save lightupbot.py.
Connect the code to the robot
You need to tell the code how to access your specific robot (which in this case represents your computer and its webcam).
- Navigate to the CONNECT tab on the Viam app. Make sure Python is selected in the Language selector.
- Get the robot address and API key from the code sample and set them as environment variables or add them at the top of lightupbot.py.
API key and API key ID: By default, the sample code does not include your machine’s API key and API key ID. We strongly recommend that you store your API key and API key ID as environment variables and import them into your development environment as needed. To show your machine’s API key and API key ID in the sample code, toggle Include secret on the CONNECT tab’s Code sample page.
Caution: Do not share your API key or machine address publicly. Sharing this information could compromise your system security by allowing unauthorized access to your machine, or to the computer running your machine.
- You also need to tell the code how to access your smart plug. Add the host address (for example, 10.1.11.221) that you found in the kasa discover step to line 55 of lightupbot.py. The sketch after this list shows where these values fit in the script.
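The full lightupbot.py in the project repo is what you should actually run. As a rough orientation only, its core logic looks something like this simplified sketch; the placeholder credentials, the confidence threshold, and the one-second polling interval are assumptions, not the exact contents of the repo file:

import asyncio
from kasa import SmartPlug
from viam.robot.client import RobotClient
from viam.services.vision import VisionClient

ROBOT_ADDRESS = "<your machine address from the CONNECT tab>"
API_KEY = "<your API key>"
API_KEY_ID = "<your API key ID>"
PLUG_HOST = "10.1.11.221"  # the host address from `kasa discover`

async def connect():
    # Authenticate to your machine with the API key from the CONNECT tab
    opts = RobotClient.Options.with_api_key(api_key=API_KEY, api_key_id=API_KEY_ID)
    return await RobotClient.at_address(ROBOT_ADDRESS, opts)

async def main():
    machine = await connect()
    detector = VisionClient.from_robot(machine, "myPeopleDetector")
    plug = SmartPlug(PLUG_HOST)
    try:
        while True:
            # Ask the vision service for detections from the webcam named "cam"
            detections = await detector.get_detections_from_camera("cam")
            found = any(d.class_name.lower() == "person" and d.confidence > 0.5
                        for d in detections)
            await plug.update()
            if found:
                print("This is a person!")
                print("turning on")
                await plug.turn_on()
            else:
                print("There's nobody here")
                print("turning off")
                await plug.turn_off()
            await asyncio.sleep(1)
    finally:
        await machine.close()

asyncio.run(main())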
Run the code
Now you are ready to test your robot!
From a command line on your computer, navigate to the project directory and run the code with this command:
python3 lightupbot.py
If the camera detects a person, it will print to the terminal “This is a person!” and turn on the smart plug. If it does not find a person, it will write “There’s nobody here” and will turn off the plug.
Try moving in and out of your webcam’s field of view. You will see your light turn on and off as the robot detects you!
Your terminal output should look like this as your project runs:
python3 lightupbot.py
This is a person!
turning on
There's nobody here
turning off
You can detect any object listed in the labels.txt file (such as a dog or a chair), but for this tutorial we are detecting a person.
To detect something else with the camera, change the string “person” on line 46 of lightupbot.py to a different label from the labels.txt file.
if d.class_name.lower() == "person":
print("This is a person!")
found = True
Next Steps
In this tutorial, you learned how to build an object detection robot that turns your lights on using Viam. You could use this same concept to build a smart fan that only turns on if you are sitting at your desk working, turn on the lights in your bathroom mirror only when you are in front of the sink, or activate a pet feeder every time your cat looks at the camera.
To turn this robot into a security alert system, have a look at this tutorial: Build a Person Detection Security Robot That Sends You a Photo of the Person Stealing Your Chocolates, which I also wrote!