Build an AI Assisted Smart Trash System With VIAM

by shadinaguib



VIAM smart trash can

Every year, billions of tonnes of waste are generated worldwide, and much of it ends up in landfills because it is never sorted properly. This project tackles the sorting problem directly: a laptop camera and the Viam platform handle object detection, an ESP32 microcontroller receives the results, and LED strips light up over the correct bin to show where each item belongs.

Supplies

  • A computer capable of running viam-server. You can use a personal computer running macOS or Linux, or a single-board computer (SBC) running 64-bit Linux.
  • ESP32 microcontroller
  • LED strips (various colors)
  • Breadboard and jumper wires
  • Power supply for ESP32 and LEDs
  • Hugging Face account for accessing AI models
  • Arduino IDE

Setting Up the Object Detection Model

Configure your physical camera

Configure your webcam so that your machine can get the video stream from the camera:

  1. On the Viam app, navigate to your machine’s page and ensure the status dropdown next to your machine’s name is set to "Live".
  2. Click the + (Create) button, select "Component", and choose "camera / webcam". Name it my-camera.
  3. Select your webcam in the video path dropdown and save the changes.

Test your physical camera

Under the CONTROL tab, expand your camera’s panel and toggle "View my-camera" to check the video feed.
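
If you prefer to check the feed from code, here is a minimal sketch using the Viam Python SDK. It assumes the SDK is installed and uses placeholder credentials; substitute your machine's address and API keys from the Viam app (the automation script at the end of this guide follows the same pattern).

import asyncio

from viam.robot.client import RobotClient
from viam.components.camera import Camera


async def main():
    # Placeholder credentials -- copy your machine's address and API keys from the Viam app.
    opts = RobotClient.Options.with_api_key(
        api_key='your_api_key',
        api_key_id='your_api_key_id'
    )
    machine = await RobotClient.at_address('your-machine-address.viam.cloud', opts)

    # Grab a single frame from the webcam configured above.
    my_camera = Camera.from_robot(machine, "my-camera")
    image = await my_camera.get_image()
    print("Got a frame from my-camera:", type(image))

    await machine.close()


if __name__ == '__main__':
    asyncio.run(main())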

Configure the vision service

  1. In the CONFIGURE tab, click + (Create), select "Service", and choose "vision / yolov5". Name it yolo_trash_detector.
  2. In the service's attributes, set the model location:
{
  "model_location": "keremberke/yolov5m-garbage"
}

Click Save in the top right corner of the screen to save your changes.

Configure the objectfilter module

  1. Add an "objectfilter" camera component named objectfilter_cam and link it to your vision service.
  2. Enter attributes to display detection boxes and filter data:
{
  "display_boxes": true,
  "camera": "my-camera",
  "filter_data": true,
  "vision_services": [
    "yolo_trash_detector"
  ],
  "labels": [
    "biodegradable",
    "glass",
    "metal",
    "plastic"
  ],
  "confidence": 0.5
}
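
In short, the module wraps my-camera, runs the yolo_trash_detector vision service on each frame, draws the bounding boxes, and keeps only detections whose label appears in labels with a confidence of at least 0.5. As a rough illustration of that filtering rule (plain Python, not the module's actual code):

from collections import namedtuple

# Stand-in for a Viam detection, just for illustration.
Detection = namedtuple("Detection", ["class_name", "confidence"])

LABELS = {"biodegradable", "glass", "metal", "plastic"}
CONFIDENCE_THRESHOLD = 0.5


def keep(detection):
    """Return True if the detection's label is listed and it is confident enough."""
    return detection.class_name in LABELS and detection.confidence >= CONFIDENCE_THRESHOLD


print(keep(Detection("plastic", 0.72)))  # True
print(keep(Detection("paper", 0.90)))    # False: label not in the list
print(keep(Detection("metal", 0.30)))    # False: below the confidence threshold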


Test the detector

Now that the detector is configured, it’s time to test it!

  1. Navigate to the CONTROL tab.
  2. Click the objectfilter_cam panel to open your detector camera controls.
  3. Toggle View objectfilter_cam to the “on” position. This displays a live feed from your webcam with detection bounding boxes overlaid on it.
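
You can also query the detector through the Viam Python SDK. Below is a minimal sketch that asks the vision service for detections on a frame from the webcam; the address and API keys are placeholders, and the component names match the ones configured above.

import asyncio

from viam.robot.client import RobotClient
from viam.services.vision import VisionClient


async def main():
    # Placeholder credentials -- copy your machine's address and API keys from the Viam app.
    opts = RobotClient.Options.with_api_key(
        api_key='your_api_key',
        api_key_id='your_api_key_id'
    )
    machine = await RobotClient.at_address('your-machine-address.viam.cloud', opts)

    detector = VisionClient.from_robot(machine, "yolo_trash_detector")
    # Run detection on a frame from the webcam and print what the model sees.
    detections = await detector.get_detections_from_camera("my-camera")
    for d in detections:
        print(f"{d.class_name}: {d.confidence:.2f}")

    await machine.close()


if __name__ == '__main__':
    asyncio.run(main())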

ESP32 WebServer Setup


Hardware connection

Connect the LED strip to your ESP32 as outlined in the attached diagram, then upload the code below to start a web server on the board.

Code implementation

Implement the web server and handlers in the Arduino IDE:

#include "WiFi.h"
#include <Adafruit_NeoPixel.h>
#include <WebServer.h>


const char* ssid = "your_wifi"; // Replace with your Wi-Fi SSID
const char* password = "your_password"; // Replace with your Wi-Fi password


#define LED_PIN 13 // GPIO pin connected to the LEDs.
#define NUM_LEDS 150 // Number of LEDs in the strip


Adafruit_NeoPixel strip = Adafruit_NeoPixel(NUM_LEDS, LED_PIN, NEO_GRB + NEO_KHZ800);
WebServer server(80);


void setup() {
Serial.begin(115200);
WiFi.begin(ssid, password);


while (WiFi.status() != WL_CONNECTED) {
delay(500);
Serial.print(".");
}


Serial.println("");
Serial.println("WiFi connected");
Serial.println("IP address: ");
Serial.println(WiFi.localIP());


// Initialize the NeoPixel strip
strip.begin();
strip.show(); // Initialize all pixels to 'off'


// Setup web server handlers
server.on("/trash/command", []() {
if (server.hasArg("type")) {
String commandType = server.arg("type");
setLEDColors(commandType);
server.send(200, "text/plain", "LEDs updated based on " + commandType);
} else {
server.send(400, "text/plain", "Command type not specified");
}
});


server.on("/trash/off", []() {
for (int i = 0; i < strip.numPixels(); i++) {
strip.setPixelColor(i, strip.Color(255, 0, 0)); // Turn off
}
strip.show();
server.send(200, "text/plain", "LED is off");
});


server.begin();
}


void setLEDColors(String command) {
int oneThird = NUM_LEDS / 3;
uint32_t red = strip.Color(255, 0, 0); // Red
uint32_t green = strip.Color(0, 255, 0); // Green


uint32_t firstThirdColor, secondThirdColor, thirdThirdColor;


// Assign colors based on command
if (command == "biodegradable") {
firstThirdColor = green;
secondThirdColor = red;
thirdThirdColor = red;
} else if (command == "metal") {
firstThirdColor = red;
secondThirdColor = green;
thirdThirdColor = red;
} else if (command == "plastic") {
firstThirdColor = red;
secondThirdColor = red;
thirdThirdColor = green;
} else {
firstThirdColor = green;
secondThirdColor = red;
thirdThirdColor = red;
}


// Apply colors to each third
for (int i = 0; i < oneThird+4; i++) {
strip.setPixelColor(i, firstThirdColor);
}
for (int i = oneThird+6; i < 2 * oneThird+8; i++) {
strip.setPixelColor(i, secondThirdColor);
}
for (int i = 2 * oneThird+8; i < NUM_LEDS; i++) {
strip.setPixelColor(i, thirdThirdColor);
}
strip.show();
}


void loop() {
server.handleClient();
}

Test the server by navigating to http://<your_ip>/trash/off, using the IP address the ESP32 prints to the serial monitor. If all the LEDs turn red, your web server is working.
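
Before wiring in the detection logic, you can also exercise both endpoints from Python with aiohttp (the same library the automation script below uses). This is just a quick sanity check; the IP address is a placeholder.

import asyncio

import aiohttp

ESP32_IP = "192.168.1.159"  # Replace with the IP printed on the ESP32's serial monitor


async def main():
    async with aiohttp.ClientSession() as session:
        # Light the "plastic" segment...
        async with session.post(f"http://{ESP32_IP}/trash/command", params={"type": "plastic"}) as resp:
            print(await resp.text())
        await asyncio.sleep(3)
        # ...then switch everything back to the red "off" state.
        async with session.post(f"http://{ESP32_IP}/trash/off") as resp:
            print(await resp.text())


if __name__ == '__main__':
    asyncio.run(main())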

VIAM Code


Automation Script

The only thing left to do is run a script, built on the Viam Python SDK, that sends a command to the web server based on what the detector sees. To avoid false triggers, the same item has to be detected in three consecutive frames before the LEDs change:

import asyncio
import time
from collections import deque

import aiohttp
from viam.robot.client import RobotClient
from viam.components.camera import Camera
from viam.services.vision import VisionClient

WEBSERVER_IP = "192.168.1.159"  # Replace with your ESP32's IP address


async def connect():
    opts = RobotClient.Options.with_api_key(
        api_key='your_api_key',
        api_key_id='your_api_key_id'
    )
    return await RobotClient.at_address('smart-trash-main.0f960q0n3l.viam.cloud', opts)


async def configure_lights(class_name):
    """
    Turn the lights on and off based on the class_name, asynchronously.
    """
    print(f"Turning on lights for {class_name}")
    params = {'type': class_name}
    url_on = f"http://{WEBSERVER_IP}/trash/command"
    url_off = f"http://{WEBSERVER_IP}/trash/off"

    async with aiohttp.ClientSession() as session:
        start = time.time()
        print(time.time())
        async with session.post(url_on, params=params) as response:
            print(await response.text())
        await asyncio.sleep(3)
        print("Turning off lights")
        print(time.time() - start)
        async with session.post(url_off) as response:
            print(await response.text())


async def main():
    machine = await connect()
    print('Resources:', machine.resource_names)

    my_camera = Camera.from_robot(machine, "my-camera")
    yolo_trash = VisionClient.from_robot(machine, "yolo_trash_detector")
    detection_history = deque(maxlen=3)

    while True:
        print("Capturing image...")
        image = await my_camera.get_image()
        detections = await yolo_trash.get_detections(image)

        # Keep only confident detections from this frame
        current_frame_detections = {}
        for d in detections:
            if d.confidence > 0.5:
                current_frame_detections[d.class_name] = current_frame_detections.get(d.class_name, 0) + 1
                print(f"Detected: {d.class_name}")
                break  # Stop after the first valid detection

        for class_name in current_frame_detections:
            if class_name != "paper" and class_name != "cardboard":
                detection_history.append(class_name)

        print(f"Detection History: {detection_history}")
        if len(detection_history) == detection_history.maxlen:
            most_common = max(set(detection_history), key=detection_history.count)
            if detection_history.count(most_common) == detection_history.maxlen:
                await configure_lights(most_common)
                detection_history.clear()
                print("Resetting detection history and entering cooldown...")
                await asyncio.sleep(3)  # Cooldown period

    await machine.close()


if __name__ == '__main__':
    asyncio.run(main())

Now try it out! Run the script and hold different items in front of your webcam to see the system in action.

Conclusion

This setup demonstrates an efficient approach to using AI for sorting waste. By integrating various technologies, we streamline the recycling process, helping to reduce the environmental impact of waste.

Feel free to add new features or improve the system's accuracy by fine-tuning the AI model. Your feedback and contributions are welcome to enhance this project further.