Smart Line AI

by aaravsharma23 in Circuits > Raspberry Pi


Smart Line AI

IMG_6392.JPEG
Screenshot 2025-08-22 at 2.00.16 PM.png
Screenshot 2025-08-22 at 2.16.37 PM.png

I created Smart Line AI, a project that provides a detailed analysis of the lunch line in a school cafeteria. The analysis includes a graph showing the average wait time of the line at different timestamps. Say lunch at a school cafeteria runs from 12 pm - 2 pm: students will be queuing from 11.50 am until 2 pm to select their food at a kiosk, and the graph will display the average wait time of a student at 12.35 pm, ..., up to 1.15 pm.

This information is sent to the food provider, who can then take measures to reduce the average wait time for food!

The project uses a Raspberry Pi 5 connected to a Camera Module 3 with a Wide lens providing a 120-degree view.

Motivation in Creating Smart Line AI

The food provider for our school canteen recently changed, and there have been complaints about increased wait times for students to receive their meals. To help optimize this, I created this project: the data is shared with the food provider so they can find better ways to resolve the issue. The project was approved and installed with our campus's Head of IT Infrastructure, and it received recognition from the Head of College, who noted the project's progress.

Supplies

Raspberry Pi 5 with at least 2 GB of RAM (I used one with 4 GB)

Raspberry Pi Camera Module 3 Wide

Official Raspberry Pi Power Supply; buying the official supply is crucial, since not every USB-C output provides the correct current (USB-C computer chargers DO NOT work)

Wired USB Mouse

Wired USB Keyboard

Monitor (with power cable), or a small touch screen (3.5'' or 7'' works great)

Micro-HDMI to HDMI Cable

MicroSD Card with at least 16 GB of storage (I used one with 64 GB)

MicroSD Card USB Adapter

3D Printer, or access to one. I used a FlashForge Adventurer 5M Pro

UHU Adhesive Glue

Computer

Flashing the MicroSD Card With Raspberry Pi OS

Screenshot 2025-08-20 at 8.35.24 PM.png
  1. To install Raspberry Pi OS, insert your microSD card into a USB adapter and connect it to your computer.
  2. Download and install the Raspberry Pi Imager software from the official Raspberry Pi website.
  3. Open Imager and configure it with the settings shown in the image above.
  4. Click Write to flash the OS. This will take around 30 minutes.
  5. Once the process is complete, remove the microSD card, insert it into the microSD slot of your Raspberry Pi, and power it on — your Raspberry Pi will boot into the OS.

Raspberry Pi Setup

everything.png

A Raspberry Pi is essentially a low-spec computer, and it functions the same way as one: it still requires a monitor/screen and input devices (keyboard & mouse).

To configure this properly, connect both the keyboard and mouse to the Raspberry Pi via USB 3.0 (blue-colored ports). Afterward, insert the Micro-HDMI cable into the Raspberry Pi and the HDMI side into your monitor. Power on the Raspberry Pi, wait for a couple of seconds, and you will arrive at the setup page.

On this page, you will be prompted to enter admin information such as your Raspberry Pi username and password, your Wi-Fi SSID and password, as well as location and language preferences — fill in this information!

Wiring the Camera Into the Raspberry Pi 5

arducam_wiki_rpi_imx708_hardware_picture_1.png

Open the Raspberry Pi 5 camera port by gently pulling up the black plastic part. Once it is loose, insert the small end of the camera wire into the Raspberry Pi. Then, simply push the black plastic back into the same gap.

3D Render of Casing

Screenshot 2025-08-20 at 7.58.06 AM.png
Screenshot 2025-08-07 at 9.11.16 AM.png

I created this CAD model using Autodesk Fusion 360. The first image is a render, while the second can help you understand the specific tools I used. I also used Fusion 360 to analyze the strength of the model and make sure the printed part would hold up.

To view the 3D render of the casing, see the images above. They can also help you visualize possible changes for your own version of the project.

Safety in 3D Printing

ChatGPT Image Aug 22, 2025, 05_30_40 PM.png

When removing your 3D print from the bed after printing, PLEASE wait 5 minutes to allow the plate to cool down. This will prevent possible hazards, such as burns.

3D Printing the Casing

Screenshot (3).png

Please download all of the .STL files that I added in the Supporting Files section and load them into your slicer, such as Orca or FlashPrint. I would recommend Orca because you can customize the infill per region, meaning you can set it to 100% around the screw areas and lower it elsewhere. I would not recommend setting it to 100% everywhere, since that makes the module extremely heavy.

Check out the image where I configured this in Orca.

After the print is completed, remember to remove any supports.

Gluing

IMG_6398.JPEG

Glue the parts together using UHU adhesive glue. Start by gluing the top of the case to the bottom. If you have clamps, this is a good time to use them; otherwise, clips also work great.

Next, glue the casing to the mount using the same adhesive. I didn’t use clamps here, but there was a very large surface area for the glue to set firmly. UHU starts to set after 30 minutes, but I left it overnight to ensure it would be strong.

Installation

IMG_6404.JPG

When you are installing the module, make sure you drill 4 holes based on your screw size. Check the above picture to get a better understanding.

Setting Up VNC and Ensuring Camera Works

Screenshot 2025-08-22 at 2.21.32 PM.png

Currently, connecting to your Raspberry Pi requires setting up a monitor, keyboard, and mouse, which is a huge hassle. A simpler alternative is a tool called VNC, which lets you access your Raspberry Pi without a monitor, keyboard, or mouse: all you need is power.

To set this up, click on the Raspberry Pi logo, go into Raspberry Pi Configuration, go to Interfaces, and toggle the VNC tab on. After doing this, go to your computer and install RealVNC.

After enabling VNC, you need to find the IP address of your Raspberry Pi. You can look it up through the Raspberry Pi UI (hover the cursor over the Wi-Fi icon and the IP address will appear), run 'hostname -I' in a terminal on the Pi, or use an IP scanner on your network. Enter that address into RealVNC on your computer and type in your username and password. You will then have complete control through a headless setup.
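
Since this step is also about ensuring the camera works, a quick check is to capture a single still with Picamera2. This is a minimal sketch of mine, assuming Picamera2 is installed (it ships with Raspberry Pi OS); the filename is just an example:

# Quick camera check: capture one still to confirm the module is detected.
from picamera2 import Picamera2
import time

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration())
picam2.start()
time.sleep(2)  # give the sensor a moment to settle exposure
picam2.capture_file("camera_test.jpg")  # example filename
picam2.stop()
print("Saved camera_test.jpg")

If the image saves and looks sharp, the ribbon cable is seated correctly.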

Testing Data With Code

Screenshot 2025-08-22 at 11.44.18 AM.png

Now for the fun part - code. First, we need to collect the videos that we want to analyse. I recommend around 5 hours of footage. Feel free to use the code below, which collects the data and stores it in a local folder called 'videos'.

from picamera2 import Picamera2
from picamera2.encoders import H264Encoder
from picamera2.outputs import FfmpegOutput
from datetime import datetime, timedelta, time as dtime
import os
import time
import sys

def next_window(now):
    # Return the next 12:20-14:00 recording window: today's if it hasn't
    # ended yet, otherwise tomorrow's.
    today = now.date()
    start_today = datetime.combine(today, dtime(12, 20, 0))
    end_today = datetime.combine(today, dtime(14, 0, 0))

    if now < start_today:
        return start_today, end_today
    elif now < end_today:
        return now, end_today
    else:
        tomorrow = today + timedelta(days=1)
        start_tomorrow = datetime.combine(tomorrow, dtime(12, 20, 0))
        end_tomorrow = datetime.combine(tomorrow, dtime(14, 0, 0))
        return start_tomorrow, end_tomorrow

def main():
    out_dir = os.path.join(os.getcwd(), "videos")
    os.makedirs(out_dir, exist_ok=True)

    now = datetime.now()
    start_dt, end_dt = next_window(now)

    stamp = start_dt.strftime("%d%m%Y")
    filename = f"{stamp}_video.mp4"
    out_path = os.path.join(out_dir, filename)

    # Sleep until the recording window opens.
    wait_seconds = (start_dt - datetime.now()).total_seconds()
    if wait_seconds > 0:
        print(f"waiting until {start_dt.strftime('%Y-%m-%d %H:%M:%S')}...")
        try:
            time.sleep(wait_seconds)
        except KeyboardInterrupt:
            print("\nInterrupted while waiting. Exiting.")
            sys.exit(0)

    duration = max(0, (end_dt - datetime.now()).total_seconds())
    if duration == 0:
        print("[Scheduler] No time left in the window. Exiting.")
        return

    print(f"Starting capture at {datetime.now().strftime('%H:%M:%S')} "
          f"for {int(duration)} seconds (until {end_dt.strftime('%H:%M:%S')}).")
    print(f"[Recorder] Output: {out_path}")

    # Configure the camera for 1080p H.264 recording.
    picam2 = Picamera2()
    video_config = picam2.create_video_configuration(main={"size": (1920, 1080)})
    picam2.configure(video_config)

    encoder = H264Encoder(bitrate=10_000_000)
    output = FfmpegOutput(out_path)

    try:
        picam2.start_recording(encoder, output)
        time.sleep(duration)
    except KeyboardInterrupt:
        print("\n[Recorder] Interrupted. Stopping recording...")
    finally:
        try:
            picam2.stop_recording()
        except Exception:
            pass
        try:
            picam2.close()
        except Exception:
            pass

    print(f"Finished at {datetime.now().strftime('%H:%M:%S')}. Saved: {out_path}")

if __name__ == "__main__":
    main()
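
The script waits for the next lunch window on its own, so one way to record automatically every day (my own suggestion, not part of the original setup) is to launch it at boot with cron. Run 'crontab -e' on the Pi and add a line like the following; the paths are hypothetical, so adjust them to wherever you saved the script:

@reboot /usr/bin/python3 /home/pi/record_lunch.py >> /home/pi/record.log 2>&1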

Using the Model

Now add the video to your code directory and rename the file to something you can refer to. Copy the code below into your IDE (I am using PyCharm). When you add the code, it's quite likely that the library imports will have a red line underneath, meaning they're not installed. To install a package, run 'pip3 install package-name' in your terminal, replacing package-name with the name of the library, such as ultralytics.
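
If you'd rather install everything in one go, the packages used across this project's scripts should be roughly the following (these are the usual PyPI names; double-check them against the imports in your own code):

pip3 install ultralytics opencv-python pyyaml pandas matplotlib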


from ultralytics import YOLO
import cv2
import yaml
import math
import csv
from pathlib import Path

# Queue-line overlays: each is defined by a centre point (relative to the
# frame size), a length in pixels, an angle in degrees, and a thickness.
LINE1 = {"center_rel": (0.05, 0.60), "length": 850, "angle_deg": 333, "thickness": 4}
LINE2 = {"center_rel": (0.08, 0.2), "length": 550, "angle_deg": 336, "thickness": 4}
LINE3 = {"center_rel": (0.51, 0.29), "length": 170, "angle_deg": 200, "thickness": 4}
LINE4 = {"center_rel": (0.635, 0.285), "length": 120, "angle_deg": 318, "thickness": 4}
LINE5 = {"center_rel": (0.59, 0.1), "length": 200, "angle_deg": 214, "thickness": 4}
# Optional absolute (x1, y1, x2, y2) overrides for each line.
LINE1_ABS = LINE2_ABS = LINE3_ABS = LINE4_ABS = LINE5_ABS = None

DEADZONE_SEC = 30.0  # how long an ID may go missing before its session is closed

model = YOLO("yolov8n.pt")
VIDEO_PATH = "videos/my_recording.mp4"  # replace with the path to your renamed video
cap = cv2.VideoCapture(VIDEO_PATH)

tracker_path = "./botsort_local.yaml"
with open(tracker_path, "r") as f:
    tracker_cfg = yaml.safe_load(f)
print(f"Tracker config loaded from {tracker_path}")
print(f"track_buffer = {tracker_cfg.get('track_buffer')} frames")

# Skip ahead to when the line starts forming.
start_time_sec = 1700
fps = cap.get(cv2.CAP_PROP_FPS) or 30
start_frame = int(start_time_sec * fps)
cap.set(cv2.CAP_PROP_POS_FRAMES, start_frame)
print(f"Jumping to {start_time_sec}s (frame {start_frame})")

overlay_ready = False
line_pts = []

entry_frame = {}   # id -> frame when timer started
last_seen = {}     # id -> last frame seen
grace_until = {}   # id -> frame until grace expires
active = {}        # id -> currently timed

def line_endpoints_from_center(center, length, angle_deg):
    cx, cy = center
    rad = math.radians(angle_deg)
    dx = int(length * math.cos(rad))
    dy = int(length * math.sin(rad))
    return (cx - dx, cy - dy), (cx + dx, cy + dy)

# CSV setup: id,start_sec,end_sec,duration_sec
csv_path = Path("id_sessions.csv")
new_file = not csv_path.exists()
csv_file = csv_path.open("a", newline="")
csv_writer = csv.writer(csv_file)
if new_file:
    csv_writer.writerow(["id", "start_sec", "end_sec", "duration_sec"])

CONF = 0.10
NMS_IOU = 0.70
IMGSZ = 1280
MAX_DET = 500

frame_idx = start_frame

try:
    while True:
        ret, frame = cap.read()
        if not ret:
            break

        # Build the overlay endpoints once, scaled to the actual frame size.
        if not overlay_ready:
            h, w = frame.shape[:2]
            def build(ldef, abs_override):
                if abs_override is None:
                    c = (int(ldef["center_rel"][0] * w), int(ldef["center_rel"][1] * h))
                    return line_endpoints_from_center(c, ldef["length"], ldef["angle_deg"])
                else:
                    x1, y1, x2, y2 = abs_override
                    return (x1, y1), (x2, y2)
            line_pts = [
                build(LINE1, LINE1_ABS),
                build(LINE2, LINE2_ABS),
                build(LINE3, LINE3_ABS),
                build(LINE4, LINE4_ABS),
                build(LINE5, LINE5_ABS),
            ]
            overlay_ready = True

        # Detect and track people (class 0) with the BoT-SORT tracker.
        results = model.track(
            frame,
            persist=True,
            classes=[0],
            tracker=tracker_path,
            conf=CONF,
            iou=NMS_IOU,
            imgsz=IMGSZ,
            max_det=MAX_DET,
            agnostic_nms=True,
        )
        out = results[0].plot()

        for idx, (p1, p2) in enumerate(line_pts):
            th = [LINE1, LINE2, LINE3, LINE4, LINE5][idx]["thickness"]
            cv2.line(out, p1, p2, (0, 255, 0), th)

        boxes = results[0].boxes
        ids = boxes.id.int().cpu().tolist() if boxes.id is not None else []
        seen_this_frame = set()

        for i, box in enumerate(boxes):
            x1, y1, x2, y2 = box.xyxy[0].int().tolist()
            cx = (x1 + x2) // 2
            cy = (y1 + y2) // 2
            pid = ids[i] if i < len(ids) else None
            if pid is None:
                continue

            seen_this_frame.add(pid)

            # Start the stopwatch the first time an ID appears.
            if not active.get(pid, False):
                active[pid] = True
                entry_frame[pid] = frame_idx
            last_seen[pid] = frame_idx
            grace_until.pop(pid, None)

            elapsed = (frame_idx - entry_frame[pid]) / float(fps)
            label = f"ID {pid} | {elapsed:.1f}s"
            (tw, th), baseline = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, 0.7, 2)
            tx, ty = x1, max(0, y1 - 15)
            cv2.rectangle(out, (tx, ty - th - baseline), (tx + tw, ty + baseline), (0, 0, 0), -1)
            cv2.putText(out, label, (tx, ty), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 255), 2)
            cv2.circle(out, (cx, cy), 6, (0, 255, 0), -1)

        # Handle missing IDs -> start/expire grace; write CSV only after >30s missing.
        for pid in list(active.keys()):
            if not active[pid]:
                continue
            if pid not in seen_this_frame and pid not in grace_until:
                last_f = last_seen.get(pid, frame_idx)
                grace_until[pid] = last_f + int(DEADZONE_SEC * fps)
            if pid in grace_until and frame_idx > grace_until[pid]:
                start_f = entry_frame.get(pid)
                end_f = last_seen.get(pid)
                if start_f is not None and end_f is not None and end_f >= start_f:
                    start_sec = start_f / float(fps)
                    end_sec = end_f / float(fps)
                    dur_sec = end_sec - start_sec
                    csv_writer.writerow([pid, f"{start_sec:.3f}", f"{end_sec:.3f}", f"{dur_sec:.3f}"])
                    csv_file.flush()
                    print(f"[CSV] ID {pid} session written: {dur_sec:.3f}s")
                active[pid] = False
                entry_frame.pop(pid, None)
                last_seen.pop(pid, None)
                grace_until.pop(pid, None)

        cv2.imshow("Canteen Tracking", out)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

        frame_idx += 1

finally:
    # End-of-video: only write sessions that have already exceeded grace.
    for pid in list(active.keys()):
        if not active[pid]:
            continue
        start_f = entry_frame.get(pid)
        end_f = last_seen.get(pid)
        gd_f = grace_until.get(pid, (end_f if end_f is not None else frame_idx) + int(DEADZONE_SEC * fps))
        if end_f is not None and frame_idx > gd_f:
            start_sec = start_f / float(fps)
            end_sec = end_f / float(fps)
            dur_sec = end_sec - start_sec
            csv_writer.writerow([pid, f"{start_sec:.3f}", f"{end_sec:.3f}", f"{dur_sec:.3f}"])
            print(f"[CSV-END] ID {pid} session written: {dur_sec:.3f}s")
    try:
        csv_file.flush()
        csv_file.close()
    except Exception:
        pass
    cap.release()
    cv2.destroyAllWindows()
    print(f"CSV saved to: {csv_path.resolve()}")

Running the Program on Your Laptop

Screenshot 2025-08-22 at 2.00.16 PM.png

To run the code, simply write 'python3 file-name', where file-name is the name of your file (in my case, main.py). The analysis you see should look similar to the image attached.

Main aspects to note:

  1. ID Number - This is the ID tag that the model assigns to each person. It is helpful after recording, because the model uses it to measure how long each person takes to complete the line.
  2. Time – This works like a stopwatch. It starts when someone enters the green zone. If the stopwatch seems to move more slowly than real life, that's okay: the computer slows down because the model is very large, so the time shown while running may look delayed. The final times recorded are still correct, because they are computed from frame numbers and the video's frame rate, not the wall clock.
  3. 0.XY - This value, where X and Y are digits, is the model's confidence that the detection is a person, on a scale from 0 to 1.

Please note that this script will also save the data into a CSV file, which we will use in the post-data collection step.
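
Before moving on to the analytics, a quick way to sanity-check the CSV (a small sketch of my own, assuming the id_sessions.csv produced above sits in the same folder) is to load it with pandas:

# Peek at the recorded sessions and their overall average wait time.
import pandas as pd

df = pd.read_csv("id_sessions.csv")
print(df.head())
print(f"{len(df)} sessions, mean wait {df['duration_sec'].mean():.1f}s")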

Post Data Collection and Analytics

Screenshot 2025-08-22 at 2.16.37 PM.png

Use the code below, which takes the CSV file we generated and plots its values as a graph. It averages values over 3-minute intervals, which I found to be the best value after fine-tuning. The graph shows the average wait time of the line every 3 minutes.


import pandas as pd
import matplotlib.pyplot as plt
from datetime import datetime, timedelta

csv_file = "id_sessions.csv"
feed_start = datetime.strptime("2024-01-01 12:38:00", "%Y-%m-%d %H:%M:%S")  # wall-clock time of second 0 in the video
bin_minutes = 3

df = pd.read_csv(csv_file, names=["id", "start_sec", "end_sec", "duration_sec"], skiprows=1)

# Convert each session's start offset (seconds into the video) to a time of day.
df["start_time"] = df["start_sec"].astype(float).apply(lambda s: feed_start + timedelta(seconds=s))

# Group sessions into 3-minute bins.
df["time_bin"] = df["start_time"].dt.floor(f"{bin_minutes}min")

# Average the wait time within each bin.
avg = df.groupby("time_bin")["duration_sec"].mean().reset_index()

plt.figure(figsize=(10, 5))
plt.plot(avg["time_bin"], avg["duration_sec"], marker="o")
plt.title(f"Average Queue Time per {bin_minutes}-minute interval")
plt.xlabel("Time of Day")
plt.ylabel("Average Duration (seconds)")
plt.grid(True)
plt.xticks(rotation=45)
plt.tight_layout()
plt.show()
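
Since the goal is to send this graph to the food provider, you may want an image file rather than a pop-up window. One option (my addition, not part of the original script) is to save the figure before showing it:

plt.savefig("queue_times.png", dpi=150)  # call before plt.show(); the filename is just an example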


Complete Analysis of Lunch Line

Screenshot 2025-08-23 at 3.25.33 PM.png
Screenshot 2025-08-23 at 3.26.57 PM.png
Screenshot 2025-08-23 at 3.27.02 PM.png

If you would like to check the post-processing and analysis of the data received from the AI model, feel free to check out the attached PDF. It is a 3-page write-up covering how the AI handles the camera's point of view, analysis of the graphs, explanations, and more.

Completed!!

Screenshot 2025-08-20 at 8.00.02 AM.png

Congrats, you have now finished building Smart Line AI. I hope this project has been extremely rewarding. Feel free to share your version of the project in a comment :)

Next Steps for Smart Line AI

The project has been set up for 5 days, and the plan is to collect lots of video (10 days' worth) to feed into the training and fine-tuning of the model. With more data, I can improve the accuracy of the individual-tracking aspect of the project. Once I have enough data, I will also expand the project to other campuses, since it has already given my school valuable insights for reducing lunch-line wait times.