DIY Augmented Reality Flash Cards Explaining Chakra Natures in Naruto

by engineerkid1 in Workshop > Science

An OpenCV augmented-reality demo that explains how chakra natures mix

In Naruto, shinobi combine elemental chakra natures (Fire, Wind, Lightning, Earth, Water) to create advanced releases (Ice, Lava, Storm, etc.). This project brings that to life with augmented reality: point your camera at printed ArUco markers, and the corresponding chakra image appears. Move two markers close together and—boom—your screen shows the fused release.

What you’ll build

  1. Live camera feed (or a demo video) where each base chakra is overlaid on its marker.
  2. When two base markers get close, the overlay switches to the correct combined release (e.g., Water + Wind → Ice).

Difficulty: Beginner–Intermediate

Build time: ~60–90 minutes (including printing markers)

Platforms: Windows/macOS/Linux (Python)

Supplies

cam.PNG
glue.PNG

Bill of Materials (Hardware & Assets):

A computer with a webcam (or a video file for testing)

Printer + paper (for printing the markers)

Scissors/tape/glue (to mount markers on cards)

Image assets (PNG/JPG) for:

  1. Base elements: fire.jpg, wind.jpg, light.jpg (Lightning), earth.jpg, water.jpg
  2. Combined elements: ice.jpg, wood.jpg, lava.jpg, storm.jpg, boil.jpg, explo.jpg, scorch.jpg, magnet.jpg

Software Prerequisites:

Install Python 3.9+ and the following packages:

pip install opencv-contrib-python numpy

That's all you will need to build this project.
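
Before moving on, it is worth confirming that the install worked and checking which ArUco API your OpenCV build exposes (the class-based API shipped with OpenCV 4.7). A quick sanity check like the sketch below (my suggestion, not part of the project code) will tell you:

import cv2

print("OpenCV version:", cv2.__version__)
aruco = getattr(cv2, "aruco", None)
print("ArUco module available:", aruco is not None)  # requires opencv-contrib-python
if aruco is not None:
    # OpenCV 4.7+ exposes the ArucoDetector class used later in this project.
    print("ArucoDetector available:", hasattr(aruco, "ArucoDetector"))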

How It Works

explain 1.PNG
explain 2.PNG

OpenCV’s ArUco detects square fiducial markers and returns their corner points and IDs.

For each detected marker ID, we compute its center and overlay the corresponding chakra image using a homography (perspective warp) so it “sticks” to the marker.

If two markers come within a distance threshold, we switch to a fusion overlay.
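
As a minimal sketch of the detection step (assuming OpenCV 4.7+ and a test photo named test_frame.jpg, which is just a placeholder), the pipeline boils down to this; the full project code later builds on it:

import cv2
import cv2.aruco as aruco

# Minimal detection sketch: find markers in one image and print their IDs and centers.
aruco_dict = aruco.getPredefinedDictionary(aruco.DICT_ARUCO_ORIGINAL)
detector = aruco.ArucoDetector(aruco_dict, aruco.DetectorParameters())

frame = cv2.imread("test_frame.jpg")  # placeholder: any photo containing a printed marker
corners, ids, rejected = detector.detectMarkers(frame)

if ids is not None:
    for marker_corners, marker_id in zip(corners, ids):
        # marker_corners has shape (1, 4, 2): the four corner points of the square.
        center = marker_corners[0].mean(axis=0)
        print(f"Marker {int(marker_id[0])} center at ({center[0]:.0f}, {center[1]:.0f})")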

Print Your Markers

aruco.PNG

We’ll use the DICT_ARUCO_ORIGINAL dictionary and the following IDs:

Base chakra → Marker ID

  1. Fire → 0
  2. Wind → 1
  3. Lightning → 2 (light.jpg)
  4. Earth → 3
  5. Water → 4

Generate and save marker PNGs (run once):

import cv2
import cv2.aruco as aruco

aruco_dict = aruco.getPredefinedDictionary(aruco.DICT_ARUCO_ORIGINAL)

def save_marker(marker_id, size=600):
    img = aruco.generateImageMarker(aruco_dict, marker_id, size)
    cv2.imwrite(f"marker_{marker_id}.png", img)

for mid in [0, 1, 2, 3, 4]:
    save_marker(mid)

Print each marker_<ID>.png at ~5–7 cm square and mount on stiff cards.

Alternatively, you can use an online ArUco marker generator: pick the Original ArUco dictionary, generate IDs 0–4, save the images, and print them.

https://chev.me/arucogen/

Prepare the Project Folder

boil.jpg
dust.jpg
earth.jpg
explo.jpg
fire.jpg
ice.jpg
lava.jpg
light.jpg
magnet.jpg
scorch.jpg
storm.jpg
water.jpg
wind.jpg
wood.jpg
blank.jpg

Create a folder like this:

chakra_ar/
├── main.py
├── vid.mp4 # optional test clip
├── fire.jpg
├── wind.jpg
├── light.jpg # lightning
├── earth.jpg
├── water.jpg
├── blank.jpg
├── ice.jpg
├── wood.jpg
├── lava.jpg
├── storm.jpg
├── boil.jpg
├── explo.jpg
├── scorch.jpg
└── magnet.jpg

File names must match exactly; if yours differ, update the names and paths in the code.
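
If you would like to catch a missing or misnamed file before the AR loop starts, a small pre-flight check like this (my addition, not part of the original code) can go at the top of main.py:

import os
import sys

# Optional pre-flight check: make sure every overlay image is in the project folder.
required = ["fire.jpg", "wind.jpg", "light.jpg", "earth.jpg", "water.jpg", "blank.jpg",
            "ice.jpg", "wood.jpg", "lava.jpg", "storm.jpg", "boil.jpg", "explo.jpg",
            "scorch.jpg", "magnet.jpg"]
missing = [name for name in required if not os.path.exists(name)]
if missing:
    sys.exit(f"Missing image files: {', '.join(missing)}")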

Use the Base Code

code.PNG

Copy the code below into your editor and save it as main.py in the project folder.

import cv2
import cv2.aruco as aruco
import numpy as np
import math

Offset = 50           # how far the overlay extends past the marker edges (pixels)
pid = 99              # ID of the previously seen marker (99 = none yet)
pcenter = (99, 0, 0)  # (id, x, y) of the previously seen marker's center
gdist = False         # True when the last two distinct markers were close together
comb = False          # True when a fusion overlay should be shown

# Use a demo video OR the webcam (uncomment one)
cap = cv2.VideoCapture('vid.mp4')
# cap = cv2.VideoCapture(0)

# ArUco setup (OpenCV 4.7+ class-based API, which is what pip installs today)
aruco_dict = aruco.getPredefinedDictionary(aruco.DICT_ARUCO_ORIGINAL)
parameters = aruco.DetectorParameters()
detector = aruco.ArucoDetector(aruco_dict, parameters)

fire = cv2.imread("fire.jpg")
wind = cv2.imread("wind.jpg")
light = cv2.imread("light.jpg")  # lightning
earth = cv2.imread("earth.jpg")
water = cv2.imread("water.jpg")

blank = cv2.imread("blank.jpg")
ice = cv2.imread("ice.jpg")
explo = cv2.imread("explo.jpg")
wood = cv2.imread("wood.jpg")
lava = cv2.imread("lava.jpg")
storm = cv2.imread("storm.jpg")
boil = cv2.imread("boil.jpg")
scorch = cv2.imread("scorch.jpg")
magnet = cv2.imread("magnet.jpg")

basic_elements = [fire, wind, light, earth, water]
new_elements = [ice, wood, lava, storm, boil, explo, scorch, magnet]

def centerPoint(corners, id):
    # Average the four corner points of the first detected marker to get its center.
    x_sum = corners[0][0][0][0] + corners[0][0][1][0] + corners[0][0][2][0] + corners[0][0][3][0]
    y_sum = corners[0][0][0][1] + corners[0][0][1][1] + corners[0][0][2][1] + corners[0][0][3][1]
    cent_x = int(x_sum * .25)
    cent_y = int(y_sum * .25)
    return (id, cent_x, cent_y)

def augmentFrame(corners, id, frame, ovr):
    # Warp the overlay image onto the marker (expanded by Offset) using a homography.
    tl = (corners[0][0][0] - Offset, corners[0][0][1] - Offset)
    tr = (corners[0][1][0] + Offset, corners[0][1][1] - Offset)
    br = (corners[0][2][0] + Offset, corners[0][2][1] + Offset)
    bl = (corners[0][3][0] - Offset, corners[0][3][1] + Offset)
    h, w, c = ovr.shape
    pts_dst = np.array([tl, tr, br, bl])
    pts_src = np.array([[0, 0], [w, 0], [w, h], [0, h]])
    matrix, status = cv2.findHomography(pts_src, pts_dst)
    imgout = cv2.warpPerspective(ovr, matrix, (frame.shape[1], frame.shape[0]))
    cv2.fillConvexPoly(frame, pts_dst.astype(int), 0, 16)
    imgout = frame + imgout
    return imgout

def detectAruco(img):
    global pid, pcenter, gdist
    corners, ids, rejected = detector.detectMarkers(img)
    aruco.drawDetectedMarkers(img, corners)
    if ids is not None:
        if ids[0][0] != pid:
            # A different marker than last time: measure the distance between the two centers.
            center = centerPoint(corners, ids[0][0])
            dist = math.sqrt((center[1] - pcenter[1])**2 + (center[2] - pcenter[2])**2)
            gdist = (dist < 250)
            pcenter = center
            pid = ids[0][0]
    return [corners, ids]

while True:
    ret, frame = cap.read()
    if not ret:
        break

    aruco_found = detectAruco(frame)
    # Fusion mode: the last two distinct markers were close AND two markers are in view.
    if gdist == True and len(aruco_found[0]) == 2:
        comb = True
    else:
        comb = False

    if len(aruco_found[0]) != 0:
        for corners, id in zip(aruco_found[0], aruco_found[1]):
            if comb == False:
                frame = augmentFrame(corners, id, frame, basic_elements[int(id[0])])
            if comb == True:
                if (id[0] == 4 and pid == 1) or (id[0] == 1 and pid == 4):
                    frame = augmentFrame(corners, pid, frame, new_elements[0])  # Ice
                if (id[0] == 3 and pid == 4) or (id[0] == 4 and pid == 3):
                    frame = augmentFrame(corners, pid, frame, new_elements[1])  # Wood
                if (id[0] == 0 and pid == 3) or (id[0] == 3 and pid == 0):
                    frame = augmentFrame(corners, pid, frame, new_elements[2])  # Lava
                if (id[0] == 2 and pid == 4) or (id[0] == 4 and pid == 2):
                    frame = augmentFrame(corners, pid, frame, new_elements[3])  # Storm
                if (id[0] == 4 and pid == 0) or (id[0] == 0 and pid == 4):
                    frame = augmentFrame(corners, pid, frame, new_elements[4])  # Boil
                if (id[0] == 3 and pid == 2) or (id[0] == 2 and pid == 3):
                    frame = augmentFrame(corners, pid, frame, new_elements[5])  # Explosion
                if (id[0] == 0 and pid == 1) or (id[0] == 1 and pid == 0):
                    frame = augmentFrame(corners, pid, frame, new_elements[6])  # Scorch
                if (id[0] == 1 and pid == 3) or (id[0] == 3 and pid == 1):
                    frame = augmentFrame(corners, pid, frame, new_elements[7])  # Magnet

    cv2.imshow('Frame', frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Run It

demo.PNG

From the project folder:

python main.py

Press 'q' to quit.

Try one marker at a time (you should see Fire/Wind/etc.).

Then bring two different base markers close together (within ~250 px in the camera view) to see the fusion overlay.

Element Mapping (Canon-Friendly)

Base (ID → overlay)

  1. 0: Fire → fire.jpg
  2. 1: Wind → wind.jpg
  3. 2: Lightning → light.jpg
  4. 3: Earth → earth.jpg
  5. 4: Water → water.jpg

Fusions (pair → overlay)

  1. Water + Wind → ice.jpg (Ice Release)
  2. Earth + Water → wood.jpg (Wood Release)
  3. Fire + Earth → lava.jpg (Lava Release)
  4. Lightning + Water → storm.jpg (Storm Release)
  5. Water + Fire → boil.jpg (Boil Release)
  6. Earth + Lightning → explo.jpg (Explosion Release)
  7. Fire + Wind → scorch.jpg (Scorch Release)
  8. Wind + Earth → magnet.jpg (Magnet Release)
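
If you later want to add or change pairings, the same table can be expressed as a lookup keyed by the unordered pair of marker IDs. This is only a sketch of an alternative to the if-chain in main.py, not how the project code is written:

# Alternative fusion lookup, keyed by the unordered pair of marker IDs
# (0 = Fire, 1 = Wind, 2 = Lightning, 3 = Earth, 4 = Water).
FUSIONS = {
    frozenset({4, 1}): "ice.jpg",     # Water + Wind       -> Ice
    frozenset({3, 4}): "wood.jpg",    # Earth + Water      -> Wood
    frozenset({0, 3}): "lava.jpg",    # Fire + Earth       -> Lava
    frozenset({2, 4}): "storm.jpg",   # Lightning + Water  -> Storm
    frozenset({4, 0}): "boil.jpg",    # Water + Fire       -> Boil
    frozenset({3, 2}): "explo.jpg",   # Earth + Lightning  -> Explosion
    frozenset({0, 1}): "scorch.jpg",  # Fire + Wind        -> Scorch
    frozenset({1, 3}): "magnet.jpg",  # Wind + Earth       -> Magnet
}

print(FUSIONS[frozenset({3, 0})])  # lava.jpg, regardless of which marker was seen first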

Troubleshooting

  1. Only one element shows even with two markers present: make sure both markers are in the frame and not occluded, and try reducing the distance threshold from 250 to ~180–220.
  2. Wrong fusion shows up: check that you are holding the intended pair (IDs above) and that the Magnet pairing (Wind + Earth, IDs 1 and 3) is correct in the code.
  3. No markers detected: print quality and contrast matter; make sure the black squares are truly dark and the borders are crisp. Increase room lighting and avoid motion blur.
  4. Overlay misaligned: adjust Offset, or make sure the markers are flat and not warped.
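
If you end up tuning the proximity threshold a lot, one option (my suggestion, not in the original code) is to pull the hard-coded 250 into a named constant near the top of main.py so there is only one place to change:

# Suggested tweak: name the proximity threshold so it is easy to tune.
DIST_THRESHOLD = 200  # pixels; lower values require the markers to be closer together

# ...and inside detectAruco(), compare against the constant instead of 250:
#     gdist = (dist < DIST_THRESHOLD)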

Where Else Can This Technique Be Used?

How can you use this.png

The Naruto chakra fusion here is just one fun example of a much bigger Augmented Reality + ArUco marker concept.

Since the code simply detects marker IDs and changes what’s displayed based on combinations, you can swap out the “chakra” theme for any subject or visual data.

Here are some real-world applications:

  1. Teaching Chemistry: replace "Fire" and "Water" with chemical symbols (H₂, O₂, Na, Cl, etc.); when two markers come close, show the resulting compound (e.g., H₂ + O → an H₂O molecule model). Great for high school science to make learning interactive.
  2. Language Learning: one marker shows a word and another shows an image; when combined, the screen displays the full sentence or translation.
  3. Math Concepts: show numbers on markers, and when two are brought together, overlay their sum, product, or equation. Useful for young learners to visualize arithmetic.
  4. Museum Exhibits & Interactive Art: visitors can place historical-artifact markers together to trigger animations, timelines, or hidden stories.
  5. Board Games & Card Games: imagine a collectible card game where AR animations appear when you bring two cards together.
  6. Training Simulations: engineering or medical training can use markers to combine tools, parts, or procedures visually.

Why it works:

  1. The pair detection logic can be mapped to any domain where combining two known entities yields a meaningful result.
  2. The overlay can be images, videos, 3D models, or even data visualizations.

With a little creativity, this project can turn into a teaching aid, interactive museum piece, or a full-fledged AR game.
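
As one concrete (and purely hypothetical) example of re-theming, a chemistry version could reuse the exact pair-mapping idea; the IDs and file names below are placeholders, not assets from this project:

# Hypothetical chemistry re-theme: same pair-detection idea, different overlays.
ELEMENTS = {0: "hydrogen.png", 1: "oxygen.png", 2: "sodium.png", 3: "chlorine.png"}
COMPOUNDS = {
    frozenset({0, 1}): "water_molecule.png",  # H + O   -> H2O model
    frozenset({2, 3}): "salt_crystal.png",    # Na + Cl -> NaCl model
}

def pick_overlay(id_a, id_b):
    # Show the compound if the pair is known; otherwise fall back to the first element.
    return COMPOUNDS.get(frozenset({id_a, id_b}), ELEMENTS[id_a])

print(pick_overlay(0, 1))  # water_molecule.png
print(pick_overlay(0, 2))  # hydrogen.png (no reaction defined for this pair)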

Thank you for supporting my work. I hope this helped you.