Facial Recognition Mirror With Secret Compartment

by Daniel Quintana


I've always been intrigued by the ever-creative secret compartments used in stories, movies, and the like. So, when I saw the Secret Compartment Contest, I decided to experiment with the idea myself and make an ordinary-looking mirror that opens a secret side drawer when the right person looks into it.

By utilizing a Raspberry Pi, some Python programming, and skills from 8th-grade shop class, we can create this spiffy device to hide objects in plain sight that only the correct user will have access to.

I would like to give special thanks to these people and platforms, from whom I got my information and resources:

  • TeCoEd - Youtube Channel
  • Emmet from PiMyLifeUp
  • MJRoBot on Hackster.io (profile)
  • Gaven MacDonald - Youtube Channel
  • Tucker Shannon on Thingiverse (profile)

Supplies

Frame Supplies:

  • Wood Plank (the dimensions of my board were 42" by 7.5" by 5/16")
  • Pencil Picture Frame (with glass)
  • Spray Paint
  • One Way Reflective Adhesive
  • Glass Cleaner & Rag
  • MDF Wood

Facial Recognition Supplies:

  • Raspberry Pi (with SD card)
  • Raspberry Pi Camera
  • Stepper Motor
  • Stepper Motor Driver Board
  • Jumper Wires
  • Push Button (optional)

Tools:

  • Table Saw
  • Jig Saw
  • Sandpaper
  • Wood Glue
  • Tape Measure
  • Scissors
  • Spray Bottle
  • 3D Printer
  • Super Glue

Cuts for the Box Frame


I bought a picture frame from the second-hand store. Just a warning: make sure the planks that make up the frame are at least 1 1/2" wide, so you can glue other boards of wood onto it with enough space to work with. Also, make sure the glass in the frame is completely clear. I bought a frosted one by accident and then had to buy another frame just for the clear glass. Because my frame was used, the measurements for your box frame may vary.

  • Lay the frame in portrait orientation. Measure the long side (LS) of the glass opening and add an additional ½” at both the top and bottom (i.e., add 1” to the long-side measurement of the glass opening). Record this and label it LSM (Long Side Measurement).
  • Similarly, measure the top side of the opening and add an additional 1”. Record this and label it SSM (Short Side Measurement).
  • Get your board, and with a table saw cut two LSM x 2” pieces and two SSM x 2” pieces.
  • Take one of the LSM cuts and measure a 2”x1” rectangle that is 1” from the bottom and ½” from the left and right sides (as shown in picture 3).
  • Use a jigsaw to cut out the hole. Then use the sandpaper to sand out the edges.

Cuts for the Drawer


Now we will start to build the drawer (a.k.a. the Secret Compartment).

  • Cut out two 4” x 1” sides, a 3 ⅜” x 1” (back edge), a 4 ¼” x 1 ¼” (front edge), and a 4” x 3 ⅜” (platform).
  • Glue the first 4” x 1” side along the 4” side of the platform. I put a couple of folded papers under the platform side so it was slightly lifted; this way it wouldn’t drag on the hole that I cut out in the LSM plank. Set to dry for 30 mins.
  • Similarly, glue the 3 ⅜” x 1” along the 3 ⅜” edge of the platform. Set to dry for 30 mins. Then glue the second 4” x 1” side on the opposite side of the first. Set to dry for 30 mins.
  • Set aside the front edge for now. It will be the last thing glued onto the drawer.
  • When finished, check to see if it fits into the hole you jigsawed into the LSM plank. If not, sand the hole until the drawer easily slides in and out, and there is no drag.

Putting the Frame Together


With all the parts complete we can begin to assemble the entirety of the frame.

  • Glue the first LSM plank centered against the glass hole so the extra ½” extends past each end, making sure it sits ½” away from the hole (as shown in picture 1). Set to dry for 30 mins.
  • Glue the first SSM plank with the edge touching the inside of the LSM plank that was just glued. (Use a ruler to make sure it is glued on straight). Set to dry for 30 mins.
  • Take the other LSM plank and glue it like the first one. Make sure it is ½” away from the hole and that the SSM that was just attached sits on the inside of the plank. Set to dry for 30 mins.
  • Glue the last SSM on the top edge. Since you have two LSM’s on both sides, depending on how straight you attached them, you may need to sand the sides of the SSM down to make sure that it fits (my cutting is sometimes off). Set to dry for 30 mins.
  • Measure the small space between the bottom of the drawer and the frame. Cut a piece of MDF wood to this measurement by 4". You want this piece to sit close to the drawer without touching it; it's meant to support the drawer with minimal friction.
  • When all done, I spray painted the frame just so all the pieces matched.

For the Mirror


The one-way film adhesive that I bought off Amazon was around $10. There are better quality ones that are a little more expensive if you're interested. The one I used is reflective, but you can tell it is not a regular mirror that you would see in a home. The more expensive ones will get you that look.

  • Clean the glass with glass cleaner on both sides.
  • Unroll the one-way adhesive and lay the glass on top. Cut out the adhesive so there is at least ½” excess on each side of the glass.
  • Set the glass aside and wet one side of it with water. Then peel the plastic coat off the one-way adhesive and spray the newly exposed side with water.
  • Place the wet side of the glass on the wet side of the adhesive. Let sit for 30 mins.
  • Flip over and use your thumb to flatten any bubbles between the adhesive and glass. Then cut the excess adhesive from around the edges.

Install Raspbian Stretch

This being my first time delving into the Raspberry Pi environment I began looking for instructions on how to get the OS installed. I eventually found a straightforward tutorial on Youtube by TeCoEd that went through the process of getting Stretch installed on the SD card (with a rather lovely introduction as well). Here is the link to that tutorial: https://www.youtube.com/watch?v=UVjauheEcSA

In essence, all you need to do is:

  • Format the SD card by selecting your Drive >> Drive Tools >> Format.
  • Download the ZIP file for Raspbian Stretch (found here: https://www.raspberrypi.org/downloads/raspberry-p...)
  • Flash the OS image to the SD Card. TeCoEd used Win32 Disk Imager to complete this. I ended up installing balenaEtcher, which seemed a little more straightforward. (Here is the download link for balenaEtcher: https://www.balena.io/etcher/)
  • Once in balenaEtcher select “Flash From File” and choose the previously downloaded ZIP file. Next, select the desired SD card (if not selected automatically). Then hit the juicy flash button and wait for the magic to happen.

Once installed on the SD card you can insert it into the Raspberry Pi and go through the generic Pi setup process.

Install OpenCV

Now on to the more Facial-Recognition-Oriented parts. In order to recognize faces, we must download the OpenCV library which contains a vast number of tools to work with computer vision.

Installing OpenCV was the most arduous part of the software aspect for me. But after following numerous instructions, I finally found a tutorial by Emmet from PiMyLifeUp that did the trick, found here: https://pimylifeup.com/raspberry-pi-opencv/

I won’t walk through these steps since you will be better suited following them from the link (with the given explanations and the ability to copy and paste directly from the site with more ease).
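
Once the install completes, one quick sanity check I'd suggest (my own addition, not part of Emmet's tutorial) is confirming that OpenCV imports and that the contrib "face" module came along with it, since the training and recognition scripts later on depend on cv2.face:

import cv2

print(cv2.__version__)        # the installed OpenCV version
print(hasattr(cv2, 'face'))   # should print True; the trainer needs cv2.face

If the second line prints False, the cv2.face.LBPHFaceRecognizer_create() call used later will fail, and you'll want to revisit the install.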

Enable/Test the Camera


After getting OpenCV installed the rest of my journey was completed using a tutorial by MJRoBot on Hackster.io found here: https://www.hackster.io/mjrobot/real-time-face-re...

Before we get started I would like to remind you that I am not the original creator of these scripts but did end up modifying parts of them.

To start out we should test the camera to make sure we can capture video on the screen. I spent about an hour trying to run the script provided in Step 3 of MJRoBot's tutorial. As life would have it, we actually need to enable the camera on the Raspberry Pi (turns out it might be a good idea to read the provided instructions...mmm nah). So after connecting the camera to its correct port, follow these steps:

  • Open a command terminal and type sudo raspi-config
  • Select “Enable Camera” (this might be found under a devices option)
  • Hit “Enter”
  • Go to “Finish” And you will be prompted to reboot

Then follow these steps:

  • Go to the Raspberry’s Main Menu (Top left)
  • Preferences
  • Raspberry Pi Configuration
  • Interfaces
  • Then in Camera, select “Enabled”
  • Then “OK”

Now you should be able to successfully run this script from MJRoBot’s tutorial to test the camera out (remember that all this code, plus a more in-depth description, is found in the link above to MJRoBot's tutorial):

import numpy as np
import cv2
cap = cv2.VideoCapture(0)
cap.set(3,640) # set Width
cap.set(4,480) # set Height
while(True):
    ret, frame = cap.read()
    frame = cv2.flip(frame, -1) # Flip camera vertically
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    
    cv2.imshow('frame', frame)
    cv2.imshow('gray', gray)
    
    k = cv2.waitKey(30) & 0xff
    if k == 27: # press 'ESC' to quit
        break
cap.release()
cv2.destroyAllWindows()

The previous code should display two windows, one in color and the other in greyscale. If you made it this far I think you deserve a nice sandwich.

Collecting Data and Training Data


In the provided tutorial the author goes into far more depth about the processes behind the code soon to be provided, but since these are instructions on how this mirror was made, I won’t go into depth on the history or the complicated mechanics. I do, however, recommend you take a month of your life reading about these two things, as they can serve your mind well.

There are just three more scripts to run before we can get this all working. The first is for collecting data, the second is for training it, and the last is actually for recognition. Collecting data requires actual pictures of the face to be taken and stored in a specific place for training. The creator of this code made it very simple to get all this done, so I recommend following these instructions to avoid a headache.

  • Open a command line and make a new directory, naming it something fun (I called mine FaceRec)
mkdir FaceRec 
  • Now, change directory to FaceRec and make a subdirectory being sure to name it dataset.
cd FaceRec 
mkdir dataset 
  • While we’re at it, we can also make the other subdirectory named trainer.
mkdir trainer 
  • Now you can run and follow the directions of the first script, which will capture pictures of a user. (Just a heads up: be sure to enter the user id as 1, 2, 3, etc.)
import cv2
import os
cam = cv2.VideoCapture(0)
cam.set(3, 640) # set video width
cam.set(4, 480) # set video height
face_detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
# For each person, enter one numeric face id
face_id = input('\n enter user id and press  ==>  ')
print("\n [INFO] Initializing face capture. Look at the camera and wait ...")
# Initialize individual sampling face count
count = 0
while(True):
    ret, img = cam.read()
    img = cv2.flip(img, -1) # flip video image vertically
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, 1.3, 5)
    for (x,y,w,h) in faces:
        cv2.rectangle(img, (x,y), (x+w,y+h), (255,0,0), 2)     
        count += 1
        # Save the captured image into the datasets folder
        cv2.imwrite("dataset/User." + str(face_id) + '.' + str(count) + ".jpg", gray[y:y+h,x:x+w])
        cv2.imshow('image', img)
    k = cv2.waitKey(100) & 0xff # Press 'ESC' for exiting video
    if k == 27:
        break
    elif count >= 30: # Take 30 face samples and stop video
         break
print("\n [INFO] Exiting Program and cleanup stuff")
cam.release()
cv2.destroyAllWindows()
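  • One more note from me (not in the original tutorial): the capture script above loads haarcascade_frontalface_default.xml from its working directory, so a copy needs to sit inside FaceRec. Assuming OpenCV's source lives at /home/pi/opencv (the same path the final script points to), you can copy it over with:
cp /home/pi/opencv/data/haarcascades/haarcascade_frontalface_default.xml .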
  • At this point be sure you have installed pillow on the Pi. If not, run the command:
pip install pillow
  • After that is completed you can run the training script (second script), which will seamlessly provide you with a .yml file that will be used in the final script.
import cv2
import numpy as np
from PIL import Image
import os
# Path for face image database
path = 'dataset'
recognizer = cv2.face.LBPHFaceRecognizer_create()
detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
# function to get the images and label data
def getImagesAndLabels(path):
    imagePaths = [os.path.join(path,f) for f in os.listdir(path)]     
    faceSamples=[]
    ids = []
    for imagePath in imagePaths:
        PIL_img = Image.open(imagePath).convert('L') # convert it to grayscale
        img_numpy = np.array(PIL_img,'uint8')
        id = int(os.path.split(imagePath)[-1].split(".")[1])
        faces = detector.detectMultiScale(img_numpy)
        for (x,y,w,h) in faces:
            faceSamples.append(img_numpy[y:y+h,x:x+w])
            ids.append(id)
    return faceSamples,ids
print ("\n [INFO] Training faces. It will take a few seconds. Wait ...")
faces,ids = getImagesAndLabels(path)
recognizer.train(faces, np.array(ids))
# Save the model into trainer/trainer.yml
recognizer.write('trainer/trainer.yml') # recognizer.save() worked on Mac, but not on Pi
# Print the number of faces trained and end program
print("\n [INFO] {0} faces trained. Exiting Program".format(len(np.unique(ids))))

What’s cool about this set of scripts is that multiple faces can be entered into the system meaning multiple individuals can access the innards of the mirror if so desired.
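
For example (a hypothetical mapping; "Alice" is made up for illustration), if you also captured a dataset with user id 2, the names list in the recognition script would become:

names = ['None', 'Daniel', 'Alice', 'None', 'None', 'None'] # index = user id entered during capture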

Down below I have the Data Capture script and Training script available for download.

Facial Recognition Time


Finally, we can run the recognizer script. More code was added to this script in order to make the motor process functional, so I'll explain those parts a little more thoroughly. I'll break it down into sections, but I'll put the whole script at the end of the step if that's what you're after.

We will start by importing all the modules we will need and then setting the GPIO mode to GPIO.BCM.

import cv2
import numpy as np
import os
import time
import RPi.GPIO as GPIO

GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)

This next list named ControlPin is an array of numbers that represents output pins that will be used for our stepper motor.

ControlPin = [14,15,18,23]

The for-loop sets these pins as outputs and then makes sure that they are turned off. I still have some code in here to let the drawer close at the push of a button, but I decided to use a timer instead.

for i in range(4):
    GPIO.setup(ControlPin[i], GPIO.OUT)
    GPIO.output(ControlPin[i], 0)

GPIO.setup(2, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

The next two variables are sequences we will use to drive the motor. I learned this information from a wonderful video by Gaven MacDonald, which I highly recommend watching as he goes into depth on not just the code but the actual motor (found here: https://www.youtube.com/watch?v=Dc16mKFA7Fo). In essence, each sequence will be iterated through using the nested for-loops in the upcoming openComp and closeComp functions. If you look closely, seq2 is just seq1 in reverse. Yup, you guessed it: one is for moving the motor forward and the other is for reverse.

seq1 = [ [1,0,0,0],        
        [1,1,0,0],
        [0,1,0,0],
        [0,1,1,0],
        [0,0,1,0],
        [0,0,1,1],
        [0,0,0,1],
        [1,0,0,1], ]
seq2 = [ [0,0,0,1],
        [0,0,1,1],
        [0,0,1,0],
        [0,1,1,0],
        [0,1,0,0],
        [1,1,0,0],
        [1,0,0,0],
        [1,0,0,1], ]

Starting with our openComp function, we create a for-loop that will iterate 1024 times. According to MacDonald’s video, 512 iterations provide one full rotation of the motor, and I found that about two rotations was a good length, though this can be adjusted for an individual’s sizing. The next for-loop comprises 8 iterations to account for the 8 arrays found in seq1 and seq2. And finally, the innermost for-loop iterates four times: once for each of the four items in these arrays, and for the 4 GPIO pins we have our motor connected to. The GPIO.output line selects a GPIO pin and turns it either on or off depending on which iteration it’s on, and the time.sleep after it provides some buffer time, lest our motor not rotate at all. After the motor rotates to move the drawer out, it sleeps for 5 seconds before moving on. This time can be adjusted here, or you can enable the commented-out code that allows the use of a push-button to advance the script rather than a timer.

def openComp():
    for i in range(1024):
        for halfstep in range(8):
            for pin in range(4):
                GPIO.output(ControlPin[pin], seq1[halfstep][pin])
            time.sleep(.001)
    '''while True:
        if GPIO.input(2) == GPIO.LOW:
            break;'''
    time.sleep(5)

The closeComp function works in a similar fashion. After the motor moves back, I proceed to set our last GPIO pins to low in order to make sure we’re not wasting any energy, and then I add three more seconds of time before moving on.

def closeComp():
    for i in range(1024):
        for halfstep in range(8):
            for pin in range(4):
                GPIO.output(ControlPin[pin], seq2[halfstep][pin])
            time.sleep(.001)

    print("Compartment Closed")
    GPIO.output(ControlPin[0], 0)
    GPIO.output(ControlPin[3], 0)
    time.sleep(3)

The bulk of the next part is used to set up the camera and begin the facial recognition. Again, MJRoBot’s instructions go into these parts more, but for now I’m just showing the parts used for the mirror.

First I changed the names list so that my name is at the index I assigned while collecting the data (in my case 1). Then I set the rest of the values to 'None' since I had no more faces in the dataset.

names = ['None', 'Daniel', 'None', 'None', 'None', 'None'] 

Our last few lines of code are implemented in the thicc for-loop. I created a variable to store the confidence as an integer (intConfidence) before the variable confidence gets turned into a string. Then I use an if-statement to check whether the confidence is greater than 30 and whether the id (which person the computer is detecting, in this case “Daniel”) is equal to my name. After this is confirmed, the function openComp is called, which (as explained before) moves the motor, kicks out after 5 seconds, and then proceeds to closeComp, which moves the motor in the opposite direction and does some cleanup before continuing with the thicc loop.

if intConfidence > 30 and id == 'Daniel':
    openComp()
    closeComp()

A bug that I found here is that sometimes after closeComp returns, the code continues but the conditional if-statement is found to be true again as though it’s reading video feed that’s still in the buffer. Although it doesn’t happen every time I’ve yet to find a way to ensure it never happens, so if anyone has any ideas just let me know in the comments.
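
One workaround I'd try (an untested sketch of my own, not from MJRoBot's tutorial) is to grab and discard a few frames right after closeComp() returns, so the next pass of the loop works on fresh video instead of whatever was buffered while the drawer was cycling:

if intConfidence > 30 and id == 'Daniel':
    openComp()
    closeComp()
    for _ in range(5):
        cam.read()  # discard frames buffered during the drawer cycle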

Here is that whole script all in one place (and just below this is the downloadable):

import cv2
import numpy as np
import os
import time
import RPi.GPIO as GPIO

GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)

ControlPin = [14,15,18,23]

for i in range(4):
    GPIO.setup(ControlPin[i], GPIO.OUT)
    GPIO.output(ControlPin[i], 0)


GPIO.setup(2, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

seq1 = [ [1,0,0,0],
        [1,1,0,0],
        [0,1,0,0],
        [0,1,1,0],
        [0,0,1,0],
        [0,0,1,1],
        [0,0,0,1],
        [1,0,0,1], ]

seq2 = [ [0,0,0,1],
        [0,0,1,1],
        [0,0,1,0],
        [0,1,1,0],
        [0,1,0,0],
        [1,1,0,0],
        [1,0,0,0],
        [1,0,0,1], ]

def openComp():
    for i in range(1024):
        for halfstep in range(8):
            for pin in range(4):
                GPIO.output(ControlPin[pin], seq1[halfstep][pin])
            time.sleep(.001)
    '''while True:
        if GPIO.input(2) == GPIO.LOW:
            break;'''
    time.sleep(5)
        
def closeComp():
    for i in range(1024):
        for halfstep in range(8):
            for pin in range(4):
                GPIO.output(ControlPin[pin], seq2[halfstep][pin])
            time.sleep(.001)
            
    print("Compartment Closed")
    GPIO.output(ControlPin[0], 0)
    GPIO.output(ControlPin[3], 0)
    time.sleep(3)



recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read('trainer/trainer.yml')
cascadePath = "/home/pi/opencv/data/haarcascades/haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascadePath)
font = cv2.FONT_HERSHEY_SIMPLEX
# initiate id counter
id = 0
# names related to ids: example ==> Marcelo: id=1,  etc
names = ['None', 'Daniel', 'None', 'None', 'None', 'None'] 
# Initialize and start realtime video capture
cam = cv2.VideoCapture(0)
cam.set(3, 640) # set video width
cam.set(4, 480) # set video height
# Define min window size to be recognized as a face
minW = 0.1*cam.get(3)
minH = 0.1*cam.get(4)
  

while True:
    ret, img =cam.read()
    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
    
    faces = faceCascade.detectMultiScale( 
        gray,
        scaleFactor = 1.2,
        minNeighbors = 5,
        minSize = (int(minW), int(minH)),
       )
    for(x,y,w,h) in faces:
        cv2.rectangle(img, (x,y), (x+w,y+h), (0,255,0), 2)
        id, confidence = recognizer.predict(gray[y:y+h,x:x+w])
        # Check if confidence is less than 100 ==> "0" is perfect match 
        if (confidence < 100):
            id = names[id]
            intConfidence = 100 - confidence
            confidence = "  {0}%".format(round(100 - confidence))
        else:
            id = "unknown"
            intConfidence = 0 # keep intConfidence defined so the check below can't crash
            confidence = "  {0}%".format(round(100 - confidence))
        
        cv2.putText(img, str(id), (x+5,y-5), font, 1, (255,255,255), 2)
        cv2.putText(img, str(confidence), (x+5,y+h-5), font, 1, (255,255,0), 1)
        if intConfidence > 30 and id == 'Daniel':
            openComp()
            closeComp()
    
    cv2.imshow('camera',img)
    
    k = cv2.waitKey(10) & 0xff # Press 'ESC' for exiting video
    if k == 27:
        break
# Do a bit of cleanup
print("\n [INFO] Exiting Program and cleanup stuff")
GPIO.cleanup()
cam.release()
cv2.destroyAllWindows()
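
A small usage note from me: since the script opens trainer/trainer.yml with a relative path, run it from inside the FaceRec directory (the filename mirror.py below is just a stand-in for whatever you saved it as):

cd FaceRec
python mirror.py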

Mounting the Pi and Connecting the Motor


Mounting the Raspberry Pi to the frame was rather simple. I designed a small 90-degree elbow with one face having a hole and the other side completely flat. After 3D printing two of these, they can be attached with screws to the Raspberry Pi through its mounting holes (I used the two holes on each side of the GPIO pins).

I then proceeded to use super glue on the opposite faces of the 3D-printed elbows to glue the Pi just above the drawer on the frame. After letting the glue dry, I was able to remove or return the Pi to position simply and conveniently with just the two screws. I have the .stl for the elbow linked below.

Now simply connect the motor driver to the Pi with IN1, IN2, IN3, IN4 connecting to GPIO 14, 15, 18, 23 respectively. Finally, connect the 5v and Ground pins of the driver board to the 5v output and Ground pins of the Pi.

Here's a link to the Pi's Pinout for some reference: https://www.raspberrypi.org/documentation/usage/gpio/
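
Before running the full recognition script, it's worth verifying the wiring on its own. Here's a minimal sketch of my own (built from the same pin list and half-step sequence used in the script above) that just pulses the motor an eighth of a rotation:

import time
import RPi.GPIO as GPIO

GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)

ControlPin = [14, 15, 18, 23] # IN1-IN4, wired as described above
for p in ControlPin:
    GPIO.setup(p, GPIO.OUT)
    GPIO.output(p, 0)

seq1 = [ [1,0,0,0],
        [1,1,0,0],
        [0,1,0,0],
        [0,1,1,0],
        [0,0,1,0],
        [0,0,1,1],
        [0,0,0,1],
        [1,0,0,1], ]

# 64 of the 512 sequence cycles per rotation = 1/8 of a turn
for i in range(64):
    for halfstep in range(8):
        for pin in range(4):
            GPIO.output(ControlPin[pin], seq1[halfstep][pin])
        time.sleep(.001)

GPIO.cleanup()

If the motor just twitches instead of turning, the IN1-IN4 ordering is most likely off.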


Mounting the Camera


Mounting the camera was slightly less robust than mounting the Pi, but the method got the job done. After designing and printing a thin beam with 2 holes on each end, I attached the beam to the Raspberry Pi through its mounting hole. Then just attach the camera to the opposite end of the beam with another screw. Ta-da! It’s lookin’ pretty fly.


Creating and Mounting the Drawer-Moving-Mechanism


This step was made easy thanks to the ever benevolent gifts of the maker community. After a quick search on Thingiverse I was able to find a linear actuator created by TucksProjects (found here: https://www.thingiverse.com/thing:2987762). All that was left to do was slap it on an SD card and let the printer do the work.

I ended up going into Fusion 360 and editing the spur gear, since the shaft of my motor was too large for the one provided by TucksProjects. I have the .stl for that below. After the print was done, we just need to assemble it by placing the spur on the motor shaft, then attaching the motor and enclosure sides with 2 screws (making sure you put the rack in between before closing it up). I ended up having to cut an inch off of the rack so that it would fit between the drawer and the frame.

Now all that’s left is attaching the mechanism to the frame and drawer. “BuT hOW wiLL wE Do tHiS?” you ask...yup, say it with me: Super Glue. As shown in the above pictures, just place the mechanism against the bottom of the frame and push it up against the piece of wood that the drawer slides on. It is vital here that you try to get the rack/mechanism as parallel with the frame as possible so that when the mechanism moves it pushes the drawer straight and not at an angle. After the glue has dried, place some more glue on the edge of the rack and move the drawer into position and let it dry. Once complete we have a sturdy mechanism to slide our secret drawer in and out.


Adding Cardboard Behind the Mirror


In order to make the one-way film look more mirror-like, I found that it serves our purpose well to place cardboard behind the glass. The cardboard used is the one that came with the frame, but any piece cut to fit will work. This also ensures no light from the camera LED, the motor controller, or the Pi shows through the other side of the mirror. With everything in its place, use a pencil to mark where the camera sits on the cardboard. Then use a razor to cut a rectangle so that the camera can peek through when it’s in place.

Putting on the Final Piece


The last thing to do is to put on the front part of the drawer that was set aside earlier. Move the motor so the drawer sticks out. Then glue the front part on so that the drawer piece is centered (there should be a little bit of overhang on all sides). Then you can just hang it on a wall.

Finale


There you have it! There are several improvements that could be made, such as adding that push button, buying some better one-way film, and fixing that bug in the code, but all in all it gets the job done: it looks like a mirror, it recognizes the predetermined user's face, and it opens that cute little drawer. As always, I would love to hear your thoughts, questions, and memoirs in the comments down below.

Overall Rating: 10/10

Comments: #WouldNotTryAgain...unless I could follow this instructable ;)