Non-line-of-sight Imaging With a Photoresistor and Projector
by okooptics in Circuits > Cameras



You can generate an image of a scene using only a single photodetector and a light projector. The simplest way to do it is to scan one point at a time across the scene, record the light reflecting off the object at each position, and then rearrange all the measurements into a 2D image.
In this project, I'm going to show how you can construct images with the detector facing away from the scene, using both point scanning and structured illumination. This seems challenging because the camera or detector doesn't directly "see" the light from the scene, but it's not as bad as you may think. All we need to do is collect the light reflected off a diffuse surface. I'll also cover how to measure the 2D Fourier transform of a scene using compressed sensing.
All the ideas come from the fields of computer vision, dual photography, non-line-of-sight imaging, and compressed sensing.
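Here's a toy NumPy sketch of the point-scanning idea (the 16x16 scene is made up, just to show the bookkeeping; the real acquisition and reconstruction scripts come later in this project):

import numpy as np

# Toy simulation of single-pixel point scanning (hypothetical 16x16 scene).
# Each "projected" pattern lights up one pixel; the detector just sums
# whatever light comes back, so the measurement is the scene value there.
rng = np.random.default_rng(0)
scene = rng.random((16, 16))          # stand-in for the real object's reflectance

measurements = []
for y in range(16):
    for x in range(16):
        pattern = np.zeros((16, 16))
        pattern[y, x] = 1.0            # illuminate a single point
        measurements.append(np.sum(pattern * scene))  # bucket detector reading

# Rearranging the 1D measurement list back into a 2D grid recovers the image
image = np.array(measurements).reshape(16, 16)
assert np.allclose(image, scene)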
Here are some references for the work:
My first video on single pixel camera: https://www.youtube.com/watch?v=EE9AETSoPHw
Dual photography paper: Pradeep Sen, Billy Chen, Gaurav Garg, Stephen R. Marschner, Mark Horowitz, Marc Levoy, and Hendrik P. A. Lensch. 2005. Dual photography. In ACM SIGGRAPH 2005 Papers (SIGGRAPH '05). Association for Computing Machinery, New York, NY, USA, 745–755. https://doi.org/10.1145/1186822.1073257
Hadamard vs. Fourier patterns: Zhang Z, Wang X, Zheng G, Zhong J. Hadamard single-pixel imaging versus Fourier single-pixel imaging. Opt Express. 2017 Aug 7;25(16):19619-19639. doi: 10.1364/OE.25.019619. PMID: 29041155; PMCID: PMC5557330. https://pmc.ncbi.nlm.nih.gov/articles/PMC5557330/
Graham M. Gibson, Steven D. Johnson, and Miles J. Padgett, "Single-pixel imaging 12 years on: a review," Opt. Express 28, 28190-28208 (2020). https://opg.optica.org/oe/fulltext.cfm?uri=oe-28-19-28190&id=437999
Supplies

Arduino
Photoresistors
Resistors (330k and 1k)
Pushbutton
Lens (D = 12.5 mm, f = 3.7 mm): https://www.amazon.com/dp/B0BR5PYPQB?ref_=ppx_hzsearch_conn_dt_b_fed_asin_title_1&th=1
Projector. I used Philips PicoPix Micro+, but it looks to be discontinued. Any projector with an HDMI input will do. https://www.usa.philips.com/c-e/so/projectors/mini-projector/picopix.html
Blackout paper - https://www.amazon.com/dp/B0BWMRYJVZ?ref_=ppx_hzsearch_conn_dt_b_fed_asin_title_14
Wirewrap wire
Wirewrap tool
Helping hands
Soldering iron
Tape
Build Photodetector Module






To improve the light collection of the photoresistor, you can use the attached mount, which holds a lens above the detector. 3D print the parts, slide the photoresistor through, and press the lens into place. You may need pliers because it is a tight fit for the lens. Push the cap on top to hold the lens in place. Use wirewrap wire to connect the photoresistor leads to some male headers so they can go into a breadboard.
The lens is optional if you're okay with longer exposure times; the lens-less version of the design is also attached.
Enclosure for Blocking Light




You will need to block out the ambient light. I used an Ikea stool and built up a wall of blackout paper around the perimeter. Nothing beautiful, but enough to block the light. I then covered the entire setup with a black t-shirt. Cables ran out through the back of the little home I built for the system.
Acquisition Using Python and Arduino



The acquisition is set up using Python and the analog input pins on an Arduino. To control the Arduino from Python, you need the pyFirmata library. Upload the attached Arduino sketch called "StandardFirmata.ino" to your Arduino of choice.
The attached Python script loads images from a local folder and displays them full screen. With the projector connected to your computer, it will project those images onto your scene.
Connect as many photoresistors as you'd like, each with a resistor in series. Each photoresistor will provide a different image of the scene; more on that later. The series resistance controls the sensitivity of the setup. I used a 330 kOhm resistor.
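As a rough sanity check on the sensitivity, here's the voltage-divider math, assuming the photoresistor runs from 5 V to the analog pin and the fixed resistor runs from the pin to ground (your wiring may be flipped, which just inverts the trend):

# Rough voltage-divider check (assumed wiring: 5V -> photoresistor -> A0 -> 330k -> GND).
# More light lowers the photoresistor's resistance, which raises the voltage at A0.
V_CC = 5.0
R_FIXED = 330e3  # the 330 kOhm resistor from the supplies list

for r_ldr in (10e3, 100e3, 1e6):  # hypothetical bright / dim / dark photoresistor values
    v_out = V_CC * R_FIXED / (R_FIXED + r_ldr)
    print(f"R_LDR = {r_ldr/1e3:6.0f} kOhm -> A0 reads about {v_out:.2f} V")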
The acquisition starts when a pushbutton is pressed by the user. I connected the button to pin D3.
Multiple analog reads are acquired for every displayed image, averaged, and stored in an array. This array is then saved locally for processing after the acquisition is completed.
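Here's a stripped-down sketch of that acquisition loop using pyFirmata and OpenCV. The serial port, pattern folder, settle time, and number of averaged reads are assumptions; the attached script is the full version.

import glob
import time
import numpy as np
import cv2
from pyfirmata import Arduino, util

board = Arduino('/dev/ttyUSB0')            # assumed serial port
it = util.Iterator(board)
it.start()
ldr = board.get_pin('a:0:i')               # photoresistor divider on A0
button = board.get_pin('d:3:i')            # pushbutton on D3
ldr.enable_reporting()
button.enable_reporting()

# Full-screen window on the projector's display
cv2.namedWindow('proj', cv2.WINDOW_NORMAL)
cv2.setWindowProperty('proj', cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

while not button.read():                   # wait for the pushbutton press
    time.sleep(0.01)

data = []
for path in sorted(glob.glob('patterns/*.png')):   # assumed pattern folder
    cv2.imshow('proj', cv2.imread(path))
    cv2.waitKey(200)                       # let the projector and photoresistor settle
    reads = []
    for _ in range(20):                    # average multiple reads per displayed image
        v = ldr.read()                     # 0..1 float, or None before the first report
        if v is not None:
            reads.append(v)
        time.sleep(0.005)
    data.append(np.mean(reads))

np.savez('acquisition.npz', data=np.array(data))
board.exit()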
Patterns for Acquiring Images



The images displayed by the projector depend on the type of reconstruction:
Point scanning
Hadamard patterns
Fourier patterns
The attached Python scripts will generate the necessary images for you and save them to a local folder; a sketch of the simplest case (point-scan patterns) is shown below. I'll discuss how the measurements are used for the reconstruction in another step. The folder where you save these images also needs to be set in the Python acquisition script.
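For example, a minimal sketch for generating point-scan patterns (the resolution, upscaled size, and output folder here are assumptions; the attached scripts cover all three pattern types):

import os
import numpy as np
import cv2

N = 32                                   # hypothetical reconstruction resolution (N x N)
out_dir = 'patterns'                     # must match the folder used by the acquisition script
os.makedirs(out_dir, exist_ok=True)

idx = 0
for y in range(N):
    for x in range(N):
        pattern = np.zeros((N, N), dtype=np.uint8)
        pattern[y, x] = 255              # one white point per frame
        # Upscale with nearest-neighbor so the projector shows crisp blocks
        big = cv2.resize(pattern, (768, 768), interpolation=cv2.INTER_NEAREST)
        cv2.imwrite(os.path.join(out_dir, f'pattern_{idx:05d}.png'), big)
        idx += 1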
Point Scan With Photoresistor







The data from the acquisition will be saved to an npz file that can be loaded by the attached reconstruction script. This processing is relatively straightforward: each measurement is simply placed in the corresponding pixel position.
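A minimal sketch of that reconstruction (the file and array names match the acquisition sketch above and are assumptions, not the attached script's exact names):

import numpy as np
import matplotlib.pyplot as plt

N = 32                                     # must match the pattern resolution
data = np.load('acquisition.npz')['data']  # one averaged reading per projected point

# Point scan: measurement k corresponds directly to pixel k in raster order
image = data.reshape(N, N)

# Normalize for display; a higher reading means more reflected light
image = (image - image.min()) / (image.max() - image.min())
plt.imshow(image, cmap='gray')
plt.axis('off')
plt.show()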
However, there is a deeper theory called dual photography (Helmholtz reciprocity) at play. Try placing the detector at any position while keeping the object fixed. You'll find that the reconstructed image is always from the perspective of the projector. And the lighting on the object is as if the photodetector is acting like the light source. Like a little flashlight shining only on the side that the photodetector faces. The roles of the illumination and detector seem to be flipped!
In the examples above, I have three different detector positions. When I color code and overlay the reconstructions, you can see that they overlap perfectly.
Downloads
Non-line-of-sight Imaging


I really mean try putting the detector anywhere! The detector doesn't even need to be facing the scene. As long as enough light from the diffuse reflections of the scene makes it to the detector, you can construct an image of the scene from the perspective of the projector. It seems unbelievable at first, but when you think about it the paper is just acting like a diffuser in front of the detector. All we need to do is collect the light from the illumination pattern. It doesn’t matter if it’s scattered around before it reaches the detector.
In the example images above, I have detectors at two positions: one facing the scene and the other facing a piece of paper. The reconstructed images are from the same perspective, although the non-line-of-sight one is noisier.
Downloads
Compressed Sensing - Fourier Patterns







In addition to non-line-of-sight imaging, this setup can be used for compressed sensing with structured illumination. It is less intuitive because multiple points are illuminated simultaneously and there’s only a single point measurement, but if multiple patterns are used you can actually reconstruct an image.
There are two popular pattern types used: sinusoidal and Hadamard patterns. In this step, I'll describe the use of sinusoidal patterns.
Most of the time, we compute the Fourier transform of a digital image we've loaded onto a computer. When we plot the magnitude of the 2D Fourier transform, we learn which spatial frequencies make up the image. Each point in the plot corresponds to a unique pair of frequencies in the x and y directions.
The idea here is to actually measure the 2D Fourier transform of the scene directly using single photodetector measurements. And then reconstruct the image by applying the inverse Fourier transform. It seems amazing that a Fourier transform could actually be measured directly like this.
To get it to work, we need to measure both the amplitude and phase of each frequency component. In short, for each frequency you project multiple illumination patterns with different phase shifts. Each point in the Fourier transform is then calculated using the equation shown in the images above, where D is the photodetector measurement and the subscript corresponds to the phase of the illumination pattern. So each point requires four patterns.
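For reference, the standard four-step phase-shifting relation from the Fourier single-pixel imaging literature (see the Zhang et al. paper above) combines the four readings per frequency as F(fx, fy) ∝ (D_0 - D_pi) + j·(D_pi/2 - D_3pi/2). Here's a minimal reconstruction sketch under that assumption; the measurement ordering and file names are mine, not necessarily the attached script's:

import numpy as np
import matplotlib.pyplot as plt

N = 32                                     # assumed frequency grid (N x N, matching the patterns)
data = np.load('acquisition.npz')['data']  # assumed order: for each frequency, phases 0, pi/2, pi, 3pi/2
D = data.reshape(N, N, 4)

# Four-step phase shifting: real part from the 0/pi pair, imaginary part from pi/2 / 3pi/2
F = (D[:, :, 0] - D[:, :, 2]) + 1j * (D[:, :, 1] - D[:, :, 3])

# Assuming the patterns swept an N x N frequency grid centered on DC,
# shift the spectrum back before applying the inverse FFT
image = np.abs(np.fft.ifft2(np.fft.ifftshift(F)))
plt.imshow(image, cmap='gray')
plt.axis('off')
plt.show()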
To acquire the dataset, you generate the patterns using genFourierPatterns.py (Step 4) and then run the acquisition code (Step 3). The reconstruction code for the Fourier patterns is attached in this step.
A few of my results are shown above: a number 5 and a cat object. I've always considered the Fourier transform a pretty complicated operation that you could only do on a computer, so I thought it was really cool to physically measure it like this.
Compressed Sensing - Hadamard Patterns



I also wanted to try the same scene using Hadamard patterns. A couple of helpful comments on my previous project led me to read more carefully about using them. Hadamard matrices consist of 1's and -1's, not 1's and 0's like the binary images I was projecting onto the object in my last video. To get around this, you illuminate with both the pattern and its inverse and subtract the two measurements. Subtracting the inverse gave a slightly better result, but it was a bit underwhelming. I may still be making some mistakes in how I run the reconstruction.
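Here's a minimal sketch of that differential Hadamard scheme (the resolution and the pattern/inverse ordering in the saved data are assumptions):

import numpy as np
from scipy.linalg import hadamard
import matplotlib.pyplot as plt

N = 32                                      # assumed image resolution; N*N must be a power of 2
H = hadamard(N * N)                         # entries are +1 / -1

data = np.load('acquisition.npz')['data']   # assumed order: pattern 0, inverse 0, pattern 1, inverse 1, ...
D_pos, D_neg = data[0::2], data[1::2]
y = D_pos - D_neg                           # differential measurement recovers the +1/-1 weighting

# H is symmetric and H @ H = (N*N) * I, so the inverse transform is just H / (N*N)
image = (H @ y) / (N * N)
plt.imshow(image.reshape(N, N), cmap='gray')
plt.axis('off')
plt.show()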
Compressed Sensing With Non-line-of-sight



Finally, I wanted to try non-line-of-sight imaging using compressed sensing, so I moved the detector to face a piece of paper again. This seemed even more challenging because the illumination patterns aren’t spatially confined on the object.
But look at these reconstruction results from the sinusoidal and Hadamard patterns! We get an image of the scene from the same perspective. Again, when you think of the paper as just a diffuser in front of the detector, it isn't too crazy to imagine this working. It just feels odd to have the detector facing away from the object of interest. In fact, the scene appears more broadly illuminated when the light reflects off the paper, so we can see the wall behind the cat more clearly than when the photodetector faces the scene directly.