Calisto

Abstract: A system to acquire heart rate via mobile phone camera with flash.
Resumo: Um sistema para adquirir taxa de batimentos cardíacos via câmera de celular com flash.

The concept: One of the biggest health issues in Brazil is the low quality and low speed of the reception process in hospitals, which is hazardous especially in emergencies, where life and death can be a matter of seconds. Plenty of time and workforce is spent on screening patients and transferring them to the appropriate location, and that is the problem this project tries to solve: an embedded system for self-attendance, self-evaluation, and medical consultation scheduling. The system will be capable of collecting vital data such as blood pressure, heart rate, height, and temperature, plus all the information needed for a complete screening based on already available medical protocols. This tutorial shows how to get the heart beat rate via a mobile phone camera.
What I'm Using

1. DragonBoard 410c or Raspberry Pi, because the project will have to deal with image processing, a stable database connection, and client-side requests.
2. USB hub: if you have a DragonBoard, the number of USB ports will not be enough, so unless you connect to it over SSH, a hub will be needed.
3. An Android device with a camera (not strictly needed; see the camera section below).
4. If you are making a native Android app or a desktop application, the Python source code will work the same way; the only differences are the hardware driver and the libraries you will use, which is covered later in the article.
Getting Things Done With the Camera:
As I'm working with an IP camera, I need to make the program take the data stream from the host and translate it into an image. We'll be using the urllib library to request the data, numpy to convert the stream into a matrix, and OpenCV to decode the matrix into an image object. This is the code (notice that if you're using Python 2.x, you will have to import the entire urllib instead of urllib.request):
import urllib.request as request # import urllib as request if in python 2.xx
import cv2
import numpy as np
url = 'https://your.ip:port/frame.jpg'
Doing the request:
imgResp = request.urlopen(url)
Translating to a matrix:
imgNp = np.array(bytearray(imgResp.read()), dtype=np.uint8)
And finally decoding:
img = cv2.imdecode(imgNp, -1)
This project uses the IP Camera app on the phone to serve the frames.
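If you prefer, the three steps above can be wrapped into a small helper that you call once per frame; this is just a sketch, and the function name grab_frame is my own, not something from the project:
import urllib.request as request # import urllib as request if in python 2.xx
import cv2
import numpy as np
def grab_frame(url):
    # request one JPEG frame from the phone and decode it into a BGR image
    imgResp = request.urlopen(url)
    imgNp = np.array(bytearray(imgResp.read()), dtype=np.uint8)
    return cv2.imdecode(imgNp, -1)
img = grab_frame('https://your.ip:port/frame.jpg')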
If you're using the PiCamera, you don't have to do any of this; just do this instead of all of the above:
from picamera import PiCamera
from picamera.array import PiRGBArray
camera = PiCamera()
rawCapture = PiRGBArray(camera)
camera.capture(rawCapture, format="bgr")
img = rawCapture.array
If you're using a webcam, just do:
cap = cv2.VideoCapture(0)
ret, img = cap.read()
Image Processing


We measure the heart pulse rate through the change of luminosity in the finger tissue when it is illuminated by a light source, in this case the camera flash. There are a few steps:
- Given an input image, create a new binary image that separates the image into a lighter and a darker region.
- Contour each of the regions to determine the area of the actual bright spot.
- Take the biggest contour in the image, because there will be some noise.
- Track that area over time, as a temporal signal.
- Test whether the last value was a valley.
- Measure the distances between the valleys and take the median of those distances as the period. The inverse of that value gives the frequency.
- The algorithm measures time in seconds, so the frequency must be multiplied by 60 to get the value in beats per minute.
Because the heartbeat signal has two valleys per period, the result must then be divided by two (a sketch of this valley-to-BPM step is given at the end of this section). In OpenCV, you create the binary image by calling:
thresholdValue = 210
ret, thresh = cv2.threshold(img, thresholdValue, 255, cv2.THRESH_BINARY)
The threshold function takes an image and compares each of its values (every RGB channel of every pixel) against thresholdValue; because of the cv2.THRESH_BINARY flag, values above the threshold become 255 and the rest become 0, so the result is a binary image.
I'm using 210 as the thresholdValue.
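To see what cv2.THRESH_BINARY does in practice, here is a tiny made-up example (the pixel values below are invented just for illustration):
import cv2
import numpy as np
sample = np.array([[100, 200, 220],
                   [250, 180, 230]], dtype=np.uint8)
ret, binary = cv2.threshold(sample, 210, 255, cv2.THRESH_BINARY)
print(binary) # [[  0   0 255]
              #  [255   0 255]]
Every value above 210 becomes 255 and everything else becomes 0, which is exactly what happens to the camera frame, channel by channel.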
Then the thresholded image is converted to grayscale, so it can be used by the findContours function.
gray = cv2.cvtColor(thresh, cv2.COLOR_BGR2GRAY)
To find the contours, the function used is:
contours, hierarchy = cv2.findContours(gray, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
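From that list of contours, the "take the biggest contour" step of the list above can be done like this; a small sketch, assuming findContours returned at least one contour:
# the largest contour corresponds to the bright spot lit by the flash; smaller ones are noise
largest = max(contours, key=cv2.contourArea)
area = cv2.contourArea(largest) # this area is the temporal value tracked frame by frame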
The heuristic part is described in detail in the repository https://github.com/whoismath/sancathon2018, where the getHeartRate function shows how we use it.
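For reference, here is a minimal sketch of the valley-to-BPM computation described in the steps above. It is not the repository's getHeartRate implementation; the function name estimate_bpm and the simple valley test (a sample smaller than both neighbours) are my own simplifications:
import numpy as np
def estimate_bpm(areas, times):
    # areas: bright-spot area of each frame; times: capture time of each frame, in seconds
    valleys = []
    for i in range(1, len(areas) - 1):
        # simple valley test: the sample is smaller than both of its neighbours
        if areas[i] < areas[i - 1] and areas[i] < areas[i + 1]:
            valleys.append(times[i])
    if len(valleys) < 2:
        return None # not enough data yet
    period = np.median(np.diff(valleys)) # median distance between valleys, in seconds
    frequency = 1.0 / period # valleys per second
    bpm = frequency * 60.0 # valleys per minute
    return bpm / 2.0 # there are two valleys per heartbeat, so divide by two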