Migraine Alert

by SteevAtBlueDust in Circuits > Microcontrollers

Migraine Alert

20161012_105458.jpg

The plan

By monitoring the environment, we plan to build a data set and analyse which situations are most likely to trigger a migraine in specific people.

The How

We sample the supplied sensors (temperature, light, sound) every second and store the results on the local SD memory card. Additionally, a push-button is included for the user to indicate that a migraine is beginning.

At the end of each day/week the user synchronizes this data with their own account in the cloud.

All submitted data is anonymized and analysed using collaborative filtering (or similar techniques) to look for correlations between your environmental data and the onset of a migraine. Since every person's physiology can be slightly different, the system uses the anonymized user ID as a key factor, and draws connections between similar people to build an 'alert profile' for your physiological body type.

This 'alert profile' is downloaded to your device so that, in future, it can warn you ahead of time.

A feedback loop is thus introduced to improve the results over time.

Important notes

We will be updating this instructable in real time, as we build it, so expect missteps, wrong assumptions, and so on. Please use the comments to help out if you are able, especially with the data analysis section.

Unboxing

pic3.jpg
pic2.jpg
pic4.jpg

As with all new toys, the unboxing is the first joy: sorting through each box and packet, looking for something new and interesting to play with. Here, the kit doesn’t disappoint. Even the humble LED gets its own bag, its own mini board, and a cable to connect it. Sure, it’s overkill, but it’s something else to open.

Making the Hardware

pic5.jpg
pic6.jpg

Next is the physical build. The Edison chip is on its own board that slots into the main Arduino board and is fixed by two bolts. They’re small, but manageable. Next is the shield, which also slots into the main Arduino board. Alas, in this case it doesn’t go all the way down and (despite this being normal) might cause consternation for the first-timer. The kit also comes with four screws and plastic tubes; they look like they could be buffers between the shield and the board, but are actually feet for the whole unit. After this, connect the cables and you’re ready. Just to note, there are two USB cables to connect, one for the Edison and one for the Arduino, and both are generally needed. Additionally, the weight of the cables tends to make the Edison tip over, so if you can, use Blu-Tack or screw the feet into a piece of wood to make it a bit more sturdy.

Connecting and Set-up

20161112_130952-1.jpg
Screen Shot 2016-10-29 at 12.27.52.png
Screen Shot 2016-10-29 at 13.00.57.png

This section should really be renamed “Tribulations with Edison”. We are lucky enough to have access to machines with different operating systems; if you don’t, you might face some trouble.
Depending on your luck, connecting the Edison to your network is either simple or impossible. To date, we’ve been in both positions!

The first setup was easy enough, all done through the Intel Edison Setup Software (Mac OSX). Download and flash the latest firmware, then connect to your home WiFi and enable ssh. All that was done (almost) without hassle. For some reason, when setting up ssh, we had to save the board name, then get back out to the main menu, skip the name-setting stage, and set the password; otherwise, setting the password would fail. Also note that you get root privileges when you set up ssh, so you can ssh root@the-ip-stated-in-wifi-section. (Being able to ssh into a machine as root is usually considered a big no-no in general security circles, so it’s surprising that it is enabled by default on an IoT device, given the security concerns surrounding IoT at the moment.)

Things became complicated when we wanted to put the board on a different network. It was impossible to detect new networks using the Setup Software. We did notice the board was broadcasting its own WiFi access point, so we were still able to ssh in, but with no Internet connection that’s not much use for our Migraine Alert system. Instead, you have to go through a serial connection and configure the WiFi from there. However, on Mac OSX the /dev/cu.usbserial-* device, as described here, wasn’t detected. We had to use a Linux box and follow the steps described here for it to work.

First Code

20161120_121600.jpg

Naturally, the first thing we wrote was based on an existing sample. It reads the values of the sensors and reports them via the web. All code starts with the basic setup boilerplate to initialize the pins, and serve the data on port 1337:

var mraa = require('mraa');

var pinLight = new mraa.Aio(0);
var pinSound = new mraa.Aio(1);
var pinTemperature = new mraa.Aio(2);

var http = require('http');
http.createServer(function (req, res) {
        res.writeHead(200, {'Content-Type': 'text/json'});
        res.end(JSON.stringify({lightLevel:pinLight.read(), soundLevel:pinSound.read(), temperatureLevel:pinTemperature.read()}));
}).listen(1337, '192.168.0.5');

The observant will note that the sensors are placed in alphabetical order: Light, Sound, Temperature on pins 0, 1, and 2. It just helps as a reminder in case of problems.

You run it with the (obvious):

node main.js

The results at http://192.168.0.5:1337 looked like this:

{"lightLevel":271,"soundLevel":83,"temperatureLevel":448}

We also experimented with the backlit LCD and LEDs to display this information. That was equally simple:

var lcd = require('jsupm_i2clcd');
var display = new lcd.Jhd1313m1(0, 0x3E, 0x62);
display.setCursor(0,0);
display.write('Hello Edison');

Now our workload splits into two even halves. The first half is to build a full NodeJS app to collect and store these parameters which, every hour, uploads them to a remote server. The second half is to build that server!

Note: In order for NodeJS to serve this data to external clients (such as our web browser) it needs to listen on a specific port and IP address. In our case the IP address is 192.168.0.5, but yours may vary. You can determine this by using bash and typing:

ifconfig wlan0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'

Alternatively, you can determine it automatically with the NodeJS “os” package.
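For instance, a minimal sketch using the built-in os module (the wlan0 interface name is an assumption, matching our Edison):

var os = require('os');

// Find the first external IPv4 address on the given interface.
function getIPAddress(ifaceName) {
    var iface = os.networkInterfaces()[ifaceName] || [];
    for (var i = 0; i < iface.length; i++) {
        if (iface[i].family === 'IPv4' && !iface[i].internal) {
            return iface[i].address;
        }
    }
    return null;
}

console.log(getIPAddress('wlan0')); // e.g. 192.168.0.5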

If you ever lose the IP of the Edison (such as when there's a power cut and it claims a different address from your DHCP server) you can always scan the network for open SSH ports with:

nmap -p 22 --open -sV 192.168.0.0/24

Preparing the Server

amazon-aws-logo.jpg

We’ve adopted a basic LAMP stack with PHP as the primary language, running on a micro AWS instance. On Ubuntu you’ll need to install the following packages.

apt-get install apache2  mysql-server php libapache2-mod-php php-mcrypt php-mysql php-curl php7.0-mbstring

The default server is located at http://52.16.55.190/ but there is no reason why you can’t set up your own local server so that the data never leaves your own data perimeter. However, that eliminates the chance of your migraine patterns helping others.

In fact, there is no reason why it couldn’t run on the Edison directly.

We also set up a free dyndns domain, http://migrainealert.hopto.org/, through NoIP, for a slightly more user-friendly name. However, since the intention is for users to interact with the system through the web page on their Edison, this is almost a moot point.

Our final piece of server preparation was to get an SSL certificate. Given the nature of the experiment, it didn’t seem worth buying one. However, LetsEncrypt provide free certificates. Unfortunately, despite free being the best price in town, we were unable to get this working and suffered repeated ‘TLS-SNI-01 challenge’ errors. With time against us, we decided to drop support for https.

Once all the system software is installed, you need only prepare the database. To use our settings, log into MySQL and type:

CREATE DATABASE ma_data;
CREATE USER 'newuser'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON ma_data . * TO 'newuser'@'localhost';
FLUSH PRIVILEGES;

And, as always, replace the username and password with your own.

We can now prepare the Edison client.

Basic Architecture

pic1.jpg

As you’d expect from the last step, this is a simple client-server affair. The Edison runs a client which collects the sensor data and, every hour, synchronizes it with a remote server. Due to time constraints this process was much simplified, such that no checks are made for duplicate or omitted data.

Security and privacy are both important. To achieve the former, there is a token-and-secret pair which is shared only by server and client. These act like a traditional username-password pair, and are used to retrieve a JWT API token (https://jwt.io/), used for all subsequent requests. It’s not perfect, since the token can be duplicated and (potentially) sniffed, but it’s good enough for our initial experiment.

Privacy is ensured by never asking for real data, so the machine token and user token have no connection to the real-world machine or user name.

The implementation involved writing a small PHP library, called UltraTest, which could call the API with the appropriate parameters and test the results. For example:

$api = new \Ultra\Test\API("http://52.16.55.190/api/");
$apiRegisterTest = $api->makeGetTest("Register machine", "register.php",
        function($test) {
                global $gToken;
                $test->setURLParameter('token', $gToken);
        }, function($result) {
                global $gSecret;
                $gSecret = $result->result->secret;
                return $result->success == 1 ? true : false;
        });

The test suite, of one test, is then built and run via:

$testing = new \Ultra\Test\TestSuite();
$testing->addTest($apiRegisterTest);
$testing->test();

From here it was (comparatively) simple to add tests to create users, environments, and data. You can review this test code on github shortly!

Then, after writing the tests, we wrote the API, following the ideals of test-driven development (TDD). Yes… really… I know it’s a hack, and software engineering best practice is meant to go out the window, but for this we created the tests and _then_ the code. (However, with time against us, we soon reverted to hacking, as you’ll see!)

For the API side, we used Ultra\REST\API, another “as small as possible to do the job” library created as part of the project. The original code is at: https://github.com/MarquisdeGeek/ultraweb with the 'Migraine Alert'-specific version at github.

With that in place, the specific API calls could be written. Creating, or reviewing, a user uses the provided BasicRecord functionality:

$api = new BasicAPI($_GET);
$api->connect($gConfigDB);
$u = new \Ultra\REST\BasicRecord("users", $api, new \Ultra\REST\Request());
$u->process();

Throughout the API we have chosen to use RedBean. This is a low-configuration ORM for PHP that provides a low barrier to entry for apps such as this. Once you connect to the database with:

\R::setup( 'mysql:host=localhost;dbname=' . $config['dbname'], $config['username'], $config['password'] )

All subsequent calls will “just work”. So, to create a new record (such as a user) we call:

        public function createNew($request) {
                $record = \R::dispense($this->table_name);
                $record->name = $request->getParameter("name", "No name");
                $record->token = uniqid();
                $record->machine_token = $this->machine_token;

                $id = \R::store($record);
                $this->api->success("Registered new entry, " . $record->name, $this->presentData($record));
        }

This is common in many RESTful applications. However, we never had time to abstract away the MA-specifics of ‘machine_token’ and ‘token’, or the reliance on a single data field, ‘name’.

However, the Ultra server code (api/ultra/server.php) provides a fairly clean process method that determines whether the API call intends to create a record, view it, or list all those available. Consequently, we can override the methods createNew, getData, or listData respectively, if appropriate.

Note: The Ultra prefix is named after the Ultra web server: a server, NoSQL DB, and programming language in under 64K! Disclaimer: I also wrote it! Thus, I decided to umbrella all these ‘ultra small tools’ under a single moniker.

So to recap, when a machine is initialized the process is:

  1. Generate a machine token, unique to this machine, from its MAC address and the current time (a sketch of this follows the list).
  2. Register the Edison with the server using this token.
  3. The server generates a secret, and returns it to the Edison.
  4. The Edison uses the token and secret to generate an API token in the JWT format.
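For illustration, a machine token along these lines could be derived with Node’s built-in crypto and os modules (a sketch only, with the wlan0 interface name assumed; our actual hashing code may differ):

var os = require('os');
var crypto = require('crypto');

// Derive a machine token from the MAC address and the current time.
// Falls back to a random string if no MAC address can be found.
function makeMachineToken() {
    var wlan = os.networkInterfaces()['wlan0'] || [];
    var mac = wlan.length ? wlan[0].mac : String(Math.random());
    return crypto.createHash('sha256')
                 .update(mac + Date.now())
                 .digest('hex');
}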

From this point, the server will accept all other requests, provided the JWT token is valid. Before data can be sent, we need to generate two further tokens:

  1. The Edison creates a user token, and registers it with the server.
  2. The Edison creates an environment token, and registers it with the server.

This gives us three tokens, for machine, environment, and user. In short:

  1. The machine means the physical Edison box.
  2. The environment means the location in which the Edison is installed. If the machine moves between locations (e.g. home and office), then you should configure two environments and add a switch on the front to indicate which location you’re in.
  3. The user is the person using the machine to track their migraines.

This separation allows for several pieces of functionality for free:

  • Many users may use the same box, such as a family or workplace.
  • The machine may be moved without making the data irrelevant. Indeed, by knowing the locations, analytical software might be able to deduce a causal link between the migraine and location.

Each minute, both the environment and user parameters are sent to the server. At the moment, the only parameter the user is able to send is their current migraine pain level. We could add a second potentiometer so they could indicate their stress level, for example. Similarly, we could enhance the environment stats to include air pressure or quality. However, in doing so the analytics code would become more complex, since some users would have 3 variables affecting migraine while others have 4. So, for now, we limit ourselves.

Preparing the Edison Client - the Circuit

P1000311.JPG
P1000315.JPG

The advantage of the Grove shield for Edison is that you don’t have to worry about wiring; you plug the components in and they just work - like magic! (Well, we took a little time to figure out why the LCD wasn’t displaying text properly during our test phase; this was because the shield was running on 3.3V instead of 5V. There is a small white switch on the side of the shield that allows you to toggle between the two.)

For Migraine Alert we need: a light sensor, a sound sensor (NOTE: Grove also make a “loudness” sensor; this is not the one we have), a temperature sensor, and a rotary potentiometer for user input. We’ll also add an LCD for feedback. Intel have a very comprehensive list of available components and libraries at https://software.intel.com/en-us/iot/hardware/sen... In node, we will mostly make use of the MRAA and JSUPM_GROVE packages to interact with the sensors.

For convenience, we’ll keep the same (alphabetical) order as our test run, so our analog sensors Light/Sound/Temperature are connected respectively to A0, A1, A2 on the shield.

The rotary potentiometer is connected separately, on A3, because it takes user input rather than a passive environmental reading, so it doesn’t respect the alphabetical order.

Preparing the Client - Node.js

P1000319.JPG
P1000321.JPG
P1000322.JPG
P1000323.JPG

Working outside of the Edison

Writing code directly on the Edison through ssh isn’t the best solution, so to test the API and the rest of the code we need to be able to run it on a regular computer, with a more convenient text editor than vim. The following describes the process of writing the code, with highlighted snippets. The full code to run on the Edison is available on github.

Since the MRAA package only runs on the Edison, we can start by checking if it is available:

try {
   mraa = require('mraa');
   ENV = 'EDISON'; setupPins();
} catch(e) {
   console.error("You're not running this on the Edison");
   CONFIG.ip = 'localhost';
   sensorInterval = setInterval(getSensorsValue,1000);
} 

In this case we change the global variable ENV to ‘EDISON’ only if MRAA is present; otherwise it keeps its default value, ‘LOCAL’. That way, in getSensorsValue, we can either take an actual reading or just return a random number for testing:

function getSensorsValue() {
   var sensorData = {};
   if(ENV === 'LOCAL') {
        sensorData.sound = Math.round(Math.random()*100);
        sensorData.temp = Math.round(Math.random()*200);
        sensorData.light = Math.round(Math.random()*150);
    } else {
        sensorData.sound = soundSensor.read();
        sensorData.temp = tempSensor.value();
        sensorData.light = lightSensor.value();
    }
   sensorData.timestamp = Math.round((new Date().getTime())/1000);
   envData.push(sensorData);
 }
 

We collect the environment data in an array (envData) that we’ll send through the API at a set interval. For local testing, we can log values to the console. To be able to send this collected (test) data to the main server, we first need to create tokens, as described in the Basic Architecture section above.

The API

For convenience, we moved all the possible endpoints to a JSON file, structured like so:

{
 "serverIP": "52.16.55.190",
 "endpoints": {
   "getSecret": {
     "path": "/api/register.php",
     "method": "GET",
     "token": true,
     "secret": false,
     "apiToken": false,
     "resultParam": "secret"
    }
  }
}

This structure allows us to have only one function call for every endpoint, with the parameters varying the specifics of each call. We keep API interactions in a separate file from the main index.js, exposing only the functions needed for communication between the two in module.exports. In api.js, we start by loading the endpoints so we can interact with the API:

var fs = require('fs');

function loadAPI(callback) {
  fs.readFile('endpoints.json', 'utf8', function(err, res) {
    if (err) throw err;
    else {
      _API = JSON.parse(res);
      loadConfig(callback);
    }
  });
}

We save the result in an _API variable, available throughout the file.

Once this is done, we load the configuration file. A config.json is created the first time the node application is run, provided it doesn’t already exist. It will contain the machine IP, its token, its secret, and the API token.
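By way of illustration, a completed config.json might end up looking something like this (the field names are inferred from the code; the values are made up):

{
  "ip": "192.168.0.5",
  "token": "5f2b9c...",
  "secret": "a81f30...",
  "apiToken": "eyJhbGciOi...",
  "environment": "3f9d...",
  "user": "77ab..."
}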

function loadConfig(callback) {
  fs.readFile('config.json', 'utf8', function(err, res) {
    if (err && err.errno === -2) { // ENOENT: no config file yet, so create one
      setConfig(callback);
    } else {
      conf = JSON.parse(res);
      callback();
    }
  });
}

In the above snippet we try to load config.json, and run setConfig if it doesn’t exist. setConfig makes various API calls, and finally saves all the tokens mentioned above to a file. The API calls are all made through one single function, which checks for the various parameters defined in the endpoints.json configuration:

function callEndpoint(endpoint, params, callback) {
   var path = endpoint.path;
   var callParams = [];
   var headers = {'Content-Type': 'application/json'};

  if(endpoint.token) {
     callParams.push("token=" + params.token);
  }

  if(endpoint.secret) {
     callParams.push("secret="+params.secret);
  }

  if(endpoint.apiToken) {
     headers['Auth'] = 'Bearer '+ conf.apiToken;
     if(endpoint.method === 'GET') {
        for (var param in params) {
           if(!params.hasOwnProperty(param)) continue;
           callParams.push(param +'='+params[param]);
        }
     } else { //bit hacky, but required for post
        callParams.push('new=new'); 
     }
  }

  if(callParams.length > 0) { 
    path += '?' + callParams.join('&'); 
  }

  var options = { 
      host: _API.serverIP, 
      path: path,
      method: endpoint.method,
      headers: headers 
   };

  var req = http.request(options, function(response) {
     var body = '';
     var result = null;

     response.on('data', function(d) {
        body += d; 
     });

     response.on('end', function() {
        if(endpoint.resultParam === null) {
           result = JSON.parse(body).description;
        } else {
           result = JSON.parse(body).result[endpoint.resultParam];
        }
        callback(result);
      });
   });

   if(endpoint.method === 'POST') {
      console.log('post data');
      req.write(params);
   }
   req.end();
 } 
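To give a flavour of how this is used, setConfig might chain these calls roughly as follows (a sketch: the getApiToken endpoint name and the makeMachineToken helper are assumptions, and our real code adds error handling):

function setConfig(callback) {
   // Register the machine token, trade the token + secret for a
   // JWT API token, then persist the lot to config.json.
   conf = { token: makeMachineToken() };
   callEndpoint(_API.endpoints.getSecret, { token: conf.token }, function(secret) {
      conf.secret = secret;
      callEndpoint(_API.endpoints.getApiToken, // assumed endpoint
            { token: conf.token, secret: conf.secret }, function(apiToken) {
         conf.apiToken = apiToken;
         fs.writeFile('config.json', JSON.stringify(conf), callback);
      });
   });
}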

Saving device configuration

After all the initial API calls have been made we have:

  1. Generated a token (we started testing with a randomly generated string, later moving to a hash of the MAC address and a timestamp)
  2. Got a secret from that token
  3. Got an API token in JWT format, necessary for all future calls

We save these in the config.json file. However, to be able to sync the data, we still need a user and an environment name to identify the machine and tie the data to. So, in index.js, we check whether the configuration already has an environment token and a user token; if not, we set them before going further. Note: at the moment we’re hard-coding the user and machine names, but the aim is to create a web interface that the user can access on first setup to choose them.

function setupEnvironment() {
  CONFIG = CONFIG || api.config();
  if (!CONFIG.environment) {
     api.setConfigParam('setEnvironment', {'new':'new', 'name': 'Edison'}, function(envt){
        CONFIG.environment = envt; setupEnvironment();
     }); 
  } else if (!CONFIG.user) {
     api.setConfigParam('setUser', {'new': 'new', 'name': 'EdisonUser'}, function(user){
        CONFIG.user = user; 
        setupEnvironment();
     });
  } else { 
     setMachine();
  }
} 

Sending data

Depending on whether the Node application is running on a test machine or the Edison, the frequency of the data collection will vary. We set the frequencies as variables:

var _SENSOR_INTERVAL, _SYNC_INTERVAL; 

These are set, respectively, to 1 minute and 1 hour for the live version, and to 1 second and 30 seconds for testing.
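A minimal sketch of the assignment, reusing the ENV flag from earlier (the constants are ours; any sensible values will do):

if (ENV === 'EDISON') {
   _SENSOR_INTERVAL = 60 * 1000;      // read the sensors every minute
   _SYNC_INTERVAL = 60 * 60 * 1000;   // sync with the server every hour
} else {
   _SENSOR_INTERVAL = 1000;           // every second when testing locally
   _SYNC_INTERVAL = 30 * 1000;        // sync every 30 seconds
}

When the synchronisation timeout has run (either 1 hour or 30 seconds), we send all the data collected with: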

function syncData() { 
   var data = envData;
   envData = [];   // reset the shared buffer; 'var' here would shadow it and break the copy above
   var data_sync = { 
      "environment": {
         "token": CONFIG.environment, 
         "data": data
      }, 
      "user": { 
         "token": CONFIG.user,
         "environment": CONFIG.environment,
         "data": []
      }
   };

   api.sendData('sendData', JSON.stringify(data_sync), function(results){
      setTimeout(syncData, _SYNC_INTERVAL);
   }); 
} 

data_sync is a JSON object constructed as required by the API.

Note: in the above we empty envData and userData immediately, to avoid duplicate data. This will need to be handled better (with data_sync potentially saved to a file), so as not to lose collected data if the connection fails or the API server is unresponsive. The sendData function calls the callEndpoint function described above.

Adjustments for the Edison

Now that we have a basic application that sets up the API and collects and sends data, we can upload it to the Edison. Our migraine-alert folder containing the app will be in the /code folder at the root. By running `node index.js` in that folder, we can now set up the Edison as our migraine monitor. Instead of doing a basic analog read of the sensors, we used the jsupm_grove package to translate the light and temperature data into meaningful values (lux and ºC). At this stage, we still needed a way for the user to record the occurrence of a migraine. We decided to use a potentiometer (or Rotary Angle Sensor), so migraine sufferers could also indicate the intensity of the pain at the moment of reporting. For the reading, we used the GroveRotary function on analog pin 3, converting the absolute reading into a percentage. The next step is to add an LCD display, so the user is able to know what percentage they’re recording.
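For reference, reading the calibrated values looks roughly like this (a sketch; the pin numbers follow our wiring on A0-A3, and the percentage conversion is illustrative):

var mraa = require('mraa');
var groveSensor = require('jsupm_grove');

// Calibrated readings via jsupm_grove; the microphone stays as a raw
// analog read, since there is no Grove class for it (see the Data
// Representation step for the dB conversion).
var lightSensor = new groveSensor.GroveLight(0);   // lux
var soundSensor = new mraa.Aio(1);                 // raw analog value
var tempSensor = new groveSensor.GroveTemp(2);     // ºC
var rotary = new groveSensor.GroveRotary(3);       // user input

// The Grove rotary sweeps roughly 300º, so degrees map to a percentage.
function getPercentMigraine() {
    return Math.min(100, Math.round(rotary.abs_deg() / 3));
}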

LCD setup
As mentioned, users wouldn’t be able to record their migraine accurately if they didn’t know what value the angle represented. Thus, we added an LCD display that shows the machine IP from the configuration file, or the pain level when there is one. We’ve also made the most of the RGB backlight and added colour feedback based on the pain level. It is worth noting that the RGB colours we’ve set aren’t pure red or green, as we wouldn’t want to cause more pain to someone already suffering from a migraine. Also, over 90% perceived pain, we switch the backlight off, as sources of light can be very aggravating at this stage.
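The backlight logic amounts to something like the following (illustrative RGB values; our exact choices live in the repo):

// Muted colours only: soften green at low pain towards red at high,
// and kill the backlight entirely above 90% perceived pain.
function updateBacklight(display, percent) {
    if (percent > 90) {
        display.setColor(0, 0, 0);                 // backlight off
        return;
    }
    var level = percent / 90;
    display.setColor(Math.round(90 + 70 * level),  // gentle rise in red
                     Math.round(160 - 70 * level), // gentle fall in green
                     80);                          // constant soft blue
}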

The code managing the LCD has been written in a separate file (found at https://github.com/Lily2point0/migraine-alert/blo...) and, in the main application, changes have been made to regularly check the potentiometer’s value. This occurs at the same time as setting up the other sensors, like so:

LCDDisplay.initDisplay(CONFIG.ip);
LCDInterval = setInterval(function() {
    var val = getPercentMigraine();
    LCDDisplay.setPotValue(val);
}, 100);

In the future, the LCD could be used to alert users to an upcoming migraine, or for the initial Edison set-up, as well as for reminders to drink water throughout the day (for example).

Startup script

The problem we face, now that the code is running on the Edison, is that it stops when we close our ssh connection. We followed Michael Kuehl’s guide to configuring a startup service, with only a few tweaks. Specifically, we:

  • Changed description
  • Updated path to the application (in our case /home/root/code/migraine-alert/index.js)
  • Changed user from Nobody to root
  • Updated Working Directory (/home/root/code/migraine-alert/)
  • Removed Environment, as it’s not needed in this case

Our updated migraine-alert.service looks like so:

[Unit] 
Description=Migraine Alert Service 
Wants=network-online.target 
After=network-online.target

[Service] 
ExecStart=/usr/bin/node /home/root/code/migraine-alert/index.js 
User=root 
Restart=on-failure 
RestartSec=10 
WorkingDirectory=/home/root/code/migraine-alert/

[Install] 
WantedBy=multi-user.target
 

Finding all the needed modifications was a matter of trial and error, and we had to test with the original node project on the board, as Migraine Alert didn’t yet have a web page with which to check that the Edison was still running the Node server. Unfortunately, in the midst of all the changes, the old path stayed cached in the service, which ran for a few days before we noticed that no data was being gathered. So, if you’re making amendments to your service, do run:

systemctl stop migraine-alert.service 

And also

systemctl disable migraine-alert.service 

Before you run `systemctl daemon-reload` and restart the service with:

systemctl enable migraine-alert.service 
systemctl start migraine-alert.service 

We would advise leaving a console.log in your Node application that will be visible when typing:

systemctl status migraine-alert.service 

All that having been fixed, we now have data being recorded through the API every hour, which can be visualised on a graph.

Data Representation

Screen Shot 2016-11-29 at 07.50.38.png
Graph_real_data.png
Screen Shot 2016-12-02 at 15.10.16.png

Data visualisation

In order to show users their collected data, we will use D3.js, a JavaScript library. D3 allows us to manipulate and visualise data in SVG format (supported by all modern browsers, even mobile). In this way we will be able to toggle sections of the data on and off, scale our graph based on the value range of the data collected, and so on.

For more information and examples, please refer to https://d3js.org/

Barebones graph

The code for the graph can be found on Github

We started by plotting the graph with generated test data. We need to ensure that all the data fits on the graph without being cut off, which is why we set the extent of our y-axis to the combined range of all three sensors:

y.domain(d3.extent(function(array, names){
	var res = [];
	array.forEach(function(item){
		names.forEach(function(name){
			res = res.concat(item[name]);
		});
	});

	return(res);
}(env_result, properties)));

Note: we didn’t initially have cross-origin requests set up on the server, so we had to save the data to a local json file to be able to manipulate it.

In the first image we’re using test data and default colours, but it gives us the basic structure of the graph; we still need to overlay the migraine data as well. Once we had collected real data, we could swap out the json files for data from our Edison, through an API call. The second image shows what it looks like with actual collected data.

You’ll notice that the sound graph is a bit strange. That’s because the sound sensor reading gets stored as a raw analog value rather than being translated to dB (we had a Grove microphone, not a loudness sensor). By using the microphone sensitivity (as found in the data sheet) we were able to compute a value that much more closely resembles dB. (Of course, we assumed a linear amplifier stage. Only a full calibration of the unit will give us accurate dB values, but these are good enough for now.)
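Our conversion was along these lines (a sketch: the sensitivity and reference figures here are placeholders, and the linear amplifier stage is assumed as noted):

// Approximate dB SPL from a raw 10-bit analog reading. 94 dB SPL
// corresponds to 1 Pa, and SENSITIVITY is the board's output voltage
// at 1 Pa (microphone sensitivity times amplifier gain - assumed).
var VREF = 5.0;          // ADC reference voltage
var SENSITIVITY = 0.25;  // volts at 1 Pa (placeholder)

function rawToDb(raw) {
    var volts = (raw / 1023) * VREF;
    return 94 + 20 * (Math.log(volts / SENSITIVITY) / Math.LN10);
}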

We have also had to restrict the amount of data displayed, due to the processing overhead now that we have a few weeks’ worth of readings. Therefore, we’re only showing the last 24 hours by default. Future versions will allow choosing the date range displayed on the graph. The latest version can be seen live at http://migrainealert.hopto.org

Envisaged UI Design

ma_design.png

Our aim is to enable users to view their stats and environmental data in the browser, providing that they know their machine token and its associated secret. From here they can retrieve a JWT token as normal. They could even enter their migraine information via a web browser.

Visually, we have chosen soft colours, compatible with migraine sufferers (we wouldn’t want to use bright pure colours, as in our example, as they could worsen a migraine).

In our proposed interface, the user would be able to toggle sensor previews on the graph, zoom and pan the graph to a day/week/month/all-time view (possible with D3), and view the recorded values at a particular date and time by hovering over the graph. We would aim to surface meaningful data, such as frequent triggers, migraine frequency, last recorded migraine, average migraine duration, etc.

We would also like to implement, as a future improvement, a calendar view, to give a better overview of migraine occurrence and recurrence.

Data Analysis

In its current state, the user is able to review the various data graphs and look for patterns. This is a manual process. The project hopes to analyse this information and present a textual description such as “The weather is getting cooler; this has increased your chance of a migraine by 20% in the last year”. Unfortunately, the time taken to get to this stage was greater than anticipated, as you might tell from the issues covered in this write-up. So the auto-analysis part is missing, although some research has been carried out into using chi-square tests and Cramér’s V. Both look reasonable to code, and could produce results for a single user.
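As an example of the latter, Cramér’s V is straightforward to compute from a contingency table of, say, binned temperature readings against migraine/no-migraine (a sketch only; nothing like this is wired into the project yet):

// Chi-square statistic and Cramér's V for a contingency table
// (rows: binned sensor values; columns: migraine yes/no).
function cramersV(table) {
    var rows = table.length, cols = table[0].length;
    var rowSum = [], colSum = [], n = 0;

    for (var r = 0; r < rows; r++) {
        rowSum[r] = 0;
        for (var c = 0; c < cols; c++) {
            rowSum[r] += table[r][c];
            colSum[c] = (colSum[c] || 0) + table[r][c];
            n += table[r][c];
        }
    }

    var chi2 = 0;
    for (r = 0; r < rows; r++) {
        for (c = 0; c < cols; c++) {
            var expected = rowSum[r] * colSum[c] / n;
            chi2 += Math.pow(table[r][c] - expected, 2) / expected;
        }
    }

    // V ranges from 0 (no association) to 1 (perfect association).
    return Math.sqrt(chi2 / (n * (Math.min(rows, cols) - 1)));
}

// e.g. cramersV([[30, 5], [20, 15]]) is roughly 0.32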

Next Steps

P1000324.JPG

There are many pieces of proverbial low-hanging fruit we can implement to improve the current entry.

At the moment, the LCD is used to display the IP address at boot-up and the user's migraine level. It could be split into three parts: display immediately, queued messages, and idle. This would require some extra code which displays the current pain level to reflect the user interface, whilst managing the case where queued messages appear (such as an hourly "remember to drink some water"), and also handling the case where "screensaver"-like behaviour is required. This latter state could display the current sensor values, or even receive push notifications from the server with helpful health tips and advice.
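A sketch of how those three states might hang together (the names are ours; none of this is implemented yet):

// Immediate messages pre-empt everything; queued messages (such as the
// hourly water reminder) show next; otherwise fall back to the
// "screensaver" pain-level display.
var immediateMessage = null;
var messageQueue = [];

function refreshDisplay(display) {
    if (immediateMessage) {
        display.write(immediateMessage);
    } else if (messageQueue.length) {
        display.write(messageQueue.shift());
    } else {
        display.write('Pain: ' + getPercentMigraine() + '%');
    }
}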

We are going to continue gathering data, and try to identify migraine triggers from it. Unfortunately for us (fortunately for them), our test subject didn’t have a migraine in the last week, so we have no recording of one yet. We then want to give access to the platform to a few more migraine sufferers so we can improve the prediction of migraine occurrences, and at the same time refine the warning/alert process on the Edison. We will also do our best to implement the UI described in the previous step.

Finally, we are going to look into 3D-printing a case for the Edison and its Grove shield, based on this design, but with an exposed space for the sensors. In the meantime, we can cut out the cardboard box the Edison came in, to see what modifications the 3D model requires.

Future Improvements

P1000325.JPG

While we have a very nice migraine reporting and review system, there is nothing to prevent the exact same unit being used for any other type of environmental monitoring. However, we feel the Edison has more to offer than what we’ve built so far.

There are many more sensors available which would improve the quality of the data, including an air quality sensor and an air pressure sensor. These are mentioned specifically since medical research has suggested there might be a link between air pressure and migraines.

With the Edison lowering the barrier to entry for such devices, it’s not unreasonable to have two machines - one for the office, and one for home. The software could support this without too much effort, since the abstractions and the distinction between machines, environments, and users were made explicitly at the start of the project. However, an even better solution would be to make the Edison into a wearable, so that it could be carried with the user at all times to monitor their environment _between_ the office and the home. The feasibility of this approach will depend on the current draw of the Edison, and the ergonomics of the casing. It might also be possible to use the person’s phone as a network hotspot/gateway to transmit their data to the remote server.

Outside of the unit itself, the ultimate goal was to collate the readings from different users for better analysis. This is still a possibility. However, in addition to just letting users connect their data, we could let them connect their real lives by incorporating a social network element where users exchange tips for dealing with migraines, and the medicines and approaches which help them, thereby allowing them to help others who have similar patterns of attack. This exchange could be facilitated in an anonymised format, if required.

Ultimately, the idea of crowdsourcing data for medical advances is fairly new, and this is only a first step. It has allowed us (and anyone else who’s been good enough to stick with us and read our learnings in this Instructable) to make that second step. It is indeed a good use for technology, and one we’re happy to have been a (very small) part of.