Details for Tremor analysis multi FFT.ipynb

Published by madprime


Analyze a set of G-force datasets (e.g. tremor tracking data) with a Fourier transform to get the peak frequency and associated magnitude. Outputs a table of frequency and magnitude information at each timepoint.


Tags & Data Sources

tremor shaking fft accelerometer data




Tremor analysis notebook - multiple timepoints

You can use this notebook to analyze tremor frequency and strength using accelerometer data for a set of files.

See instructions below on how to collect this data & run this code for yourself.

Note: you will need an Open Humans account to use this. Open Humans is a nonprofit platform enabling individuals to manage personal data and privately conduct personal analyses. Click here to sign up.

Getting data

Install the app.

This notebook expects to analyze a data file from the "Physics Toolbox Sensor Suite" phone app.

Record and save a file.

  1. Open the "Gforce" section of the app
  2. Get ready to record
  3. Start recording with the red "+" button
  4. Stop recording with the red "■" button
  5. Email the file to yourself (or otherwise save it)

Upload this file to your account.

Upload this data type here:

Note: this will prompt you to log in to Open Humans.

Running data analysis

Open in "Personal Data Notebooks"

If you're viewing this notebook in the exploratory, click the button "Open in Personal Data Notebooks" to analyze your data privately on a personal virtual machine provided by Open Humans.

Note: this will prompt you to log in to Open Humans.

Run the notebook

There are several ways to run this notebook code, including...

  • Press "shift" + "enter" on your keyboard to run each block of code (one at a time)
  • Use the "Run" button above (one at a time)
  • "Run all" is not recommended as the notebook will ask for user input.

NOTE: at one point this notebook will ask for user input, to select a file for analysis.

A code block (or "cell") will display [*] while running, and a number (e.g. [3]) once it's done.

In [ ]:
# This code sets up a connection to your data files in Open Humans.
import os
import ohapi
token = os.environ.get('OH_ACCESS_TOKEN')
user = ohapi.api.exchange_oauth2_member(token)
In [ ]:
# Load raw data and select the one you want to analyze.
import csv, io, re
import arrow, requests

loaded_data = []

# Fix malformed datetimes – the toolkit fails to zero-pad minutes!
def fix_datetime(datetime_string):
    regex = re.compile(r'([0-9]{4}-[0-9]{2}-[0-9]{2} )([0-9]{1,2})\:([0-9]{1,2})')
    matched = regex.match(datetime_string)
    if matched:
        return regex.sub(matched.groups()[0] + matched.groups()[1].zfill(2) +
                         ':' + matched.groups()[2].zfill(2), datetime_string)
    return datetime_string

for x in user['data']:
    if 'G-force accelerometer data' in x['datatypes']:
        try:
            raw_data = requests.get(x['download_url']).content
            datafile = io.StringIO(raw_data.decode('utf-8'))
            csv_in = csv.reader(datafile)
            # The first row of the file contains the recording datetime.
            datetime = arrow.get(fix_datetime(next(csv_in)[0])).datetime
            stored = x.copy()
            stored.update({'raw_data': raw_data, 'datetime': datetime})
            loaded_data.append(stored)
        except Exception:
            pass

if not loaded_data:
    raise Exception('No valid data found! Check "Getting Data" instructions above.\n'
                    'Files should be visible as "G-force accelerometer data" on the '
                    'dashboard at:')

loaded_data.sort(key=lambda ld: ld['datetime'])

Select your data set

It's possible to load multiple accelerometer data files into your account.

USER INPUT: Which data set do you want to analyze? Hit ENTER after selecting.

The code below prompts you to select which data to analyze.

Your data files are listed according to the date they were recorded. Select the number identifying the file you want to analyze. If you leave this blank, it will analyze the most recent recording.

In [ ]:
# Choose a data set.

print("Pick which dataset to analyze.")
for i in range(len(loaded_data)):
    print("{}. {}".format(i + 1, loaded_data[i]['datetime']))

dataset_id = input("Number of dataset (hit ENTER, leave blank for most recent): ")
dataset = loaded_data[-1]

if dataset_id:
    try:
        dataset = loaded_data[int(dataset_id) - 1]
    except Exception:
        print("\nPlease run this code block again and pick a valid number.")

raw_data_asfile = io.StringIO(dataset['raw_data'].decode('utf-8'))
plot_title = "FFT tremor analysis for {}".format(dataset['datetime'])

Load and clean data

Load libraries

Run this to import libraries. They will be used by the code below.

In [ ]:
import math
import statistics

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.fftpack import fft
from scipy.interpolate import interp1d
import seaborn

Load data

In [ ]:
file_data_set = [io.StringIO(ds['raw_data'].decode('utf-8')) for ds in loaded_data]
data_raw_set = [pd.read_csv(fd, parse_dates=['time']) for fd in file_data_set]

peak_freq_set = [None] * len(data_raw_set)
peak_mag_set = [None] * len(data_raw_set)
datetime_set = [None] * len(data_raw_set)

Clean and analyze

Before doing a Fourier transform, the data should be uniformly sampled: there should be an equal amount of time between consecutive timepoints. This isn't always true in the raw data, so interpolation is used to produce uniformly spaced samples.
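As a toy illustration of the interpolate-then-FFT approach, the sketch below samples a 5 Hz sine wave (a stand-in "tremor") at jittery, non-uniform times, resamples it onto a uniform grid, and recovers the 5 Hz peak from the spectrum. It uses `np.interp` instead of SciPy's `interp1d` just to stay self-contained; all numbers are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# A 5 Hz sine sampled at ~1000 non-uniformly spaced times over 10 seconds.
t_nonuniform = np.sort(rng.uniform(0, 10, 1000))
signal = np.sin(2 * np.pi * 5 * t_nonuniform)

# Resample onto a uniform grid before the FFT.
t_uniform = np.linspace(t_nonuniform[0], t_nonuniform[-1], len(t_nonuniform))
delta = (t_uniform[-1] - t_uniform[0]) / (len(t_uniform) - 1)
uniform_signal = np.interp(t_uniform, t_nonuniform, signal)

spectrum = np.fft.fft(uniform_signal)
freqs = np.fft.fftfreq(len(t_uniform), d=delta)

# Drop the DC term and the conjugate half, then find the peak by magnitude.
half = len(freqs) // 2
peak = freqs[1:half][np.argmax(np.abs(spectrum[1:half]))]
print(peak)  # close to 5.0 Hz
```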

In [ ]:
for i in range(len(data_raw_set)):

    data_raw = data_raw_set[i]

    # Convert to unix epoch timestamps for easier math.
    ts_raw = [t.timestamp() for t in data_raw.time]

    # Get the median interval between samples (i.e. the typical sampling period).
    intervals = [ts_raw[j] - ts_raw[j - 1] for j in range(1, len(ts_raw))]
    interval = statistics.median(intervals)

    # Set up interpolation for total gForce.
    gf_raw = list(data_raw.gFTotal)
    interpolate = interp1d(ts_raw, gf_raw)

    # Create uniform timepoints, derive interpolated gForce values.
    ts_uniform = np.linspace(ts_raw[0], ts_raw[-1], len(ts_raw))
    avg_delta = (ts_raw[-1] - ts_raw[0]) / (len(ts_raw) - 1)
    gf_uniform = interpolate(ts_uniform)

    gf_fft_all = np.fft.fft(gf_uniform)
    freqs_all = np.fft.fftfreq(len(ts_uniform), d=avg_delta)

    # Discard the DC term and the complex-conjugate half of the spectrum.
    target_len = len(freqs_all) // 2
    freqs = freqs_all[1:target_len]
    gf_fft = gf_fft_all[1:target_len]

    # Find the peak by magnitude (the FFT values are complex).
    peak_index = np.argmax(np.abs(gf_fft))
    peak_freq_set[i] = freqs[peak_index]
    peak_mag_set[i] = abs(gf_fft[peak_index])
    datetime_set[i] = data_raw['time'][0]

Create and print table

In [ ]:
set_df = pd.DataFrame({'Peak magnitude': peak_mag_set,
                       'Peak frequency': peak_freq_set}, index=datetime_set)
set_df
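If you want to reuse the table afterwards, here is a small sketch of working with a frame of this shape (the times, values, and output filename are all hypothetical):

```python
import pandas as pd

# Toy stand-in for set_df: one row per recording, indexed by datetime.
times = pd.to_datetime(['2019-05-03 09:30', '2019-05-01 10:00'])
set_df = pd.DataFrame({'Peak magnitude': [8.7, 12.3],
                       'Peak frequency': [5.1, 4.8]}, index=times)

# Sort chronologically and export, e.g. for a spreadsheet.
set_df = set_df.sort_index()
print(set_df)
set_df.to_csv('tremor_peaks.csv')  # hypothetical filename
```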