I seem to forget how to convert a voltage signal from the time domain to the frequency domain every time I want to do it. There are also far too many websites that try to tell you how to do this in Matlab. I end up spending too much time searching for the best one as I reteach myself the fast Fourier transform (fft) function.

Well, no more. I’ve now uploaded a Matlab function that takes a signal in the time domain and converts it to the frequency domain; specifically, the single-sided amplitude spectrum. You can download it here, but you will need to change the file extension from .docx to .m to run it in Matlab.
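If you just want to see the logic, here is a minimal sketch of the same computation in Python/NumPy. The function name `amplitude_spectrum` and the variables `signal` and `fs` are mine for illustration; this is not the posted Matlab function.

```python
import numpy as np

def amplitude_spectrum(signal, fs):
    """Single-sided amplitude spectrum of a real, time-domain signal.

    signal : 1-D array of samples
    fs     : sampling frequency in Hz
    Returns (frequencies, amplitudes).
    """
    n = len(signal)
    spectrum = np.fft.fft(signal)            # two-sided complex spectrum
    amplitude = np.abs(spectrum) / n         # normalise by record length
    half = amplitude[: n // 2 + 1].copy()    # keep DC up to the Nyquist frequency
    half[1:-1] *= 2                          # fold the negative frequencies in
    freqs = fs * np.arange(n // 2 + 1) / n
    return freqs, half

# Example: a 50 Hz sine sampled at 2000 Hz should show a single peak at 50 Hz
fs = 2000
t = np.arange(0, 1, 1 / fs)
f, a = amplitude_spectrum(np.sin(2 * np.pi * 50 * t), fs)
```

The doubling of the interior bins is what makes the spectrum "single-sided": the power in the negative frequencies is folded onto the positive ones, so a unit-amplitude sine shows up with amplitude 1 rather than 0.5.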

Below is an example using electromyographic (EMG) data. These data are from my colleague (thanks, Dr. John Harry!): a 3-s surface electrode recording of the biceps femoris during a countermovement vertical jump. The sampling frequency was 2000 Hz.



The EMG shows that the biceps femoris contracts during the countermovement down (a to d) and the explosive movement up (d to f). This occurs between 0.75 and 1.75 s in the time domain plot. The EMG is relatively quiet until landing (h), between 2 and 2.75 s in the plot.

Let’s take a look at the frequencies in the EMG data.


Most of the power in surface EMG is typically between 10 and 400 Hz. That seems to be the case for our data, as shown in the amplitude spectrum above. The power of the frequencies rises rapidly around 10 Hz and then slowly decreases from 100 to 400 Hz.

Let’s see what the amplitude spectrum looks like after filtering the data with a low-pass filter at 200 Hz (dual-pass, second order, Butterworth), just for fun.


After the 200 Hz low-pass filter, the power of frequencies above 200 Hz is drastically reduced. There is still some power in the frequencies between 200 and 350 Hz, which is caused by the filter’s frequency response. I’ll leave the conversation about the pass band, the transition band, and the stop band of different filters for another day. [You can find some details about the frequency response of a filter in two of my previous posts (Moving average filters, Solving the underdamped response of a 10 Hz low pass Butterworth filter) and on Wikipedia.]
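If you want to reproduce the filter outside Matlab, here is a rough SciPy equivalent. The dual-pass step is `filtfilt`, which runs the filter forward and backward for zero phase lag (at the cost of doubling the effective order). The example signal here is made up for illustration; it is not the EMG data.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 2000                               # sampling frequency (Hz), as in the EMG example
cutoff = 200                            # low-pass cutoff (Hz)
b, a = butter(2, cutoff / (fs / 2))     # second-order Butterworth, cutoff normalised to Nyquist

# Stand-in signal: a 50 Hz component we want to keep plus a 500 Hz
# component the filter should remove
t = np.arange(0, 1, 1 / fs)
raw = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 500 * t)
smooth = filtfilt(b, a, raw)            # dual-pass (zero phase lag) filtering
```

After filtering, the 50 Hz component passes almost untouched while the 500 Hz component is attenuated to a few percent of its original amplitude, which is the behaviour the amplitude spectrum above illustrates.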


I’m very curious about the role of universities in society. Vox just posted a video about universities where the American dream is still alive. I’ll let the video speak for itself.

The video mentions an op-ed in the New York Times, America’s great working class colleges, about the same study and it is worth a read, too.

If you want your income to be in the top 20%, then you should go to an Ivy or elite school. That is easier said than done because the deck is stacked against people with low socioeconomic status; for example, you might not be able to study for the SATs because you are too busy working to afford food. That brings up the question of the affordability of a university education. That is a whole other can of worms! I will give a shout-out to Texas Tech’s $8 million water park, which was mentioned as an example of the excessive spending of universities in the one-sided debate by John Stossel.

I’ve been to Texas Tech’s water park, which is across the street from my office. It, no doubt, attracts students to Tech, but at what cost to students’ tuition, and how much money does it divert from the core roles of the university, namely teaching and research? I understand that experiences outside the classroom are an important part of your four years at university, but when did the university become responsible for providing those experiences?

I believe universities should focus on academics and pull back on this recent foray into emulating all-inclusive resorts. The more rock climbing walls, water parks, and stylish residences we build, the more we increase tuition and student debt. I don’t need to remind you that the total US student debt is $1.4 trillion, almost twice the total US credit card debt (neither of which is good). Other problems are the increasing number of administrators at universities and their skyrocketing salaries. Texas Tech’s president, for example, made $599 thousand in the 2017 financial year. That is equivalent to the tuition from 100 in-state students. Presidential pay is so high that teams of four people applied for the position at the University of British Columbia, saying they would split the salary, continue to teach, and be more productive than a single person. Administrative salaries still pale in comparison to the football coaching staff. I’m amazed that Texas Tech’s coach made over $3 million in 2016 (tuition from 550 students) while the student-athletes were paid nothing.

It would be great to see more major universities that focus on offering tremendous value for your tuition. Imagine a university with modest residences, minimal administrators, and no rock walls or water parks, where tuition is invested back into the academic pursuits of students and faculty. How would that university attract students? Can you imagine how boring the campus tour would be without the flashy excess? Perhaps that type of university will become popular when the children of parents who are still saddled with student debt decide not to follow in their footsteps.

I’m always happy to discuss this issue and search for solutions. Perhaps we can debate while floating down the lazy river at Texas Tech’s water park.

I’m currently analysing bimanual reaching data in Lissajous plots. In these plots, the position of the left arm is plotted on the y-axis and the position of the right arm is plotted on the x-axis. The figure below is an example.


The important aspect in Lissajous plots is the shape of the trajectory. Therefore, to average several trials together, I need to create a spatial average of each trajectory. The reason for this is more obvious when we zoom into the start of the trajectory, shown below.


You can see that the points in the Lissajous plot begin close together and then get farther apart. This is because the arm begins stationary and then accelerates towards the target. This information is actually a problem when averaging together multiple trials. Imagine if one trial slowed down in the middle. There would be more data points in the middle and this would pull the average towards these points. What we need to do is preserve just the shape of the Lissajous plot. This is done with a spatial average.

A spatial average creates 100 points (for example) and places them at equal intervals along the trajectory. The next plot shows the original trajectory, which had 1000 points, and the spatial average with 100 equally spaced points.


You can now take several of these trajectories and average them while preserving (or properly averaging) the spatial information, hence a spatial average.

I wrote some Matlab code to calculate 2D spatial averages. The first argument, input_array, is the trajectory data with each row being a frame, the first column the x position, and the second column the y position. The second argument is the number of points in the spatial average. You can download the function here, but you will need to rename the file from .doc to .m.
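For anyone not using Matlab, the resampling idea can be sketched in a few lines of NumPy. This mirrors the arguments described above (a frames-by-2 `input_array` and a point count), but it is my illustration of the technique, not the downloadable function: compute the cumulative arc length along the trajectory, then interpolate the x and y coordinates at equal arc-length increments.

```python
import numpy as np

def spatial_average(input_array, n_points=100):
    """Resample a 2-D trajectory at points equally spaced along its path.

    input_array : (frames, 2) array; column 0 is x, column 1 is y
    n_points    : number of equally spaced points to return
    """
    x, y = input_array[:, 0], input_array[:, 1]
    steps = np.hypot(np.diff(x), np.diff(y))         # distance between consecutive frames
    arc = np.concatenate(([0.0], np.cumsum(steps)))  # cumulative path length at each frame
    targets = np.linspace(0, arc[-1], n_points)      # equal arc-length spacing
    # np.interp expects arc to be increasing; frames with zero movement
    # (duplicate points) could be dropped first if they occur in your data
    xs = np.interp(targets, arc, x)
    ys = np.interp(targets, arc, y)
    return np.column_stack((xs, ys))
```

Because the interpolation targets are spaced by path length rather than by frame number, a trial that slows down in the middle no longer piles extra points there; only the shape of the trajectory survives.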

The code will only work if there are many more frames in the original data than in the spatial average (I used 1000 and 100 above). You can always increase the number of frames in your original data with the interp1 command.

For example, say that your original data_array has 127 frames. The following code will increase it to 1000 frames.

new_data_array(:,1) = interp1((1:127)', data_array(:,1), (1:(127-1)/(1000-1):127)', 'linear');

new_data_array(:,2) = interp1((1:127)', data_array(:,2), (1:(127-1)/(1000-1):127)', 'linear');
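The same upsampling can be done in NumPy with `np.interp`, if that is your tool of choice. The `data_array` contents here are a stand-in I made up; only the frame counts match the example above.

```python
import numpy as np

# Upsample a 127-frame, two-column trajectory to 1000 frames by linear
# interpolation -- the NumPy counterpart of the interp1 calls above
old_frames = np.arange(1, 128)           # frame numbers 1..127
new_frames = np.linspace(1, 127, 1000)   # 1000 equally spaced frame numbers
data_array = np.column_stack((np.sin(old_frames / 20.0),
                              np.cos(old_frames / 20.0)))  # stand-in data
new_data_array = np.column_stack(
    (np.interp(new_frames, old_frames, data_array[:, 0]),
     np.interp(new_frames, old_frames, data_array[:, 1])))
```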

This is the first entry in the trajectory analysis toolbox. I hope to add more examples and code as I use them in my research.

Update (September 12, 2015):

I reused this code for another experiment and found that the last point in the spatial average was sometimes missing. I’m not sure why this happens, but I added a hack to fix it. At the start of the code, the first data point in the spatial average is set to the first point in the input_array (as before), and the last data point in the spatial average is set to the last point in the input_array.

Update (May 30, 2017):

I wrote another version of this function to calculate a 1D spatial average. You can download the function here, but again, you will need to rename the file from .doc to .m. A 1D spatial average is helpful, for example, when you have displacement by time data. You want to preserve the shape of the trajectory, but time shouldn’t contribute to the “distance” traveled when calculating the spatial average.
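The 1D idea can be sketched in NumPy as well: the “distance” is the cumulative absolute change in displacement, and time never enters the calculation. As before, this is my illustration of the approach, not the downloadable function.

```python
import numpy as np

def spatial_average_1d(signal, n_points=100):
    """Resample a 1-D signal at points equally spaced along the signal's
    own path length (the sum of absolute differences), ignoring time.
    """
    steps = np.abs(np.diff(signal))                  # change between consecutive frames
    arc = np.concatenate(([0.0], np.cumsum(steps)))  # cumulative "distance" traveled
    targets = np.linspace(0, arc[-1], n_points)      # equal spacing along that distance
    return np.interp(targets, arc, signal)
```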

When writing this function, I found and fixed a bug in the 2D spatial average function. The bug occurred when there was a large distance between points, which is odd for smooth trajectory data. Both the 2D and 1D functions can now handle step functions and, hopefully, anything else you throw at them.

I’ve been digging into the research on the focus of attention. Professor Wulf has done a great job of synthesising all the research every few years and publishing a review. One of the latest reviews is Attentional focus and motor learning: a review of 15 years. I particularly enjoyed the tables that listed all the studies and the tasks used. They allowed me to quickly find all the research that has used a stability platform for further investigation.

I still need to read about Professor Wulf’s 2016 Optimizing Performance through Intrinsic Motivation and Attention for Learning (OPTIMAL) theory of motor learning. I’ll post an update on that soon.

How broken is University

January 15, 2017

I read an interesting article in PCMag (of all places) called How broken is college, and can we fix it? It is a short but well-balanced read about the challenges and successes of modern post-secondary education. One interesting argument is that post-secondary education isn’t broken for the wealthy but it is broken for the less wealthy students who are increasingly recruited by universities. It was also interesting to find out that decreased public financing of post-secondary education is not a new trend – it has been occurring for 40 years!

If you would like to know more about increasing tuition rates and student loan debt, then I recommend the 2014 documentary Ivory Tower. The trailer for it is a bit sensational, but the documentary is a well-balanced investigation of the business models of American colleges and the impact on students.

I’ve been setting up my lab at Texas Tech University. My main requirement is a motion capture system, and I compared and tested several systems. I was looking for something far less expensive than the Optotrak Certus by Northern Digital with similar capabilities. I decided on the Improv by PhaseSpace, and I’ve been impressed by the system. There is more testing to do and so I will have more to share in the next few months. Below is a picture of the six-camera Improv system in my lab.


You can see eight LED markers on the surface of the table. These attach to a small wireless micro driver. You can track up to 24 markers at 270 Hz (48 markers at 135 Hz). A key feature is that the markers are active markers and so there is no marker swapping, as there can be with passive markers in the Vicon and Optitrack systems. This allows measurement of markers even when they are right beside each other, which I will need to track the fingers during reach-to-grasp movements.

Setting up the hardware was easy. I like how small and light the cameras are (about 10 x 10 x 6 cm and 400 g) and that they can be attached to a standard tripod. The software is also easy to use. It took a few calls to PhaseSpace to figure out a few things, mostly because PhaseSpace doesn’t have a detailed manual yet (I might post my own later).

I’m currently writing a C++ program that will allow Matlab to make calls to the PhaseSpace application programming interface (API), similar to what I’ve done before with the Optotrak. I’ll post my notes on that when I have it figured out.

There are definitely problems with our reliance on p-values and null hypothesis significance testing (NHST). One common problem is setting your alpha to .05 and then getting a p-value of .06. This is a non-significant result, but what do you do when your thesis/postdoc/tenure requires publications? It seems that many of us invent colourful language for how close the test was to being statistically significant. These were collected in a blog post called Still not significant. Some of my favorites are below.

  • Closely approaches the brink of significance
  • Flirting with conventional levels of significance
  • Just above the arbitrary level of significance
  • Not significant in the narrow sense of the word
  • Teetering on the brink of significance

One funny thing about this practice is that you don’t see the reverse statements when the p-value is just less than .05. Could you imagine a result that “approached non-significance, p = .04”?

To solve the problem with null hypothesis significance testing, we need to stop relying on p-values. I recommend reading Things I have learned (so far) by Professor Jacob Cohen to understand the problem and how to start fixing it (especially The Fisherian Legacy and following sections). The article includes one of my favourite quotes about p-values: “surely, God loves the .06 nearly as much as the .05” (Rosnow and Rosenthal 1989, p 1277).

That article by Professor Cohen was published in 1990, and we still continue to abuse p-values. Professor Cohen wasn’t even close to being the first to suggest that we should de-emphasise p-values. I just noticed a follow-up article, entitled “Things we still haven’t learned (so far),” by Ivarsson and colleagues (2015). The first sentence of the abstract hilariously captures our inability to stop using p-values: “Null hypothesis significance testing (NHST) is like an immortal horse that some researchers have been trying to beat to death for over 50 years but without any success.”