January 19, 2016
There are definitely problems with our reliance on p-values and null hypothesis significance testing (NHST). One common problem is setting your alpha to .05 and then getting a p-value of .06. This is a non-significant result, but what do you do when your thesis/postdoc/tenure requires publications? It seems that many of us invent colourful language for how close the test was to being statistically significant. These were collected in a blog post called, Still not significant. Some of my favourites are below.
- Closely approaches the brink of significance
- Flirting with conventional levels of significance
- Just above the arbitrary level of significance
- Not significant in the narrow sense of the word
- Teetering on the brink of significance
One funny thing about this practice is that you don’t see the reverse statements when the p-value is just less than .05. Could you imagine a result that “approached non-significance, p = .04”?
To solve the problem with null hypothesis significance testing, we need to stop relying on p-values. I recommend reading Things I have learned (so far) by Professor Jacob Cohen to understand the problem and how to start fixing it (especially The Fisherian Legacy and following sections). The article includes one of my favourite quotes about p-values, “surely, God loves the .06 nearly as much as the .05” (Rosnow and Rosenthal 1989, p 1277).
That article by Professor Cohen was published in 1990 and we still continue to abuse p-values. Professor Cohen wasn’t even close to being the first to suggest that we should de-emphasise p-values. I just noticed a follow-up article, entitled “Things we still haven’t learned (so far),” by Ivarsson and colleagues (2015). The first sentence of the abstract hilariously captures our inability to stop using p-values. “Null hypothesis significance testing (NHST) is like an immortal horse that some researchers have been trying to beat to death for over 50 years but without any success.”
January 18, 2016
I had previously heard that about 20% of PhD graduates end up as full-time university professors, which is pretty close to the reported value of 18.6%. Unfortunately, that 18.6% includes both tenure-track and non-tenure-track positions. I don’t know the division between those two categories, and it may be impossible to delineate them in the future with the invention of tenure-track professor of teaching positions.
In total, 39.4% (18.6 + 20.8%) of PhD graduates end up with some role in academia. The 60.6% majority end up out there in the real world, with large percentages in government, management, sciences, and health.
The article noted that with so many PhDs finding careers outside academia, we need to do a better job educating employers, and PhDs, about the worth of their specialised education. For example, “They [PhDs] can help business interface with universities and academia. As a personal aptitude, PhDs are extremely hard working. They are driven and focused. They know how to take a huge problem or issue and break it down into manageable steps and address it.”
January 12, 2016
Wired recently posted an article entitled, Governors can design higher education for the future, by Associate Professor Rhett Allain (Physics, Southeastern Louisiana University). I enjoyed his view on the roles of universities in society and his helpful and humorous analogies. Dr. Allain has an older article on the same topic, Education, robots, and cosmos, which you might also enjoy.
October 23, 2015
SCAPPS was in Edmonton this year, both downtown and at the University of Alberta. It was a memorable conference for many reasons. The entire crew was almost reunited; we were just missing Dr. Cressman. Dr. Cameron and I shared a room at the hotel, which brought me back to when we were both studying at UBC. Dr. Chua expertly delivered the Wilberg lecture, which detailed the origins of the crew and our academic ancestors.
I had the honour of being the Franklin Henry young scientist award winner for motor behaviour. I was anxious before the presentation, but I did have fun presenting (click here for my presentation slides). Not sure how many opportunities like that I will have in my life!
I was impressed by the posters and presentations; I think they might be getting better every year. My golden chalice award goes to Dr. Bernier’s presentation entitled, Delta band oscillations predict hand selection for reaching.
I hope to see everyone next year in Waterloo!
September 12, 2015
I’m not sure how, of all years, I forgot to post about my own graduation in May.
The people in the photo, from left to right, are Professor Ian Franks, Professor Romeo Chua, Jonathan Kim (summer student), Nicolette Gowan (summer student), Laurence Chin (summer student and volunteer), Jarrod Blinch (me!), Jada Holmes (summer student and volunteer), Guilherme Martin (summer student), and Chris Forgaard (PhD student in Dr. Franks’ and Dr. Chua’s lab). I hope the lab photo tradition continues on without me.
As it was my PhD graduation, I couldn’t resist a photo of me with the UBC academic regalia.
September 12, 2015
I was reading the Repeated-measures designs chapter in Professor David C Howell’s Statistical Methods for Psychology (8th edition) and I learned a few things about sphericity.
First, Mauchly’s test of sphericity is “likely to have serious problems when we need them the most” (p 473). (I don’t know exactly what this means mathematically, but this site suggests that Mauchly’s test will under-detect violations with small sample sizes and over-detect violations with large samples.) There are two possible solutions to this problem. 1) Use a more liberal alpha of .10 for Mauchly’s test, or 2) always use either the Huynh-Feldt or the Greenhouse-Geisser correction for sphericity, regardless of Mauchly’s test. I’ve used option one in the past but option two is more common in the papers I read.
Whichever option you choose, you should use the Huynh-Feldt correction when epsilon is greater than or equal to .75 and the Greenhouse-Geisser correction otherwise. Epsilon tells us the size of the violation to sphericity. It ranges from 0 to 1, with 1 meaning no violation. Huynh-Feldt is a smaller correction than Greenhouse-Geisser, so when there is a large violation to sphericity (epsilon < .75), we should use the larger, Greenhouse-Geisser correction.
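This decision rule is simple enough to sketch in a few lines of Python (the function name choose_correction is my own, not from Professor Howell’s book):

```python
def choose_correction(epsilon_estimate):
    """Pick a sphericity correction from an epsilon estimate.

    Rule of thumb: Huynh-Feldt (the smaller correction) when the
    estimated epsilon is at least .75, otherwise Greenhouse-Geisser
    (the larger correction, for larger violations of sphericity).
    """
    if epsilon_estimate >= 0.75:
        return "Huynh-Feldt"
    return "Greenhouse-Geisser"
```

For example, choose_correction(0.87) returns "Huynh-Feldt" and choose_correction(0.66) returns "Greenhouse-Geisser".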
The second thing I learned has to do with comparing epsilon to .75. We don’t actually know epsilon, but we have the Greenhouse-Geisser (epsilon with a ^ above it) and the Huynh-Feldt (epsilon with a ~ above it) estimates of epsilon. Professor Howell showed an example where epsilon ^ was .6569 and epsilon ~ was .8674. He decided to use the Huynh-Feldt correction as “these are in the neighborhood of .75” (p 473).
A huge issue with these corrections is that they adjust only the main effects and interactions, which means violations to sphericity can still create serious problems for any follow-up comparisons. The problem is reduced when separate error terms, specific to each comparison, are used instead of a pooled error term. A violation to sphericity suggests that the covariance between conditions differs, and so a different error term should be used for each comparison. When the covariances are comparable, that is, when there is no violation to sphericity, a single (pooled) error term can be applied to all comparisons.
There are other statistical tests that you can use that don’t require sphericity, namely MANOVAs and mixed-model analyses. The benefit of the latter is that it can also handle missing data. I’m eager to learn more about the complicated world of mixed-model analyses. Professor Howell has more on this topic on his website.
July 20, 2015
I’m currently analysing bimanual reaching data in Lissajous plots. In these plots, the position of the left arm is plotted on the y-axis and the position of the right arm is plotted on the x-axis. The figure below is an example.
The important aspect in Lissajous plots is the shape of the trajectory. Therefore, to average several trials together, I need to create a spatial average of each trajectory. The reason for this is more obvious when we zoom into the start of the trajectory, shown below.
You can see that the points in the Lissajous plot begin close together and then get farther apart. This is because the arm begins stationary and then accelerates towards the target. This temporal information is actually a problem when averaging multiple trials together. Imagine if one trial slowed down in the middle: there would be more data points in the middle, and they would pull the average towards these points. What we need to do is preserve just the shape of the Lissajous plot. This is done with a spatial average.
A spatial average creates 100 points (for example) and places them equally along the trajectory. The next plot shows the original trajectory, which had 1000 points, and the spatial average with 100 equally spaced points.
You can now take several of these trajectories and average them while preserving (or properly averaging) the spatial information, hence a spatial average.
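The idea can be sketched in a few lines of NumPy (this is a Python illustration of the approach, not the Matlab function linked below; spatial_resample is my own name): compute the cumulative distance travelled along the trajectory, then interpolate x and y at equally spaced arc-length positions.

```python
import numpy as np

def spatial_resample(trajectory, n_points=100):
    """Resample a 2D trajectory to n_points equally spaced along its path.

    trajectory: array of shape (frames, 2), x in column 0 and y in column 1.
    Returns an (n_points, 2) array whose points are evenly spaced in
    distance travelled rather than in time, preserving the shape of the path.
    """
    deltas = np.diff(trajectory, axis=0)
    seg_lengths = np.hypot(deltas[:, 0], deltas[:, 1])
    cum_length = np.concatenate(([0.0], np.cumsum(seg_lengths)))
    # Target arc-length positions, equally spaced from start to end.
    targets = np.linspace(0.0, cum_length[-1], n_points)
    x = np.interp(targets, cum_length, trajectory[:, 0])
    y = np.interp(targets, cum_length, trajectory[:, 1])
    return np.column_stack((x, y))
```

Note that the first and last points of the resampled trajectory coincide with the first and last points of the input by construction.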
I wrote some Matlab code to calculate 2D spatial averages. The first argument, input_array, is the trajectory data with each row being a frame, the first column the x position, and the second column the y position. The second argument is the number of points in the spatial average. You can download the function here, but you will need to rename the file from .doc to .m
The code will only work if there are lots more frames in the original data than the spatial average (I used 1000 and 100 above). You can always increase the number of frames in your original data by using the interp1 command.
For example, say that your original data_array has 127 frames. The following code will increase it to 1000 frames.
new_data_array(:,1) = interp1((1:127)', data_array(:,1), (1:(127-1)/(1000-1):127)', 'linear');
new_data_array(:,2) = interp1((1:127)', data_array(:,2), (1:(127-1)/(1000-1):127)', 'linear');
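For readers who prefer Python, the same linear upsampling can be sketched with np.interp (this is just an illustration; upsample is my own name and not part of the Matlab toolbox):

```python
import numpy as np

def upsample(data_array, n_frames=1000):
    """Linearly interpolate a (frames, 2) array up to n_frames frames."""
    old = np.arange(len(data_array))
    new = np.linspace(0, len(data_array) - 1, n_frames)
    x = np.interp(new, old, data_array[:, 0])
    y = np.interp(new, old, data_array[:, 1])
    return np.column_stack((x, y))
```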
This is the first entry in the trajectory analysis toolbox. I hope to add more examples and code as I use them in my research.
Update (September 12):
I reused this code for another experiment and I found that the last point in the spatial average was sometimes missing. I’m not sure why this is happening, but I added a hack to fix it. At the start of the code, the first data point in the spatial average is set to the first point in the input_array (as before) and the last data point in the spatial average is set to the last point in the input_array.