Attribution
By Lindsay Plater
Time commitment
Less than 5 minutes
Description
The purpose of this video is to explain how to conduct a Friedman test using SPSS (requires three or more ordinal or continuous variables from the same group of participants). This test can be used as the non-parametric alternative to a repeated measures ANOVA. This tutorial is designed to help students and researchers understand: the data type required for the test, the assumptions of the test, the data set-up for the test, and how to run and interpret the test.
Video
Transcript
[Lindsay Plater, PhD, Data Analyst II]
What is a Friedman test? [0:04]
What is a Friedman test? A Friedman test is used to determine whether one group's rankings on three or more observations of the same continuous or ordinal variable differ. So again, ONE group; it's repeated or paired data, with one group of participants measured in three (or more) different conditions.
This is a non-parametric test, which means your data do not have to follow the standard bell-shaped curve (a normal distribution). So if you failed normality for your repeated measures ANOVA, you might switch to its non-parametric equivalent, which is the Friedman test.
And if you're looking for additional help running the Friedman test, we have information in the University of Guelph SPSS LibGuide, there's information in the Laerd Statistics guide, or you could read the SPSS documentation.
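If you'd like to sanity-check the same idea outside of SPSS, here is a minimal Python/SciPy sketch (not shown in the video); the condition names and scores are made up, with one value per participant per condition:

# Friedman test on three repeated measurements from the same participants (hypothetical data)
from scipy.stats import friedmanchisquare

condition1 = [2.1, 3.4, 2.8, 3.0, 2.5, 2.9]
condition2 = [3.2, 4.1, 3.9, 3.7, 3.3, 3.8]
condition3 = [4.5, 5.0, 4.8, 4.9, 4.4, 4.7]

statistic, p_value = friedmanchisquare(condition1, condition2, condition3)
print(f"Friedman chi-square = {statistic:.3f}, p = {p_value:.4f}")

A p-value below .05 here would be read the same way as the SPSS output shown later in the video.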
Assumptions [0:51]
What are the assumptions of the Friedman test? We only have two.
The first is that your dependent variable must be ordinal or continuous, and the second is that your independent variable must be categorical; it has to have three or more paired groups or conditions (meaning they're matched; the same participant is in all three groups, for example).
Check assumptions (ordinal / continuous dependent variable) [1:14]
[Slide contains a screenshot of a table in SPSS within Data View. The table’s column headers are as follows: Gender, Fake_Data1, Fake_Data2, Fake_Data3, Fake_Data4, Colour, and Group.]
Alright, let's check these.
So our first assumption is that your dependent variable must be ordinal or continuous. Here we're going to use Fake_Data1, Fake_Data2, and Fake_Data3.
[Fake_Data1, Fake_Data2, and Fake_Data3 columns are highlighted.]
If we look at the data in these columns, we can see that the data have decimals and a range of values; it’s a giveaway that this is probably continuous data, so we've passed assumption one.
Check assumptions (categorical independent variable) [1:35]
Our second assumption is we must have those repeated or matched conditions or groups for each participant.
[The first four columns of the first row are highlighted.]
So if we look at participant one, this person identifies as male and they have data for Fake_Data1, Fake_Data2, and Fake_Data3. They've got three different categories or groups of information, and it's repeated data; each participant has all three, so we pass assumption two.
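As a rough illustration only (the video uses SPSS, not Python), the same wide-format set-up can be sketched with pandas: one row per participant, one column per repeated measurement. The column names mirror the video's fake data; the values are made up:

# Wide-format repeated-measures data: one row per participant (hypothetical values)
import pandas as pd

df = pd.DataFrame({
    "Fake_Data1": [2.13, 3.42, 2.87, 3.05, 2.51, 2.94],
    "Fake_Data2": [3.21, 4.15, 3.96, 3.74, 3.33, 3.81],
    "Fake_Data3": [4.52, 5.01, 4.83, 4.92, 4.41, 4.73],
})

cols = ["Fake_Data1", "Fake_Data2", "Fake_Data3"]
print(df[cols].dtypes)               # float columns, i.e. continuous dependent variables
print(df[cols].notna().all(axis=1))  # True when a participant has all three measurements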
Step 1 [2:02]
[Slide shows the table in Data View with the Analyze menu open and Nonparametric Tests selected. From the Nonparametric Tests sub-menu, Related Samples is highlighted.]
How do we actually run a Friedman’s test? If you have passed all your assumptions, you're allowed to move on. You would click: Analyze > Nonparametric Tests > Related Samples.
This is another case where SPSS sort of hides what you're doing; you don't necessarily know where to find it. Think to yourself: “What test was I just running? I was trying to run a repeated measures ANOVA. If I fail normality, I have to switch to the non-parametric test. Where do I find this test?”: Analyze > Nonparametric Tests > Related Samples, because it's related or paired data.
Step 2 [2:38]
If you click that, it will open the “Nonparametric Tests: Two or More Related Samples” dialog box with three tabs, and you have to click something in each tab.
[Nonparametric Tests: Two or More Related Samples dialog box with three tabs: Objective (which is selected), Fields, and Settings. Under “What is your objective?” you can choose to automatically compare observed data to hypothesized or to customize the analysis. The description below explains that the automatic option will select appropriate tests (e.g., McNemar’s, Cochran’s Q, Wilcoxon signed-rank, Friedman’s ANOVA) based on the data.]
In the Objective tab, you're going to click where it says, “Customize analysis”, and then you have to remember to switch to the Fields tab.
Step 3 [2:56]
[Nonparametric Tests: Two or More Related Samples dialog is on the Fields tab. At the top are radio buttons for Use predefined roles or Use custom field assignments (the latter selected). A yellow info icon warns “Select only 2 test fields to run 2 related sample tests.” Below, the left panel lists available variables, and the right panel lists three selected Test Fields. Between them are arrow buttons for moving fields.]
In the Fields tab, you're going to take your three continuous or ordinal variables (or groups) from the left side where it says “Fields:” and move them to the right side where it says “Test Fields:”, and then you have to remember to click the Settings tab.
Steps 4 & 5 [3:13]
[Nonparametric Tests: Two or More Related Samples dialog is on the Settings tab. At the top are two radio buttons: Automatically choose tests based on the data and Customize tests (selected). Six groups of checkboxes let you pick specific tests: Test for Change in Binary Data (McNemar’s test, Cochran’s Q), Test for Change in Multinomial Data (Marginal Homogeneity), Compare Median Difference to Hypothesized (Sign test, Wilcoxon signed‑rank), Estimate Confidence Interval (Hodges–Lehmann), Quantify Associations (Kendall’s coefficient), and Compare Distributions (Friedman’s two‑way ANOVA by ranks). Each section lists the test name and any related options (enabling pairwise comparisons, defining success, etc.). A tab pane on the left shows Choose Tests, Test Options, and User‑Missing Values.]
In the Settings tab, you have to click “Customize tests”, and then there are a bunch of different options here: you have to remember the name of the test you're trying to run. You are trying to run the “Friedman's 2-way ANOVA by ranks (k samples)”. It's in the bottom right; it's hidden [in the Compare Distributions area]. And you'll probably want to select where it says “Multiple comparisons”; by default it has “All pairwise”, and that's fine.
Once you've clicked all that, you're going to click the Run button.
Output [3:46]
If you've remembered to click your buttons in all three tabs, you'll get the Friedman output, which looks something like this.
[Friedman output in SPSS. The left panel shows the Output Navigator. The main panel presents the following tables: Hypothesis Test Summary and Related-Samples Friedman’s Two-Way Analysis of Variance by Ranks Summary.]
Your first table is called the “Hypothesis Test Summary” table. This gives you your null hypothesis in words, the name of the test that you tried to run, a significance value (or p-value), and what decision – in words – you should make based on what your p-value says.
So you can either get your p-value from this first table up here, or you can get your p-value from the bottom table over here, which is called the “Related-Samples Friedman’s Two-Way Analysis of Variance by Ranks Summary” [in the Asymptotic Sig. (2-sided test) row]. It's a long name, but it's the same p-value, so you can get it from either location.
So here we have a p-value less than (<) .05 which means we have statistical significance; we have found a difference between our groups.
We don't know where yet, we don't know where those differences are, but somewhere our Friedman test has told us we have differences between Fake_Data1, Fake_Data2, and Fake_Data3.
If you scroll down a little bit, you should have a plot that looks something like this.
[Bar chart titled “Related-Samples Friedman’s Two-Way Analysis of Variance by Ranks,” showing three horizontal bars. Each bar represents one repeated-measures variable: a blue bar at Rank 1, a maroon bar at Rank 2, and a teal bar at Rank 3. The x-axis shows Frequency (0–300 for the first two variables and 0–30 for the third) and the y-axis shows Rank (1–4). Mean ranks are labeled for each bar.]
For whatever reason, the non-parametric tests put things horizontally instead of vertically, so you're probably used to seeing these graphs on the vertical, but here they're horizontal. And essentially what this is showing you is how your three variables rank against each other. This is a little bit of a weird test to look at; the way the Friedman test works is that it rank orders each participant's values across the three variables. So for participant one, it asks: which of their three values is the lowest (rank 1), which is the next lowest (rank 2), and which is the highest (rank 3)? It does the same thing for participant two, participant three, and so on, for everyone in your data set.
Then what it does is it averages those ranks within each variable and compares the mean ranks across your three variables to see which variable tends to be ranked highest. So here, Fake_Data1 has the lowest rank, with a mean rank of 1.00. Fake_Data2 has our middling rank, with a mean rank of 2.00. And then Fake_Data3 has our highest rank, with a mean rank of 3.00. So Friedman's test is doing something a little bit different than what you're probably used to if you've seen ANOVAs before.
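As a hedged sketch of that ranking logic (not from the video), you can reproduce the idea in Python/SciPy: rank each participant's three values within their own row, then average those ranks per variable. The data are made up, and rankdata's axis argument assumes a reasonably recent SciPy:

# Within-participant ranks, then the mean rank per variable (hypothetical data)
import numpy as np
from scipy.stats import rankdata

# rows = participants; columns = Fake_Data1, Fake_Data2, Fake_Data3
data = np.array([
    [2.1, 3.2, 4.5],
    [3.4, 4.1, 5.0],
    [2.8, 3.9, 4.8],
    [3.0, 3.7, 4.9],
])

ranks = rankdata(data, axis=1)  # 1 = that participant's lowest value
print(ranks.mean(axis=0))       # mean rank per variable, e.g. [1. 2. 3.]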
Looking at this graph, you might not know where the differences are. You've got a p-value, you've got a graph. To find where your differences are, you're actually going to scroll to where it says Pairwise Comparisons.
[Pairwise Comparisons table lists each Sample 1 – Sample 2 pairing in its rows and presents these columns: Test Statistic, Std. Error, Std. Test Statistic, Sig. (p-value), and Adj. Sig. A footnote notes that asymptotic two-sided significance values are displayed and that significance levels have been Bonferroni-corrected.]
Here, we've got Fake_Data1 versus Fake_Data2 [in the first row], and our p-value is less than (<) .05. So we can say there is a difference in the rankings of Fake_Data1 and Fake_Data2. Then we've got Fake_Data1 versus Fake_Data3 [in the second row]; again, p less than (<) .05, we found a difference. And then we've got Fake_Data2 versus Fake_Data3 [in the third row], p-value less than (<) .05, we can say we found a difference. And there's a Bonferroni correction applied to this table as well.
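For comparison only (the video stays in SPSS), one way to sketch a similar follow-up in Python is a set of pairwise Wilcoxon signed-rank tests with a Bonferroni correction. This is not the same pairwise procedure SPSS uses in its table (SPSS applies its own rank-based comparisons with Bonferroni-adjusted significance), so the exact numbers will differ; the logic of testing each pair and then adjusting for multiple comparisons is the same. The data are made up:

# Pairwise follow-up tests with a Bonferroni adjustment (hypothetical data)
import numpy as np
from itertools import combinations
from scipy.stats import wilcoxon

data = {
    "Fake_Data1": np.array([2.1, 3.4, 2.8, 3.0, 2.5, 2.9]),
    "Fake_Data2": np.array([3.2, 4.1, 3.9, 3.7, 3.3, 3.8]),
    "Fake_Data3": np.array([4.5, 5.0, 4.8, 4.9, 4.4, 4.7]),
}

pairs = list(combinations(data, 2))       # the three pairings of variables
for a, b in pairs:
    stat, p = wilcoxon(data[a], data[b])  # signed-rank test on the paired differences
    p_adj = min(p * len(pairs), 1.0)      # Bonferroni adjustment: multiply by number of pairs
    print(f"{a} vs {b}: p = {p:.4f}, Bonferroni-adjusted p = {p_adj:.4f}")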
So super quickly, that is how you run a Friedman's test.
License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.