# Independent-samples t-test using SPSS Statistics

## Introduction

The **independent-samples t-test**, also known as the **independent t-test**, **independent-measures t-test**, **between-subjects t-test** or **unpaired t-test**, is used to determine whether there is a **difference** between **two independent**, **unrelated groups** (e.g., undergraduate versus PhD students, athletes given supplement A versus athletes given supplement B, etc.) in terms of the **mean** of a **continuous dependent variable** (e.g., first-year salary after graduation in US dollars, time to complete a 100 meter sprint in seconds, etc.) in the **population** (e.g., the population of all undergraduate and PhD students in the United States, the population of all sprinters who have competed in an internationally-recognised 100 meter sprint event for their country in the last 12 months, etc.).

For example, you could use an independent-samples t-test to understand whether the mean (average) number of hours students spend revising for exams in their first year of university differs based on gender. Here, the dependent variable is "mean revision time", measured in hours, and the independent variable is "gender", which has two groups: "males" and "females". Alternatively, you could use an independent-samples t-test to understand whether there is a mean (average) difference in weight loss between obese patients who undergo a 12-week exercise programme compared to obese patients who are given dieting pills (appetite suppressors) by their doctor. Here, the dependent variable is "weight loss", measured in kilograms (kg), and the independent variable is "type of weight loss intervention", which has two groups: "exercise programme" and "dieting pills".
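Although this guide carries out the test in SPSS Statistics, the same analysis can be sketched in a few lines of Python using SciPy, which may help make the idea concrete. The numbers below (weekly revision hours for two groups) are invented purely for illustration:

```python
# Minimal sketch of an independent-samples t-test outside SPSS Statistics.
# All data values below are invented for illustration only.
from scipy import stats

males = [10.5, 12.0, 9.8, 11.3, 13.1, 10.9, 12.4, 11.7]
females = [13.2, 14.1, 12.8, 15.0, 13.7, 14.5, 12.9, 13.8]

# Student's independent-samples t-test (assumes equal variances by default)
t_stat, p_value = stats.ttest_ind(males, females)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```

The two lists play the role of the two independent groups of the dichotomous independent variable, and the values are the continuous dependent variable.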

In this **introductory guide** to the independent-samples t-test, we first describe two study designs where the independent-samples t-test is most often used, before explaining why an independent-samples t-test is carried out to analyse your data rather than simply using descriptive statistics. Next, we set out the basic requirements and assumptions of the independent-samples t-test, which your study design and data must meet. Making sure that your study design, variables and data **pass/meet** these assumptions is **critical** because if they do not, the independent-samples t-test is likely to be the **incorrect** statistical test to use. In the third section of this introductory guide, we set out the example we use to illustrate how to carry out an independent-samples t-test using **SPSS Statistics**, before showing how to set up your data in the **Variable View** and **Data View** of SPSS Statistics. In the Procedure section, we set out the simple 8-step procedure in SPSS Statistics to carry out an **independent-samples t-test**, including useful **descriptive statistics**. Next, we explain how to interpret the main results of the independent-samples t-test, where you will determine whether there is a statistically significant **mean difference** between your two groups in terms of the dependent variable, as well as a **95% confidence interval (CI) of the mean difference**. We also introduce the importance of calculating an **effect size** for your results. In the final section, Reporting, we explain the information you should include when reporting your results. A Bibliography and Referencing section is included at the end for further reading.

###### SPSS Statistics

**Study designs** where an independent-samples t-test could be used

An independent-samples t-test is most often used to analyse the results of two different types of **study design**: (a) determining if there is a mean difference between **two independent groups**; and (b) determining if there is a mean difference between **two interventions**. To learn more about these two types of study design where the independent-samples t-test can be used, see the examples below:

Note 1: An independent-samples t-test can also be used to determine if there is a mean difference between **two change scores** (also known as **gain scores**). However, a one-way ANCOVA is more commonly recommended for this type of study design.

Note 2: In the examples that follow we introduce terms such as **mean difference**, **95% CI of the mean difference**, **statistical significance value** (i.e., *p*-value), and **effect size**. However, do not worry if you do not understand these terms. They will become clearer as you work through this introductory guide.


**Difference between two INDEPENDENT GROUPS**

Some degree courses include mandatory 1-year internships (also known as placements), which are thought to help students’ job prospects after graduating. Therefore, imagine that a researcher wanted to determine whether students who enrolled in a 3-year degree course that included a mandatory 1-year internship (also known as a placement) received better graduate salaries than students who did not undertake an internship. The researcher was specifically interested in students who undertook a Finance degree.

A total of 60 first-year graduates who had undertaken a Finance degree were recruited to the study. Of these 60 graduates, 30 had undertaken a 3-year Finance degree that included a mandatory 1-year internship. This group of 30 graduates represented the "internship group". The other 30 had undertaken a 3-year Finance degree that did not include an internship. This group of 30 graduates represented the "no internship group". The first-year graduate salaries of all 60 graduates were recorded in US dollars.

Therefore, in this study the dependent variable was "salary", measured in US dollars, and the independent variable was "course type", which had two independent groups: "internship group" and "no internship group". The two groups were independent because no graduate could be in more than one group and the students in the two groups could not influence each other’s salaries.

The researcher analysed the data collected to determine whether salaries were greater (or smaller) in the internship group compared to the no internship group. An **independent-samples t-test** was used to determine whether there was a statistically significant **mean difference** in the salaries between the internship group and the no internship group. The **95% confidence interval (CI) of the mean difference** was included as part of the analysis to improve our assessment of the independent-samples t-test result. The independent-samples t-test was supplemented with an **effect size calculation** to assess the practical/clinical importance of the mean difference in salaries between the internship group and the no internship group. Together, the **mean difference**, **95% CI of the mean difference**, **statistical significance value** (i.e., *p*-value), and **effect size calculation** are used to determine whether students who enrolled in a 3-year degree course that included a mandatory 1-year internship (also known as a placement) received better graduate salaries than students who did not undertake an internship.

**Difference between two TREATMENT/EXPERIMENTAL GROUPS**

Some parents use financial rewards (i.e., money) as an incentive to encourage their children to get top marks in their exams (e.g., an "A" grade or what might be a score of 80 or more out of 100). Therefore, imagine that an educational psychologist wanted to determine whether financial rewards increased academic performance amongst school children.

A total of 26 students were randomly assigned to one of two groups. In one group, the school children were offered $500 if they got an "A" grade in their maths exam. This is called the "experimental group". In the other group, the school children were not offered anything, irrespective of how well they performed in the same maths exam. This is called the "control group". All 26 students undertook the same maths exam. After the students had taken the maths exam, their scores (between 0 and 100 marks) were recorded.

Therefore, in this study the dependent variable was "exam result", measured from 0 to 100 marks, and the independent variable was "financial reward", which had two independent groups: "experimental group" and "control group". The two groups were independent because no student could be in more than one group and the students in the two groups were unable to influence each other’s exam results.

The researcher analysed the data collected to determine whether the exam results were better (or worse) amongst students in the experimental group compared to the control group. An **independent-samples t-test** was used to determine whether there was a statistically significant **mean difference** in the exam results between the experimental group and the control group. The **95% confidence interval (CI) of the mean difference** was included as part of the analysis to improve our assessment of the independent-samples t-test result. The independent-samples t-test was supplemented with an **effect size calculation** to assess the practical/clinical importance of the mean difference in exam results between the experimental group and the control group. Together, the **mean difference**, **95% CI of the mean difference**, **statistical significance value** (i.e., *p*-value), and **effect size calculation** are used to determine whether financial rewards increased academic performance amongst school children.
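The effect size mentioned above is most commonly reported as Cohen's d for an independent-samples t-test. As a rough sketch of the arithmetic (using invented exam scores, not data from the study described here), Cohen's d divides the mean difference by the pooled standard deviation:

```python
# Sketch of Cohen's d, a common effect size for the independent-samples t-test.
# The scores below are invented for illustration only.
import math

experimental = [78, 85, 92, 88, 74, 90, 81, 86, 95, 79, 84, 88, 91]
control = [70, 65, 80, 72, 68, 75, 62, 78, 71, 66, 74, 69, 73]

def cohens_d(a, b):
    """Cohen's d using the pooled standard deviation of the two groups."""
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    var_a = sum((x - mean_a) ** 2 for x in a) / (na - 1)  # sample variance
    var_b = sum((x - mean_b) ** 2 for x in b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

d = cohens_d(experimental, control)
print(f"Cohen's d = {d:.2f}")
```

A common rule of thumb treats d around 0.2 as a small effect, 0.5 as medium and 0.8 as large, although such benchmarks should always be weighed against what is practically meaningful in your field.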

In the next section we explain why you are using an independent-samples t-test to analyse your results, rather than simply using descriptive statistics.


**Understanding** why the independent-samples t-test is being used

To briefly recap, an independent-samples t-test is used to determine whether there is a **difference** between two independent, unrelated groups (e.g., undergraduate versus PhD students, athletes given supplement A versus athletes given supplement B, etc.) in terms of the **mean** of a continuous dependent variable (e.g., first-year salary after graduation in US dollars, time to complete a 100 meter sprint in seconds, etc.) in the **population** (e.g., the population of all PhD students in the United States, the population of all sprinters who have competed in an internationally-recognised 100 meter sprint event for their country in the last 12 months, etc.).

In other words, you are using an independent-samples t-test because you are **not** only interested in determining whether there is a mean difference in the dependent variable between your two groups in your single study (i.e., the **sample** of 150 male students and **sample** of 150 female students), but whether there is a mean difference in these **two samples** in the wider **populations** from which these **two samples** were drawn. To understand these two concepts – sample versus population – and how the independent-samples t-test is used to make **inferences** from a sample to a population, imagine a study where a researcher wanted to know if there was a mean difference in the amount of time male and female university students in the United States use their mobile phones each week. In this example, the dependent variable is "weekly screen time" and the two independent groups are "male" and "female" university students in the United States.

Ideally, we would test the whole **population** of university students in the United States to determine if there were differences in weekly screen time between males and females. We could then simply compare the mean difference in weekly screen time between all male and all female students using **descriptive statistics** to understand whether there was a difference. However, it is rarely possible to study the whole population (i.e., imagine the time and cost involved in collecting data on weekly screen time for the millions of university students in the United States). Since we **cannot test everyone**, we use a **sample**, which is a **subset** of the **population**. In order to make **inferences** from a sample to a population, we try to obtain a sample that is **representative** of the population we are studying. Here, the term "**inferences**" simply means that we are trying to make **predictions/statements** about a population using data from a sample that has been drawn from that population.

For example, imagine that we **randomly selected** 150 male and 150 female university students in the United States to form our **two samples**. After recording the weekly screen time of these 300 students over a 3-month period, we found that female students spent **27 minutes** more time viewing their mobile phones each week compared to male students. Therefore, 27 minutes is the **mean difference** in weekly screen time between male and female university students in our **two samples**. This **sample mean difference**, which is called a "**point estimate**", is the **best estimate** that we have of what the **population mean difference** is (i.e., what the mean difference in weekly screen time is between **all** male and female university students in the United States, which is the population being studied).

We use this **sample** mean difference to **estimate** the **population** mean difference. However, the mean difference of 27 minutes in weekly screen time between males and females is based on only a **single study** of one sample of 150 male students and another sample of 150 female students, and **not** from the millions of university students in the United States. If we carried out a **second study** with a sample of 150 male students and a sample of 150 female students, or a **third study** with a sample of 150 male students and a sample of 150 female students, or a **fourth study** with a sample of 150 male students and a sample of 150 female students, it is **likely** that the mean difference in weekly screen time will be different each time, or at least, most of the time (e.g., 31 minutes in the second study, 25 minutes in the third study, 28 minutes in the fourth study). In other words, there will be some **variation** in the sample mean difference **each time** we sample our populations.
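This sampling variation is easy to see in a small simulation. The sketch below repeatedly draws samples of 150 from two hypothetical normally distributed populations (the population means of 300 and 327 "minutes" and the standard deviation of 60 are invented, chosen only so the population mean difference is 27 minutes) and prints the sample mean difference from each "study":

```python
# Sketch of sampling variation: each simulated "study" draws fresh samples,
# so the sample mean difference varies around the population value (27).
# All population parameters below are invented for illustration.
import random

random.seed(1)

def sample_mean(mu, sigma, n):
    """Mean of n draws from a normal(mu, sigma) population."""
    return sum(random.gauss(mu, sigma) for _ in range(n)) / n

diffs = []
for study in range(5):
    male_mean = sample_mean(300, 60, 150)    # hypothetical male population
    female_mean = sample_mean(327, 60, 150)  # hypothetical female population
    diffs.append(female_mean - male_mean)

print([round(d, 1) for d in diffs])  # a different mean difference each "study"
```

Each run of the loop plays the role of a separate study, and no two studies return exactly the same estimate, which is exactly the uncertainty the 95% CI is designed to quantify.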

As previously stated, the **sample** mean difference is the **best estimate** of the **population** mean difference, but since we have just one study where we took a single sample from each of our two populations, we know that this estimate of the population mean difference in weekly screen time between male and female students will **vary** (i.e., it will not always be the same as in this study). In order to **quantify** this **uncertainty** in our **estimate** of the population mean difference, we can use the **independent-samples t-test** to provide a **95% confidence interval (CI)**, which is a way of providing a **measure** of this uncertainty. Presenting a mean difference with a 95% CI to understand what the population mean difference is, and your uncertainty in its **value**, is an approach called "**estimation**". Another approach uses a **null hypothesis** and *p*-value and is called **significance testing** or **Null Hypothesis Significance Testing (NHST)**. It is important that you understand both approaches when analysing your data using an independent-samples t-test because they affect how you: (a) **carry out** an independent-samples t-test using SPSS Statistics, as discussed in the Procedure section later; and (b) **interpret** and **report** your results, as discussed in the Interpreting Results and Reporting sections respectively. In fact, it is becoming more common practice to report both *p*-values **and** confidence intervals (CI) in journal articles and student reports (e.g., dissertations/theses). Therefore, both approaches are briefly discussed below:

**Estimation using confidence intervals (CI):** One approach to quantify the uncertainty in using the sample mean difference to **estimate** the population mean difference is to use a **confidence interval (CI)**. When doing so, you can set different **levels** of confidence for your confidence interval (CI). For example, the most common confidence interval (CI) is the **95% CI**, which is the **default** in SPSS Statistics (and most statistics packages). Therefore, we show you how to select the **95% CI** using SPSS Statistics in the Procedure section later.

A confidence interval will give you, based on your sample data, a **likely/plausible range of values** that the mean difference **might be** in the **population**. For example, we know that the mean difference in weekly screen time between male and female students in our **two samples** was **27 minutes**. A **95% CI** could suggest that the mean difference in weekly screen time between male and female students in the **population** might plausibly be **somewhere between 13 minutes and 42 minutes**. Here, 13 minutes reflects the **lower bound** of the 95% CI and 42 minutes reflects the **upper bound** of the 95% CI, both of which are reported by SPSS Statistics when you carry out an independent-samples t-test.

The 95% CI is a very useful method to quantify the uncertainty in using the sample mean difference to **estimate** the population mean difference because although the sample mean difference is the **best estimate** that you have of the population mean difference, in reality, the mean difference in the population **could plausibly be any value between the lower bound and upper bound of the 95% CI**.
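SPSS Statistics reports these bounds for you, but the underlying arithmetic can be sketched directly. The following Python sketch computes the 95% CI of a mean difference using the pooled-variance formula that the standard (Student's) independent-samples t-test assumes; all data values are invented for illustration:

```python
# Sketch of a 95% CI for the difference between two independent group means,
# using the pooled-variance standard error. Data values are invented.
import math
from scipy import stats

group_a = [25.1, 31.4, 28.2, 22.9, 30.5, 27.8, 24.6, 29.3, 26.7, 23.5]
group_b = [18.4, 22.1, 19.8, 24.3, 20.6, 17.9, 23.2, 21.5, 19.1, 22.8]

na, nb = len(group_a), len(group_b)
mean_a, mean_b = sum(group_a) / na, sum(group_b) / nb
var_a = sum((x - mean_a) ** 2 for x in group_a) / (na - 1)
var_b = sum((x - mean_b) ** 2 for x in group_b) / (nb - 1)

# Pooled variance and standard error of the mean difference
df = na + nb - 2
pooled_var = ((na - 1) * var_a + (nb - 1) * var_b) / df
se = math.sqrt(pooled_var * (1 / na + 1 / nb))

mean_diff = mean_a - mean_b
t_crit = stats.t.ppf(0.975, df)  # two-tailed 95% critical value of t
lower, upper = mean_diff - t_crit * se, mean_diff + t_crit * se
print(f"mean difference = {mean_diff:.2f}, 95% CI [{lower:.2f}, {upper:.2f}]")
```

Widening the interval (e.g., a 99% CI, using `stats.t.ppf(0.995, df)`) increases your confidence that the population mean difference lies between the reported bounds, at the cost of a less precise range.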

If you choose to **increase** the CI when carrying out an independent-samples t-test from, for example, **95%** to **99%**, you **increase** your level of confidence that the population mean difference is somewhere between the lower and upper bounds that are reported.

**Null Hypothesis Significance Testing (NHST) using *p*-values:** Another approach to statistical inference is to test your **sample** data **against** a **null hypothesis** in a process called **Null Hypothesis Significance Testing (NHST)**. NHST is used as a means to find evidence against a null hypothesis. The null hypothesis most commonly tested when carrying out an independent-samples t-test is that there is **no** mean difference between two groups in the population. The *p*-value calculated as part of NHST is the **probability** of finding a result **as extreme or more extreme** than the result in your study, **given that the null hypothesis is true**. If your result – or one more extreme – is **unlikely** to have happened **by chance** (i.e., due to **sampling variation**), you make the declaration that you **believe** the **null hypothesis** is **false** (i.e., there **is** a mean difference between the two groups in the population). NHST does **not** say what this difference might be. Whether to **accept** or **reject** that there is a difference in the population is based on a **preset probability level**.

Note: Unless you are familiar with statistics, the idea of NHST can be a little challenging at first and benefits from a detailed description, but we will try to give a brief overview in this section. However, since it can be challenging to understand how the independent-samples t-test is used under NHST, we will be adding a guide dedicated to explaining this, including concepts such as the *t*-distribution, **alpha (ɑ) levels**, **statistical power**, **Type I** and **Type II errors**, *p*-values, and more. If you would like us to let you know when we add this guide to the site, please contact us.

Under NHST, the independent-samples t-test is used to test the **null hypothesis** that there is **no** mean difference between your two groups in the **population**, based on the data from your **two samples**. For example, it tests the null hypothesis that there is **no** mean difference in weekly screen time between male and female students in the population. Furthermore, the independent-samples t-test is **typically** used to test the null hypothesis that the mean difference between the two groups in the population is **zero** (e.g., a mean difference of 0 minutes of weekly screen time between male and female students), which is also referred to as the **nil hypothesis**, but it can be tested against a **specific value** (e.g., a mean difference of 20 minutes of weekly screen time between male and female students). Whilst the **null hypothesis** states that there is **no** mean difference between your two groups in the population, the **alternative hypothesis** states that there **is** a mean difference between your two groups in the **population** (e.g., the alternative hypothesis that the mean difference in weekly screen time between male and female students is **not** 0 minutes, or the specific value you set, such as 20 minutes, in the population).

Determining whether to **reject** or **fail to reject** the **null hypothesis** is based on a **preset probability level** (i.e., sometimes called an **alpha (ɑ) level**). This **alpha (ɑ) level** is usually set at **.05**, which means that if the *p*-value is **less than .05** (i.e., *p* < .05), you declare the result to be **statistically significant**, such that you **reject** the **null hypothesis** and **accept** the **alternative hypothesis**. Alternatively, if the *p*-value is **greater than .05** (i.e., *p* > .05), you declare the result to be **not statistically significant**, such that you **fail to reject** the **null hypothesis** and **reject** the **alternative hypothesis**.

These concepts can be fairly difficult to understand without further explanation, but for the purpose of this introductory guide, the important point is that the independent-samples t-test will produce a *p*-value that can be used to either: (a) **reject** the **null hypothesis** of no mean difference in the population and **accept** the **alternative hypothesis** that there is a mean difference; or (b) **fail to reject** the **null hypothesis**, suggesting that there is **not enough evidence** to accept the alternative hypothesis that there is a mean difference in the population (i.e., **rejecting** the **alternative hypothesis** that there is a mean difference). It is important to note that you **cannot** accept the null hypothesis that there is no mean difference.
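This decision rule is mechanical once the *p*-value is in hand. The sketch below applies the conventional alpha level of .05 to a *p*-value from SciPy's independent-samples t-test (the data values are invented for illustration):

```python
# Sketch of the NHST decision rule at a preset alpha of .05.
# Data values are invented for illustration only.
from scipy import stats

group_a = [14, 18, 16, 20, 15, 17, 19, 16]
group_b = [13, 17, 15, 19, 16, 14, 18, 15]

t_stat, p_value = stats.ttest_ind(group_a, group_b)

alpha = 0.05
if p_value < alpha:
    decision = "reject the null hypothesis"          # statistically significant
else:
    decision = "fail to reject the null hypothesis"  # not enough evidence

print(f"p = {p_value:.3f}: {decision}")
```

Note that a non-significant result only means there was not enough evidence against the null hypothesis; it does not let you accept the null hypothesis that there is no mean difference.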

A major criticism of NHST is that it results in a **dichotomous decision** where you simply conclude that there either **is** a mean difference between your two groups in the population or there is **not** a mean difference. Therefore, it provides **far less information** than the 95% CI, which is now the preferred approach. Nonetheless, when carrying out an independent-samples t-test, it is common to interpret and report **both** the *p*-value **and** 95% CI.

Hopefully this section has highlighted how: **(a)** the independent-samples t-test, using the **NHST** approach, gives you an idea of whether there is a mean difference between your two groups in the population, based on your sample data; and **(b)** the independent-samples t-test, as a method of **estimation** using **confidence intervals (CI)**, provides a plausible range of values that the mean difference between your two groups could be in the population, based on your sample data. In order that these *p*-values and confidence intervals (CI) are **accurate/valid**, your sample data must **"pass/meet"** a number of **basic requirements** and **assumptions** of the independent-samples t-test. This is discussed in the next section: **Basic requirements** and **assumptions** of the independent-samples t-test.


**Basic requirements** and **assumptions** of the independent-samples t-test

The first and **most important** step in an independent-samples t-test analysis is to check whether it is appropriate to use this statistical test. After all, the independent-samples t-test will only give you **valid/accurate results** if your study design and data "**pass/meet**" **six assumptions** that underpin the independent-samples t-test.

In many cases, the independent-samples t-test will be the **incorrect** statistical test to use because your data "**violates/does not meet**" one or more of these assumptions. This is not uncommon when working with real-world data, which is often "messy", as opposed to textbook examples. However, there is often a solution, whether this involves using a **different statistical test**, or making **adjustments** to your **data** so that you can continue to use an independent-samples t-test.

Before discussing these options further, we briefly set out the six assumptions of the independent-samples t-test, **three** of which relate to your **study design** and how you **measured** your **variables** (i.e., Assumptions #1, #2 and #3 below), and **three** which relate to the **characteristics** of your **data** (i.e., Assumptions #4, #5 and #6 below):

**Assumption #1:** You have **one dependent variable** that is measured on a **continuous** scale (i.e., it is measured at the **interval** or **ratio** level). Examples of **continuous variables** include salary (measured in US dollars), revision time (measured in hours), height (measured in cm), test score (measured from 0 to 100), intelligence (measured using IQ score), age (measured in years), and so forth.

**Assumption #2:** You have **one independent variable** that consists of **two categorical**, **independent groups** (i.e., you have a **dichotomous variable**). A **dichotomous variable** can be either **ordinal** or **nominal**. **Ordinal variables** with **two groups**, also referred to as **levels**, include income level (two levels: "low income" and "high income"), exam result (two levels: "pass" and "fail"), intelligence (two levels: "below average IQ" and "above average IQ"), age group (two levels: "under 21 years old" and "21 years old and over"), educational level (two levels: "undergraduate" and "postgraduate"), and so forth. **Nominal variables** with **two groups** include gender (two groups: "male" and "female"), drug trial (two groups: "drug A" and "drug B"), choice of transport (two groups: "car" and "bus"), employment status (two groups: "employed" and "unemployed"), credit card application (two groups: "granted" and "denied"), presence of heart disease (two groups: "yes" and "no"), and so forth.

Note: You can learn more about the differences between dependent and independent variables, as well as continuous, ordinal, nominal and dichotomous variables in our guide: Types of variable.

**Assumption #3:** There should be **independence of observations**, which means that there is **no relationship between the observations in each group of the independent variable or between the groups themselves**. Indeed, an important distinction is made in statistics when comparing values from either **different** individuals or from the **same** individuals. Independent groups (in an independent-samples t-test) are groups where there is no relationship between the participants in either of the groups. Most often, this occurs simply by having different participants in each group.

For example, if you split a group of individuals into two groups based on their gender (i.e., a male group and a female group), no one in the female group can be in the male group and vice versa. As another example, you might randomly assign participants to either a control trial or an intervention trial. Again, no participant can be in both the control group and the intervention group. This will be true of any two independent groups you form (i.e., a participant cannot be a member of both groups). In actual fact, the "no relationship" part extends a little further and requires that participants in both groups are considered unrelated, not just different people; for example, participants might be considered related if they are husband and wife, or twins. Furthermore, participants in Group A cannot influence any of the participants in Group B, and vice versa.

If you are using the **same** participants in each group, or they are otherwise **related**, a dependent t-test (also known as a **paired-samples t-test**) is a more appropriate test. An example of where related observations might be a problem is if all the participants in your study (or the participants within each group) were assessed together, such that a participant's performance affects another participant's performance (e.g., participants encourage each other to lose more weight in a 'weight loss intervention' when assessed as a group compared to being assessed individually; or athletic participants being asked to complete '100m sprint tests' together rather than individually, with the added competition amongst participants resulting in faster times, etc.).

Independence of observations is largely a study design issue rather than something you can test for, but it is an important assumption of the independent-samples t-test. If your study fails this assumption, you will need to use another statistical test instead of the independent-samples t-test.

Since assumptions #1, #2 and #3 relate to your **study design** and how you **measured** your **variables**, if **any** of these three assumptions are **not met** (i.e., if any of these assumptions do not fit with your research), the independent-samples t-test is the **incorrect** statistical test to analyse your data. It is likely that there will be other statistical tests you can use instead, but the independent-samples t-test is not the correct test.

After checking if your study design and variables meet **assumptions #1**, **#2** and** #3**, you should now check if your **data** also meets **assumptions #4**, **#5** and **#6** below. When checking if your data meets these three assumptions, do not be surprised if this process takes up the majority of the time you dedicate to carrying out your analysis. As we mentioned above, it is not uncommon for **one or more** of these assumptions to be **violated** (i.e., not met) when working with real-world data rather than textbook examples. However, with the right guidance this does not need to be a difficult process and there are often other statistical analysis techniques that you can carry out that will allow you to continue with your analysis.

**Assumption #4:** There should be **no problematic outliers** in your data. An outlier is a single case/observation in your data set that does not follow the usual pattern. For example, imagine a study comparing income levels between male and female graduates in full-time employment in the United Kingdom in their first year after leaving university. Of the 60 graduates in the study, salaries ranged between £16,000 and £48,000, except for one graduate who earnt more than £1,500,000 (e.g., she had started a tech firm at university and sold a stake in this during her first year after graduation; or he was working for his father's family business, which could afford to pay an extremely high salary that was not linked to his work at the business). In the event, you would be unlikely to know the reason why the graduate earnt more than £1,500,000, only that this salary does not fit the usual pattern of salaries amongst the **sample** of 60 graduates in the study (and most likely not the wider **population** of first-year graduates). When using an independent-samples t-test, this might be considered an **outlier**.

Strictly speaking, testing for outliers is **not** an assumption of the independent-samples t-test. However, outliers can be problematic because they can disproportionately influence the assumptions and result of the independent-samples t-test, and lead to **invalid conclusions**. Therefore, you need to detect whether there are any problematic outliers in your data before running an independent-samples t-test. Fortunately, there are several methods to **detect** outliers using SPSS Statistics, as well as methods to **deal with outliers** when you have any in your data. Ways to deal with outliers range from applying **transformations** to your data to techniques such as **winsorization**, which can help you to overcome problems associated with having outliers in your data and allow you to continue with your analysis.

Note: Outliers are not inherently "bad" (i.e., an outlier is not bad simply because it is an outlier). Therefore, when deciding how to deal with outliers in your data, you not only need to consider the **statistical implications** of any outliers, but also **theoretical factors** that relate to your research goals and study design.

**Assumption #5:** Your **dependent variable** should be **approximately normally distributed for each category of your independent variable**. In other words, the distribution of scores of your dependent variable should **approximately** follow a **normal distribution** in each category of your independent variable. Taking the example of male and female first-year graduate salaries above, the distribution of graduate salaries should be approximately normally distributed for "males" and approximately normally distributed for "females". Therefore, before you run an independent-samples t-test, you need to check whether these two groups are approximately normally distributed using a mix of **numerical methods** (e.g., the **Shapiro-Wilk test of normality**) and **graphical methods** (e.g., histograms and Normal Q-Q plots), all of which can be carried out using SPSS Statistics. If your data is **not** normally distributed, there are methods to **deal with non-normality** (e.g., applying a **transformation** to your data), and after applying these methods, it may still be possible to use an independent-samples t-test. If your data is not normally distributed and no methods are able to "coax" your data towards normality, the independent-samples t-test **may** be the **incorrect** statistical test to analyse your data (although there are some exceptions to this). However, there are other statistical tests that can be used when your data violates the assumption of normality.

Note: Technically, it is the **residuals** that must be approximately normally distributed within each group rather than the **data** within each group, but in an independent-samples t-test, the results will be the same.

**Assumption #6:** There needs to be **homogeneity of variances**, which means that the **(population) variance for each category of your independent variable is the same**. You can test whether your data meets this assumption in SPSS Statistics using **Levene's test for homogeneity of variances**. If your data does **not** meet this assumption, the independent-samples t-test is **not** a suitable statistical test. However, you can run a **different t-test** instead, known as the **Welch t-test**, which makes an adjustment for **unequal variances**. The Welch t-test can also be run using SPSS Statistics.
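SPSS Statistics runs Levene's test and the Welch t-test for you, but it can help to see what the Welch adjustment actually computes. Below is a minimal pure-Python sketch with invented revision-time data (all numbers are hypothetical), calculating the Welch t statistic and its Welch-Satterthwaite degrees of freedom:

```python
from math import sqrt
from statistics import mean, variance  # variance() is the sample variance

# Invented revision-time data (hours) for two independent groups.
group_a = [10.2, 12.1, 9.8, 11.5, 10.9, 13.0, 12.4, 9.5]
group_b = [12.8, 14.1, 13.5, 15.2, 12.9, 14.8, 13.3, 15.0]

def welch_t(a, b):
    """Welch t statistic and Welch-Satterthwaite degrees of freedom,
    which do not assume equal variances in the two groups."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

t, df = welch_t(group_a, group_b)
print(f"Welch t = {t:.2f}, df = {df:.1f}")
```

A negative t statistic here simply means the first group's mean is lower than the second's. The p-value is then looked up against a t distribution with these (typically non-integer) degrees of freedom, which SPSS Statistics reports in the row labelled "Equal variances not assumed".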

Therefore, before running an independent-samples t-test it is **critical** that you **first** check whether your **data** meets assumption #4 (i.e., **no problematic outliers**), assumption #5 (i.e., **normality**) and assumption #6 (i.e., **homogeneity of variances**). In some cases, **failure** to meet one or more of these assumptions will make the independent-samples t-test the **incorrect** statistical test to use. In other cases, you may simply have to make some **adjustments** to your data before continuing to analyse it using an independent-samples t-test.

Due to the importance of checking that your data meets assumptions #4, #5 and #6, we dedicate seven pages of our enhanced independent t-test guide to help you get this right. This includes: (1) setting out the procedures in SPSS Statistics to test these assumptions; (2) explaining how to interpret the SPSS Statistics output to determine whether you have met or violated these assumptions; and (3) explaining what options you have if your data does violate one or more of these assumptions. You can access this enhanced guide by subscribing to Laerd Statistics, which will also give you access to all of the enhanced guides on our site.

When you are confident that your data has met all **six** assumptions described above, you can carry out an independent-samples t-test to determine whether there is a difference between the two groups of your independent variable in terms of the mean of your dependent variable. In the sections that follow, we show you how to do this using SPSS Statistics, based on the example we set out in the next section: **Example** used in this guide.