
# Types of Statistical Tests


There are hundreds of ways to visualize data, including data tables, pie charts, line charts, etc. Note that the analysis is limited to your data and that you are not extrapolating any conclusions about a full population. Descriptive statistics reports generally include summary data tables (like the age table shown earlier), graphics (like the charts shown earlier) and text to explain what the charts and tables are showing. There are thousands of expensive research reports that do nothing more than descriptive statistics.

Descriptive statistics usually involve measures of central tendency (mean, median, mode) and measures of dispersion (variance, standard deviation, etc.).
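These measures can be computed directly with Python's standard library; a minimal sketch, using illustrative data (the ages below are made up for demonstration):

```python
# Measures of central tendency and dispersion with Python's standard
# library (a minimal sketch; the ages below are illustrative).
import statistics

ages = [23, 25, 25, 29, 31, 34, 40]

mean = statistics.mean(ages)          # central tendency: arithmetic mean
median = statistics.median(ages)      # central tendency: middle value
mode = statistics.mode(ages)          # central tendency: most frequent value
variance = statistics.variance(ages)  # dispersion: sample variance (n - 1 divisor)
sd = statistics.stdev(ages)           # dispersion: sample standard deviation
```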

Well, there are about 7 billion people in the world, and it would be impossible to ask every single person about their ice cream preferences. Instead, you would try to sample a representative population of people and then extrapolate your sample results to the entire population. This is the idea behind inferential statistics. As you can imagine, getting a representative sample is really important. There are all sorts of sampling strategies, including random sampling. A true random sample means that everyone in the target population has an equal chance of being selected for the sample.

Imagine how difficult that would be in the case of the entire world population since not everyone in the world is easily accessible by phone, email, etc. Another key component of proper sampling is the size of the sample.
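Simple random sampling of the kind described above can be sketched with the standard library (the population size and sample size here are illustrative assumptions):

```python
# Simple random sampling: every member of the (hypothetical) target
# population has an equal chance of selection; a minimal sketch.
import random

population = list(range(1, 10001))         # illustrative population of 10,000 IDs
random.seed(42)                            # fixed seed for reproducibility
sample = random.sample(population, k=100)  # draw 100 without replacement
```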

Obviously, the larger the sample size, the better, but there are trade-offs in time and money when it comes to obtaining a large sample.

If we rank the data and, after ranking, group the observations into percentiles, we can get better information about the pattern of spread of the variables.

In percentiles, we rank the observations into equal parts. The median is the 50th percentile. Variance[7] is a measure of how spread out the distribution is. It gives an indication of how closely an individual observation clusters about the mean value.
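Ranking observations into percentile cut points can be done with the standard library; a minimal sketch with illustrative data:

```python
# Ranking observations into percentiles with the standard library
# (a minimal sketch; the data are illustrative).
import statistics

data = [2, 4, 5, 7, 8, 9, 11, 12, 13, 15, 18, 21]

# quantiles(n=4) returns the three quartile cut points; the middle
# cut point is the median, i.e., the 50th percentile.
q1, q2, q3 = statistics.quantiles(data, n=4)
```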

The variance of a population is defined by the following formula:

σ² = Σ(x − μ)² / N, where μ is the population mean and N the population size.

The variance of a sample is defined by a slightly different formula:

s² = Σ(x − x̄)² / (n − 1), where x̄ is the sample mean and n the sample size.

Each observation is free to vary, except the last one, which must be a defined value. The variance is measured in squared units. To make the interpretation of the data simple and to retain the basic unit of observation, the square root of the variance is used.

The square root of the variance is the standard deviation (SD). The SD of a population is σ = √[Σ(x − μ)² / N]; the SD of a sample is defined by a slightly different formula: s = √[Σ(x − x̄)² / (n − 1)]. An example of the calculation of variance and SD is illustrated in Table 2. Most biological variables usually cluster around a central value, with symmetrical positive and negative deviations about this point.
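The n versus n − 1 divisor distinction is exactly what the standard library's population and sample functions implement; a minimal sketch with illustrative data:

```python
# Population vs. sample variance and SD: the sample formulas divide by
# n - 1 instead of n (a minimal sketch; the data are illustrative).
import statistics

x = [4, 8, 6, 5, 3, 7]

pop_var = statistics.pvariance(x)  # divides by n
samp_var = statistics.variance(x)  # divides by n - 1
pop_sd = statistics.pstdev(x)      # square root of the population variance
samp_sd = statistics.stdev(x)      # square root of the sample variance
```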

A skewed distribution is one with an asymmetry of the variables about its mean. In a negatively skewed distribution [Figure 3], the mass of the distribution is concentrated on the right of the figure, leading to a longer left tail. In a positively skewed distribution [Figure 3], the mass of the distribution is concentrated on the left of the figure, leading to a longer right tail.

In inferential statistics, data are analysed from a sample to make inferences about the larger collection of the population.
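The direction of skew described above can be checked numerically; a hedged sketch using SciPy's sample skewness (SciPy is an assumed dependency and the data are illustrative):

```python
# Checking the direction of skew numerically: a long right tail gives a
# positive skewness coefficient, a long left tail a negative one.
# SciPy is assumed available; the data are illustrative.
from scipy import stats

right_tailed = [1, 2, 2, 3, 3, 3, 4, 10]  # long right tail: positive skew
left_tailed = [-v for v in right_tailed]  # mirrored data: negative skew

right_skew = stats.skew(right_tailed)
left_skew = stats.skew(left_tailed)
```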

The purpose is to answer or test hypotheses. A hypothesis (plural: hypotheses) is a proposed explanation for a phenomenon. Hypothesis tests are thus procedures for making rational decisions about the reality of observed effects.

Probability is the measure of the likelihood that an event will occur. Probability is quantified as a number between 0 and 1 where 0 indicates impossibility and 1 indicates certainty.

The null hypothesis (H0) states that there is no relationship between the variables, whereas the alternative hypothesis (H1 or Ha) denotes that a relationship between the variables is expected to be true. The P value, or the calculated probability, is the probability of the event occurring by chance if the null hypothesis is true.

The P value is a number between 0 and 1 and is interpreted by researchers in deciding whether to reject or retain the null hypothesis [Table 3]. If the null hypothesis (H0) is incorrectly rejected, this is known as a Type I error. Numerical data (quantitative variables) that are normally distributed are analysed with parametric tests.

However, if the distribution of the sample is skewed towards one side or the distribution is unknown due to the small sample size, non-parametric[14] statistical techniques are used. Non-parametric tests are used to analyse ordinal and categorical data. The parametric tests assume that the data are on a quantitative (numerical) scale with a normal distribution of the underlying population, that the samples have the same variance (homogeneity of variances) and that the samples are randomly drawn from the population, with the observations within a group being independent of each other.

Student's t-test is used to test the null hypothesis that there is no difference between the means of the two groups. It is used in three circumstances: to test whether a sample mean differs from a known population mean (one-sample t-test), to compare the means of two independent groups (unpaired t-test) and to compare paired observations on the same subjects (paired t-test). The group variances can be compared using the F-test.
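The common t-test variants can be sketched with SciPy (SciPy is an assumed dependency; all data below are illustrative, not taken from the text):

```python
# A hedged sketch of common t-test variants using SciPy.
from scipy import stats

group_a = [5.1, 4.9, 5.6, 5.3, 5.0, 5.4]
group_b = [6.2, 6.0, 6.5, 6.1, 6.4, 6.3]

# One-sample: does the mean of group_a differ from a hypothesised value?
t1, p1 = stats.ttest_1samp(group_a, popmean=5.0)

# Two independent samples (unpaired t-test).
t2, p2 = stats.ttest_ind(group_a, group_b)

# Paired observations on the same subjects (paired t-test).
t3, p3 = stats.ttest_rel(group_a, group_b)
```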

If F differs significantly from 1, the assumption of equal variances does not hold and a modified t-test should be used. The Student's t-test cannot be used for comparison of three or more groups; for that, analysis of variance (ANOVA) is used.

The purpose of ANOVA is to test whether there is any significant difference between the means of two or more groups. The within-group variability (error variance) is the variation that cannot be accounted for by the study design; it is based on random differences present in our samples. The between-group (or effect) variance, however, is the result of our treatment.

These two estimates of variance are compared using the F-test. A repeated-measures ANOVA is used when all variables of a sample are measured under different conditions or at different points in time. As the variables are measured from a sample at different points in time, the measurement of the dependent variable is repeated.
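The F-test comparison of between-group and within-group variance is what a one-way ANOVA computes; a minimal sketch with SciPy (an assumed dependency), using three illustrative groups:

```python
# One-way ANOVA: the F statistic is the ratio of between-group to
# within-group variance. SciPy assumed available; data illustrative.
from scipy import stats

g1 = [12, 14, 11, 13, 12]
g2 = [15, 17, 16, 14, 16]
g3 = [20, 19, 21, 18, 20]

f_stat, p_value = stats.f_oneway(g1, g2, g3)
```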

Using a standard ANOVA in this case is not appropriate because it fails to model the correlation between the repeated measures. When the assumptions of normality are not met and the sample means are not normally distributed, parametric tests can lead to erroneous results.

Non-parametric tests (distribution-free tests) are used in such situations as they do not require the normality assumption. Such tests are, however, generally less efficient than their parametric counterparts; that is, they usually have less power.

As is done for the parametric tests, the test statistic is compared with known values for the sampling distribution of that statistic, and the null hypothesis is accepted or rejected. The types of non-parametric analysis techniques and the corresponding parametric analysis techniques are delineated in Table 5. The sign test and Wilcoxon's signed rank test are used as median tests for one sample.

These tests examine whether one instance of sample data is greater or smaller than the median reference value. Because the sign test uses only the direction of the differences rather than their magnitudes, it is useful when it is difficult to measure the values. Wilcoxon's rank sum test ranks all data points in order, calculates the rank sum of each sample and compares the difference in the rank sums.
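Wilcoxon's signed rank test on paired data can be sketched with SciPy (an assumed dependency; the before/after measurements below are illustrative):

```python
# Wilcoxon's signed rank test on paired data (e.g., a measurement before
# and after an intervention). SciPy assumed available; data illustrative.
from scipy import stats

before = [125, 130, 128, 140, 135, 132, 138, 129]
after = [118, 126, 124, 133, 130, 127, 131, 125]

# Every paired difference here is positive, so the smaller rank sum is 0.
w_stat, p_value = stats.wilcoxon(before, after)
```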

It is used to test the null hypothesis that two samples have the same median or, alternatively, whether observations in one sample tend to be larger than observations in the other. The Mann-Whitney test compares all data xi belonging to the X group with all data yi belonging to the Y group and calculates the probability of xi being greater than yi. The two-sample Kolmogorov-Smirnov (KS) test was designed as a generic method to test whether two random samples are drawn from the same distribution.

The null hypothesis of the KS test is that both distributions are identical. The statistic of the KS test is a distance between the two empirical distributions, computed as the maximum absolute difference between their cumulative curves. The Kruskal-Wallis test is a non-parametric test to analyse the variance. The data values are ranked in increasing order, and the rank sums are calculated, followed by calculation of the test statistic.
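The Mann-Whitney, KS and Kruskal-Wallis tests described above can be sketched together with SciPy (an assumed dependency; the samples are illustrative and deliberately non-overlapping, so the Mann-Whitney U statistic is 0 and the KS distance between the empirical cumulative curves is 1):

```python
# Rank-based two-sample and k-sample tests with SciPy (assumed available;
# data illustrative, chosen with no overlap between groups).
from scipy import stats

x = [3.1, 2.8, 3.4, 3.0, 2.9, 3.3]
y = [4.0, 4.2, 3.9, 4.5, 4.1, 3.8]
z = [5.0, 5.2, 4.9, 5.5, 5.1, 4.8]

u_stat, mw_p = stats.mannwhitneyu(x, y, alternative="two-sided")
ks_stat, ks_p = stats.ks_2samp(x, y)
h_stat, kw_p = stats.kruskal(x, y, z)
```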

In contrast to the Kruskal-Wallis test, the Jonckheere test assumes an a priori ordering, which gives it more statistical power than the Kruskal-Wallis test. The Friedman test is a non-parametric test for testing the difference between several related samples. It is an alternative to repeated-measures ANOVA, used when the same parameter has been measured under different conditions on the same subjects. The Chi-square test, Fisher's exact test and McNemar's test are used to analyse categorical or nominal variables.
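The Friedman test on repeated measurements of the same subjects can be sketched with SciPy (an assumed dependency; the three conditions below are illustrative, with a consistent ordering across subjects):

```python
# Friedman test: the same subjects measured under three conditions.
# SciPy assumed available; data illustrative.
from scipy import stats

cond1 = [10, 12, 11, 14, 13, 12, 11]
cond2 = [14, 16, 15, 17, 16, 15, 14]
cond3 = [18, 20, 19, 22, 21, 19, 18]

chi2_stat, p_value = stats.friedmanchisquare(cond1, cond2, cond3)
```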

The Chi-square test compares the frequencies and tests whether the observed data differ significantly from the expected data if there were no differences between groups (i.e., under the null hypothesis). It is calculated as the sum of the squared difference between the observed (O) and the expected (E) data (the deviation, d) divided by the expected data, by the following formula:

χ² = Σ (O − E)² / E

A Yates correction factor is used when the sample size is small. Fisher's exact test is used to determine if there are non-random associations between two categorical variables. It does not assume random sampling, and instead of referring a calculated statistic to a sampling distribution, it calculates an exact probability.
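Both tests can be sketched on a 2x2 contingency table with SciPy (an assumed dependency; the counts are illustrative). For 2x2 tables, SciPy's `chi2_contingency` applies the Yates continuity correction by default:

```python
# Chi-square and Fisher's exact tests on a 2x2 contingency table
# (illustrative counts; SciPy assumed available).
from scipy import stats

table = [[30, 10],
         [10, 30]]

# 'expected' holds the expected counts under the null of no association.
chi2, p, dof, expected = stats.chi2_contingency(table)
odds_ratio, fisher_p = stats.fisher_exact(table)
```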

McNemar's test is used for paired nominal data. The null hypothesis is that the paired proportions are equal. The Mantel-Haenszel Chi-square test is a multivariate test, as it analyses multiple grouping variables. It stratifies according to the nominated confounding variables and identifies any that affect the primary outcome variable. If the outcome variable is dichotomous, logistic regression is used. Numerous statistical software systems are currently available.
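McNemar's test can be computed by hand, since only the discordant pairs inform it; a minimal sketch using the standard library (the counts b and c are illustrative):

```python
# McNemar's test computed by hand: only the discordant pair counts b and c
# contribute to the test of equal paired proportions (illustrative counts).
import math

b, c = 15, 5  # discordant pairs: changed one way vs. the other

# Chi-square approximation with a continuity correction (1 df).
chi2 = (abs(b - c) - 1) ** 2 / (b + c)

# Exact two-sided binomial p-value: under the null hypothesis, each
# discordant pair is equally likely to fall in b or c.
n = b + c
exact_p = 2 * sum(math.comb(n, k) for k in range(min(b, c) + 1)) / 2 ** n
```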

There are a number of web resources related to statistical power analyses. It is important that a researcher knows the concepts of the basic statistical methods used for the conduct of a research study.

This will help to conduct an appropriately well-designed study leading to valid and reliable results. Inappropriate use of statistical techniques may lead to faulty conclusions, inducing errors and undermining the significance of the article. Bad statistics may lead to bad research, and bad research may lead to unethical practice. Hence, an adequate knowledge of statistics and the appropriate use of statistical tests are important.

An appropriate knowledge about the basic statistical methods will go a long way in improving the research designs and producing quality medical research which can be utilised for formulating the evidence-based guidelines.

Indian J Anaesth. Zulfiqar Ali and S Bala Bhaskar. This article has been corrected; see Indian J Anaesth.

Abstract: Statistical methods involved in carrying out a study include planning, designing, collecting data, analysing, drawing meaningful interpretation and reporting of the research findings. Keywords: basic statistical tools, degree of dispersion, measures of central tendency, parametric tests and non-parametric tests, variables, variance.

## Main Topics

### Inferential Analysis: From Sample to Population

Inferential analysis is used to generalize the results obtained from a random (probability) sample back to the population from which the sample was drawn.


With inferential statistics, you are trying to reach conclusions that extend beyond the immediate data alone. Further examples of inferential techniques include Analysis of Covariance (ANCOVA), regression analysis and many of the multivariate methods such as factor analysis, multidimensional scaling, cluster analysis and discriminant function analysis.