In parametric statistics, the population distribution is known and is described by a set of fixed parameters.
When you use a parametric test, the distribution of values that you get from sampling is close to a normal distribution, or bell curve. Common parametric tests examine and compare the mean or variance of the data. Parametric tests are usually considered more powerful than nonparametric tests (Knapp, 1998).
To compare the difference in means between two groups, a t-test can be used. The data from the two groups may be paired or unpaired. A paired t-test is used when we want to know the difference between two variables for the same person. Unpaired t-tests (or independent t-tests) compare the difference in means between two groups of people to see if there is a significant difference between them. A t-test cannot be used to make comparisons between more than two groups (Xu et al., 2017).
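Both variants can be sketched with SciPy; the data values below are invented purely to illustrate the two calls, not drawn from any real study.

```python
from scipy import stats

# Unpaired (independent) t-test: two different groups of people.
group_a = [5.1, 4.8, 5.5, 5.0, 4.9, 5.3]
group_b = [6.2, 5.9, 6.4, 6.1, 5.8, 6.3]
t_ind, p_ind = stats.ttest_ind(group_a, group_b)  # assumes equal variances by default

# Paired t-test: two measurements on the same people (e.g., before/after).
before = [120, 132, 118, 125, 140, 128]
after = [115, 129, 117, 120, 136, 126]
t_rel, p_rel = stats.ttest_rel(before, after)

print(p_ind, p_rel)
```

The paired version tests the mean of the within-person differences, which is why the two lists must be the same length and in matching order.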
In my research proposal (In a rural ED setting, does placing an Advanced Practice Provider (APP) in triage during high-volume times, compared to a nurse-only triage approach, decrease length of stay (LOS)?), an unpaired or independent t-test could be utilized. I am comparing the mean LOS between two independent groups with assumed equal variance (nurse-only triage and APP in triage). I will use data from patients during the pre-intervention and post-intervention periods and obtain a mean length of stay for each group.
The assumptions of an unpaired t-test are that the dependent variable (LOS) is normally distributed and that the variance of the data is the same between the nurse-only triage and APP-in-triage groups (SPSS Tutorials: Independent Sample T Test, 2022).
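These two assumptions can be checked before running the test. The sketch below uses hypothetical LOS values in minutes (invented for illustration only): Shapiro-Wilk for normality of each group, Levene's test for equality of variances, and then the independent t-test itself.

```python
from scipy import stats

# Hypothetical LOS data (minutes); the numbers are made up.
nurse_only = [212, 198, 240, 225, 205, 231, 218, 227]
app_triage = [184, 176, 201, 190, 169, 195, 182, 188]

# Normality of each group (Shapiro-Wilk): p > 0.05 -> no evidence against normality.
_, p_norm_nurse = stats.shapiro(nurse_only)
_, p_norm_app = stats.shapiro(app_triage)

# Equality of variances (Levene's test): p > 0.05 -> variances look equal.
_, p_var = stats.levene(nurse_only, app_triage)

# If both assumptions hold, run the independent t-test on mean LOS.
t_stat, p_value = stats.ttest_ind(nurse_only, app_triage)
print(p_value)
```

If either assumption check fails, a nonparametric alternative (discussed below) would be the safer choice.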
Nonparametric tests are frequently used in circumstances when the distribution is not normal, the distribution is unknown, or the sample size is insufficient to assume a normal distribution. Additionally, nonparametric tests should be utilized when there are extreme values or values that are obviously “out of range” (Knapp, 1998).
- Wilcoxon Rank Sum Test
The Wilcoxon rank sum test is a nonparametric test that can be used to see whether the distributions of data obtained from two different groups on the same dependent variable are systematically different. This test is frequently referred to as the nonparametric equivalent of the two-sample t-test because it does not assume a known distribution and does not estimate parameters. It examines whether values in one of two independent samples tend to be higher than in the other, without specifying a direction in advance.
Although the test is nonparametric and makes no assumptions about the distribution of scores, it still carries assumptions of its own: the sample is drawn randomly from the population, there is independence within the samples and mutual independence between them, and the variable is measured on at least an ordinal scale (du Prel et al., 2010).
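A minimal sketch of the test in SciPy, using two invented independent samples:

```python
from scipy import stats

# Two independent samples (values invented for illustration).
sample_1 = [3, 5, 4, 6, 2, 5, 4]
sample_2 = [7, 8, 6, 9, 7, 8, 6]

# Wilcoxon rank-sum test: compares the rank distributions of the
# two samples rather than their means.
stat, p = stats.ranksums(sample_1, sample_2)
print(p)
```

A small p-value indicates that values in one sample are systematically higher than in the other; note that SciPy also offers `mannwhitneyu`, an equivalent formulation of the same test.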
du Prel, J. B., Röhrig, B., Hommel, G., & Blettner, M. (2010). Choosing statistical tests: part 12 of a series on evaluation of scientific publications. Deutsches Arzteblatt international, 107(19), 343–348. https://doi.org/10.3238/arztebl.2010.0343
Knapp, T. (1998). Quantitative nursing research. Sage Publications.
SPSS tutorials: Independent sample t test. (2022, February 9). Kent State University. Retrieved March 1, 2022, from https://libguides.library.kent.edu/SPSS/IndependentTTest
Xu, M., Fralick, D., Zheng, J. Z., Wang, B., Tu, X. M., & Feng, C. (2017). The Differences and Similarities Between Two-Sample T-Test and Paired T-Test. Shanghai archives of psychiatry, 29(3), 184–188. https://doi.org/10.11919/j.issn.1002-0829.217070
A parametric test is a test in which the population distribution is well known and a set of parameters is assumed. The mean is always utilized as the measure of central tendency in a parametric test. Parametric tests are widespread because they make analysis straightforward; in statistics, generalizations about a population mean are made using parametric tests. Such a test is a hypothesis test that assumes a particular underlying distribution for the data (Mariappan, 2019).
For a parametric test to be valid, a researcher must ensure that various conditions are met so that the results are reliable. When those conditions hold, a parametric test has greater statistical power than a non-parametric test: if an effect is present in the data, a parametric test is more likely to detect it.
Parametric tests usually assume a normal distribution of values (a bell-shaped curve). For instance, height is normally distributed: if you graph many people's heights, the curve is bell-shaped, a distribution experts also call Gaussian. It is appropriate to use a parametric test when the sample size is large; with a large sample, parametric tests can tolerate moderately non-normal data sets. A parametric test is also used when the mean accurately represents the center of the distribution. Parametric tests are valuable because they support effective and efficient decisions.
An example of a parametric test is the standard t-test, used to determine whether there is a significant difference between the means of two distinct groups (Chavan & Kulkarni, 2017). The t-test then provides the basis for comparison between the test group and the control group. A parametric test may be appropriate when a researcher wants to compare, for example, the amount of money people spend seeking quality health care services. A parametric test leads to a rejection of H0 only when the evidence against it is strong enough, that is, when the p-value falls below the chosen significance level.
Unlike a parametric test, a non-parametric test does not depend on any particular distribution. The median is essential in non-parametric tests, as it is the basis of measurement; the measure of central tendency is therefore the median. Additionally, no distributional assumptions are made in a non-parametric test: it is independent of any underlying distribution, which is why it is called a distribution-free test. Nominal and ordinal measurement levels are used in finding the test values (Carrasco et al., 2017).
It is appropriate to perform a non-parametric test when the variables are non-metric, hence the name. In terms of probabilistic distribution, a non-parametric test allows an arbitrary distribution, and knowledge of the population is not required. Non-parametric tests have an edge over parametric tests in that they can analyze ranked and ordinal data, while parametric tests require continuous data.
A non-parametric test is usually used in cases where the distribution is not normal or is skewed. These tests are considered flexible because they handle variables and attributes equally well. It is appropriate to use a non-parametric test when the sample size is small and when the data do not follow a normal distribution pattern. Generally, when a parametric test is not appropriate, a non-parametric test is used. Non-parametric tests are also used when a researcher wants to rank measurements or when the distribution is clearly irregular (Lenart & Pippien, 2017).
When drawing a graphical representation of data suited to a non-parametric test, the curve appears skewed. An example of a non-parametric test is the Mann-Whitney U-test, which is related to the standard t-test in that it looks at differences between groups but is used with ordinal data. Psychologists use the Mann-Whitney U-test frequently, especially when comparing how patients' attitudes relate to behavioral patterns.
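A brief sketch of the Mann-Whitney U-test on ordinal data; the Likert-style attitude scores (1-5) below are invented for illustration.

```python
from scipy import stats

# Invented Likert-style attitude scores (1 = negative ... 5 = positive)
# for two independent patient groups.
group_x = [2, 3, 1, 2, 3, 2, 1, 2]
group_y = [4, 5, 3, 4, 4, 5, 3, 4]

# Mann-Whitney U compares the rank ordering of the two groups,
# so it is appropriate for ordinal scores like these.
u_stat, p = stats.mannwhitneyu(group_x, group_y, alternative="two-sided")
print(p)
```

Because the test works on ranks, it is unaffected by the fact that the distance between adjacent Likert categories is not necessarily equal.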
Assumptions to be met
Parametric and non-parametric tests have one thing in common: both have assumptions that should be met. An investigator or researcher must verify various assumptions before successfully conducting the test (Mircioiu & Atkinson, 2017). In a parametric test, the assumptions are as follows:
- The researcher properly understands all the variables.
- The variables of interest are measured on an interval scale.
The assumptions in a non-parametric test are as follows:
- Nominal and ordinal measurement levels are the primary strategies for handling the variables.
- Non-parametric tests can be applied to variables regardless of their distribution.
- No particular measurement scale is required for the variables.
Brunner, E., Konietschke, F., Pauly, M., & Puri, M. L. (2017). Rank-based procedures in factorial designs: Hypotheses about non-parametric treatment effects. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 79(5), 1463-1485. https://rss.onlinelibrary.wiley.com/doi/full/10.1111/rssb.12222
Carrasco, J., García, S., Del Mar Rueda, M., & Herrera, F. (2017). Repost: An R package covering non-parametric and Bayesian statistical tests. In International Conference on Hybrid Artificial Intelligence Systems (pp. 281-292). Springer, Cham. https://link.springer.com/chapter/10.1007/978-3-319-59650-1_24
Chavan, P., & Kulkarni, R. V. (2017). Role of the non-parametric test in management and social science research. Quest International Multidisciplinary Research Journal, 6(9), 38-52. https://www.researchgate.net/profile/Pravin-Chavan-2/publication/324770300_role_of_non-parametric_test_in_management_social_science_research/links/5ae15dc9aca272fdaf8d95d0/role-of-non-parametric-test-in-management-social-science-research.pdf