The purpose of this report is to provide an overview of the procedures for checking normality in statistical analysis using SPSS. It is important to ascertain whether the data show a serious deviation from normality (8). Although true normality is considered to be a myth (8), we can look for normality visually by using normal plots (2, 3) or by significance tests, that is, by comparing the sample distribution to a normal one (2, 3). According to the central limit theorem, (a) if the sample data are approximately normal, then the sampling distribution will also be normal; (b) in large samples (> 30 or 40), the sampling distribution tends to be normal regardless of the shape of the data (2, 8); and (c) means of random samples from any distribution will themselves have a normal distribution (3). If we have samples consisting of hundreds of observations, we can ignore the distribution of the data (3). With large enough sample sizes (> 30 or 40), the violation of the normality assumption should not cause major problems (4); this implies that we can use parametric procedures even when the data are not normally distributed (8). Nevertheless, normality and the other assumptions should be taken seriously, for when these assumptions do not hold, it is impossible to draw accurate and reliable conclusions about reality (2, 7).
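As an illustration, the sketch below shows how such a check might look in Python (rather than SPSS, which this report uses): a Q-Q plot as the visual check and the Shapiro-Wilk test as the significance test, run on simulated data. The sample size, mean, and standard deviation are arbitrary assumptions chosen only for the example.

```python
# Minimal sketch: a visual normality check (Q-Q plot) and a significance test
# (Shapiro-Wilk) on simulated data; all sample parameters are assumed values.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
sample = rng.normal(loc=100, scale=15, size=50)  # hypothetical measurements

# Visual check: quantiles of the sample against a theoretical normal distribution
stats.probplot(sample, dist="norm", plot=plt)
plt.title("Normal Q-Q plot")
plt.show()

# Significance test: compares the sample distribution to a normal one
w_stat, p_value = stats.shapiro(sample)
print(f"Shapiro-Wilk W = {w_stat:.3f}, p = {p_value:.3f}")
# p > 0.05: no evidence against normality; p < 0.05: significant deviation.
```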
The assumption of normality is especially critical when constructing reference intervals for variables (6).
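For instance, a 95% reference interval is commonly computed as the mean plus or minus 1.96 standard deviations, a formula that is only valid when the variable is normally distributed; the short Python sketch below illustrates this with simulated, purely hypothetical measurements.

```python
# Minimal sketch (assumed values): a 95% reference interval as mean +/- 1.96 SD,
# which presupposes that the variable is normally distributed.
import numpy as np

rng = np.random.default_rng(0)
values = rng.normal(loc=5.0, scale=0.5, size=200)  # hypothetical analyte levels

mean, sd = values.mean(), values.std(ddof=1)
lower, upper = mean - 1.96 * sd, mean + 1.96 * sd
print(f"95% reference interval: {lower:.2f} to {upper:.2f}")
# With skewed data these limits no longer bracket the central 95% of values,
# which is why the normality assumption is critical here.
```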
Many statistical procedures, including correlation, regression, t tests, and analysis of variance (namely, parametric tests), are based on the assumption that the data follow a normal, or Gaussian, distribution (after Carl Friedrich Gauss, 1777–1855); that is, it is assumed that the populations from which the samples are taken are normally distributed (2-5). Statistical errors are common in the scientific literature, and about 50% of published articles have at least one error (1).
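By way of example, the sketch below runs one such parametric procedure, an independent-samples t test, in Python rather than SPSS, checking the normality assumption in each group first; the group sizes, means, and standard deviations are hypothetical values chosen for illustration.

```python
# Minimal sketch: an independent-samples t test, a parametric procedure that
# assumes normally distributed populations; data and group sizes are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(loc=50, scale=10, size=30)
group_b = rng.normal(loc=55, scale=10, size=30)

# Check the normality assumption in each group before running the parametric test
for name, group in [("A", group_a), ("B", group_b)]:
    print(f"Group {name}: Shapiro-Wilk p = {stats.shapiro(group)[1]:.3f}")

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"Independent-samples t test: t = {t_stat:.2f}, p = {p_value:.3f}")
```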