Give a correlation in the top (r1) box and a sample size in the third (N1) box. The significance gives you the probability of observing a correlation this large if the true correlation were zero, i.e. if there were no correlation at all. A number of confidence intervals around the correlation are produced.
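Such a test is commonly based on Fisher's z-transformation of the correlation. A minimal sketch in Python, under the assumption that the page uses the Fisher method (it could instead use an exact t-test):

```python
import math
from statistics import NormalDist

def r_significance(r, n, conf=0.95):
    """Two-sided test of r against zero plus a confidence interval,
    via Fisher's z-transformation (large-sample approximation)."""
    nd = NormalDist()
    z = math.atanh(r)                      # Fisher z-transform of r
    se = 1.0 / math.sqrt(n - 3)            # standard error of z
    p = 2 * (1 - nd.cdf(abs(z) / se))      # two-sided p-value vs. rho = 0
    crit = nd.inv_cdf(1 - (1 - conf) / 2)  # e.g. 1.96 for a 95% interval
    lo = math.tanh(z - crit * se)          # back-transform the interval
    hi = math.tanh(z + crit * se)
    return p, (lo, hi)

p, ci = r_significance(0.30, 100)          # r = 0.30 with 100 cases
```

For r = 0.30 and N = 100 this gives p of roughly 0.002 with a 95% interval of about (0.11, 0.47); note the interval is not symmetric around r after back-transformation.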

Difference between two independent correlations, where one correlation is known with certainty (a population or historical correlation) and the other is a correlation found in a sample. Give the population correlation in the top (r1) box and the sample correlation in the second (r2) box. Give the sample size in the third (N1) box. A number of confidence intervals around the difference between the two correlations and a p-value are produced. This method is used to answer questions such as: "does my correlation differ from a correlation that is widely mentioned in the literature?"
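Because the population correlation carries no sampling error, only the sample contributes to the standard error. A sketch of this one-sample Fisher z-test (an assumption about the page's method, not taken from it):

```python
import math
from statistics import NormalDist

def r_vs_population(r_pop, r_sample, n):
    """Test a sample correlation against a correlation known with
    certainty, e.g. a population value from the literature."""
    diff = math.atanh(r_sample) - math.atanh(r_pop)
    z = diff * math.sqrt(n - 3)            # only the sample adds error
    p = 2 * (1 - NormalDist().cdf(abs(z))) # two-sided p-value
    return z, p

z, p = r_vs_population(0.50, 0.30, 100)    # literature says 0.50, we found 0.30
```

With a sample r of 0.30 (N = 100) against a known value of 0.50, the difference is significant at roughly p = 0.02.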

Difference between two independent correlations from two different samples. Give the correlation of the first sample in the top (r1) box and the correlation of the second sample in the second (r2) box. Give the sample size of the first sample in the third (N1) box and the sample size of the second sample in the fourth (N2) box. A number of confidence intervals around the difference between the two correlations and a p-value are produced. This method is used to answer questions such as: "is the effect of age on neuroticism stronger for males than for females?"
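When both correlations come from samples, both contribute sampling error. A sketch of the standard two-independent-samples Fisher z-test (assumed to match the page's method):

```python
import math
from statistics import NormalDist

def r_independent(r1, n1, r2, n2):
    """Test the difference between two correlations from two
    independent samples via Fisher's z-transformation."""
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # combined standard error
    z = (math.atanh(r1) - math.atanh(r2)) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))       # two-sided p-value
    return z, p

z, p = r_independent(0.50, 100, 0.30, 120)       # e.g. males vs. females
```

Note that a difference which looks large (0.50 vs. 0.30 here) can still fail to reach significance at modest sample sizes.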

Difference between two dependent correlations from a single sample. The correlations are overlapping, i.e. they share one variable in common. This procedure allows you to see whether two correlations in a triangle are statistically significantly different. You have to input three correlations in the r1, r2 and r3 boxes respectively. These three correlations have to form a triangle: they must be rxy, rzy and rxz. Give the sample size in the bottom (N2) box. A number of confidence intervals around the difference between the two correlations and p-values are produced. This method is used to answer questions such as: "is the effect of age stronger on neuroticism than on anxiety?". Steiger's Z is preferred over Hotelling's t-test.
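One common formulation of Steiger's (1980) Z for this comparison is sketched below; this is an assumption about the page's exact implementation, which is not shown here:

```python
import math
from statistics import NormalDist

def steiger_z(rxy, rzy, rxz, n):
    """Steiger's (1980) Z comparing two overlapping dependent
    correlations rxy and rzy that share variable y, given rxz."""
    z1, z2 = math.atanh(rxy), math.atanh(rzy)
    rm2 = ((rxy + rzy) / 2) ** 2               # squared mean correlation
    psi = rxz * (1 - 2 * rm2) - 0.5 * rm2 * (1 - 2 * rm2 - rxz ** 2)
    c = psi / (1 - rm2) ** 2                   # covariance of z1 and z2
    Z = (z1 - z2) * math.sqrt(n - 3) / math.sqrt(2 - 2 * c)
    p = 2 * (1 - NormalDist().cdf(abs(Z)))     # two-sided p-value
    return Z, p

Z, p = steiger_z(0.50, 0.30, 0.20, 103)        # triangle rxy, rzy, rxz; N = 103
```

The correction term c accounts for the fact that the two correlations, computed on the same cases, are themselves correlated.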

Difference between two dependent correlations from a single sample. The two correlations are non-overlapping, i.e. they do not share a variable. You have to input six correlations in a triangle. Give the sample size in the separate box. This method is used to answer questions such as: "is the effect of overweight on heart disease stronger than the effect of smoking on cancer?". Raghunathan, Rosenthal, & Rubin's test is preferred over Pearson & Filon's test. The input is provided on a separate web page.

Give a correlation in the top (r1) box and a sample size in the fourth (N2) box. The power gives you the probability of detecting that the correlation is statistically significantly different from zero, i.e. from there being no correlation at all. The power for a number of different significance levels is given.
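A common large-sample approximation of this power calculation uses Fisher's z; a sketch under that assumption (the page's exact method may differ slightly):

```python
import math
from statistics import NormalDist

def r_power(r, n, alpha=0.05):
    """Approximate power to detect correlation r with n cases,
    two-sided test at level alpha, via Fisher's z-transformation."""
    nd = NormalDist()
    delta = abs(math.atanh(r)) * math.sqrt(n - 3)  # noncentrality
    z_crit = nd.inv_cdf(1 - alpha / 2)             # e.g. 1.96
    return nd.cdf(delta - z_crit)  # neglects the tiny opposite-tail term

power = r_power(0.30, 85)          # r = 0.30, 85 cases, alpha = 0.05
```

For r = 0.30 with 85 cases this gives power of about 0.80, in line with Cohen's (1988) well-known tables.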

Only for the case of a difference between two correlations, where one correlation is a sample correlation and the other is a population or historical correlation known with certainty. Give a correlation in the top (r1) box, another one in the second (r2) box, and a sample size in the fourth (N2) box. The power for a number of different significance levels is given.
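The same Fisher z approximation extends to this case; only the distance between the two transformed correlations changes. A sketch, again assuming the Fisher method:

```python
import math
from statistics import NormalDist

def diff_power(r_pop, r_sample, n, alpha=0.05):
    """Approximate power to detect that a sample correlation differs
    from a population value known with certainty (Fisher z)."""
    nd = NormalDist()
    delta = abs(math.atanh(r_sample) - math.atanh(r_pop)) * math.sqrt(n - 3)
    return nd.cdf(delta - nd.inv_cdf(1 - alpha / 2))

power = diff_power(0.50, 0.30, 100)   # known 0.50 vs. sample 0.30, N = 100
```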

Only for the case of a difference between two correlations, where the correlations are from two different samples. Give a correlation in the top (r1) box, another one in the second (r2) box, and the sample size of one of the samples in the fourth (N2) box. The other sample is considered to be of the same size. Unfortunately, calculations for two different sample sizes are not yet possible. The power for a number of different significance levels is given. To do this calculation you will have to use the 'Alt' button.

Give a single correlation in the top box (r1). The sample size gives you the number of cases required to discover, with a certain likelihood (power), that the correlation is statistically significantly different from zero, i.e. from there being no correlation at all. The sample size for a number of different significance and power levels is given.
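Inverting the Fisher z power approximation gives a simple sample-size formula; a sketch under that assumption:

```python
import math
from statistics import NormalDist

def r_sample_size(r, alpha=0.05, power=0.80):
    """Cases needed to detect correlation r, two-sided test,
    via the Fisher z approximation: n = ((za + zb) / atanh(r))^2 + 3."""
    nd = NormalDist()
    za = nd.inv_cdf(1 - alpha / 2)   # critical value for alpha
    zb = nd.inv_cdf(power)           # value for the desired power
    return math.ceil(((za + zb) / math.atanh(r)) ** 2 + 3)

n = r_sample_size(0.30)              # alpha = 0.05, power = 0.80
```

For r = 0.30 at alpha = 0.05 and power 0.80 this gives 85 cases, matching the power example above.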

Only for the case of a difference between two correlations, where one correlation is a sample correlation and the other is a population or historical correlation. Give a correlation in the top (r1) box and another one in the second (r2) box. The sample size gives you the number of cases required to discover, with a certain likelihood (power), that the two correlations are statistically significantly different. The sample size for a number of different significance and power levels is given. This sample size procedure is another version of the one above, but slightly less precise.

Only for the case of a difference between two correlations, where both correlations are considered to be sample correlations. Give a correlation in the top (r1) box and another one in the second (r2) box. The sample size gives you the number of cases required to discover, with a certain likelihood (power), that the two correlations are statistically significantly different. The size is given for one sample; the other sample is considered to have the same size. Unfortunately, calculations for two different sample sizes are not yet possible. The sample size for a number of different significance and power levels is given. To do this calculation you will have to use the 'Alt' button.
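With two equal-sized samples the variance of the difference doubles, so the required size per sample is twice that of the one-sample case. A sketch, assuming the page uses the Fisher z approximation:

```python
import math
from statistics import NormalDist

def diff_sample_size(r1, r2, alpha=0.05, power=0.80):
    """Cases needed *per sample* to detect a difference between two
    independent correlations, both samples assumed equal in size."""
    nd = NormalDist()
    za = nd.inv_cdf(1 - alpha / 2)
    zb = nd.inv_cdf(power)
    d = math.atanh(r1) - math.atanh(r2)    # effect size on the z scale
    return math.ceil(2 * ((za + zb) / d) ** 2 + 3)

n_per_sample = diff_sample_size(0.50, 0.30)
```

Detecting a 0.50 vs. 0.30 difference at alpha = 0.05 with power 0.80 needs roughly 277 cases in each sample, which illustrates why comparing correlations demands far larger samples than testing a single one.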

This procedure gives some statistics for the relationships between three correlations, following the text by Cohen and Cohen (1983). You should give three correlations which form a triangle, rxy, rzy and rxz, in the r1, r2 and r3 boxes respectively. Thus, the correlations between the dependent variable and the two independent variables go into the first two boxes; the correlation between the two independent variables goes into the third (r3) box. In the output field each correlation is given again together with its partial equivalent, keeping the third variable constant. An ANOVA table and F-test for the variance explained (R-square) are produced, showing the variance explained in Y by X (the square of the simple correlation you have given in box r1, i.e. ryx-squared) and the additional variance explained by Z after X (the square of the semipartial correlation between Z and Y controlling for X). You can give a number of cases in the fourth box if you want tests of statistical significance. In that case confidence intervals are given for the correlation between Y and X and Z taken together (ry.xz); this correlation is, of course, the square root of the R-square. To do this procedure you need to press the 'Alt' button.
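The core quantities follow directly from the standard partial-correlation and two-predictor R-square formulas in Cohen and Cohen (1983); a sketch (the page's output contains more than this):

```python
import math

def three_var_stats(ryx, ryz, rxz):
    """Partial correlations and R-square for Y regressed on X and Z.
    ryx, ryz: correlations of dependent Y with predictors X and Z;
    rxz: correlation between the two predictors."""
    def partial(rab, rac, rbc):
        # r_ab.c = (r_ab - r_ac * r_bc) / sqrt((1 - r_ac^2)(1 - r_bc^2))
        return (rab - rac * rbc) / math.sqrt((1 - rac ** 2) * (1 - rbc ** 2))
    ryx_z = partial(ryx, ryz, rxz)   # Y with X, holding Z constant
    ryz_x = partial(ryz, ryx, rxz)   # Y with Z, holding X constant
    # R-square of Y on X and Z together
    r2 = (ryx ** 2 + ryz ** 2 - 2 * ryx * ryz * rxz) / (1 - rxz ** 2)
    return ryx_z, ryz_x, r2

ryx_z, ryz_x, r2 = three_var_stats(0.50, 0.40, 0.30)
```

Because the two predictors overlap (rxz = 0.30 here), R-square (about 0.32) is smaller than the sum of the two squared simple correlations.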

Blalock HM. **Social Statistics.** New York: McGraw-Hill, 1960.

Cohen J. **Statistical Analysis for the Behavioral Sciences.** New Jersey: Lawrence Erlbaum Associates, 1969 & 1977.

Cohen J. **Statistical Power Analysis for the Behavioral Sciences.** New Jersey: Lawrence Erlbaum Associates, 1988.

Cohen J, Cohen P. **Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences.** Hillsdale, NJ: Lawrence Erlbaum Associates, 1983.

Machin D, Campbell M, Fayers P, Pinol A. **Sample Size Tables for Clinical Studies, 2nd Edition.** London, Edinburgh, Malden and Carlton: Blackwell Science, 1997.