Z-test, give the z-value in the parameter box. Leave both degrees of freedom boxes zero.

T-test, give the t-value in the parameter box and the degrees of freedom for the t-value in the first degrees of freedom box. Take care to keep the second df box at zero ('0').

F-test, give the f-value in the parameter box. Give the degrees of freedom for the numerator in the first degrees of freedom box and the degrees of freedom for the denominator in the second df box.

Correlation, give the correlation in the parameter box. Give the degrees of freedom in the second df box.

Chi-square, give the Chi-square in the parameter box. Give the degrees of freedom in the second degrees of freedom box.

Degrees of freedom is mostly the number of cases minus 1. A major exception is the Chi-square, where the number of rows minus 1, columns minus 1, layers minus 1, etc. is taken (thus, df(Chi2)=(r-1)*(c-1)*(l-1)). Also, if the t-value is the result of a t-test for a difference between two independent groups it works differently; please read the discussion on the t-test help page.
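The Chi-square degrees of freedom rule above can be sketched in a few lines of Python; the function name is just for illustration:

```python
# Degrees of freedom for a Chi-square table, as described above:
# df = (rows - 1) * (columns - 1) * (layers - 1), with the layers
# factor only included when there is more than one layer.

def chi_square_df(rows, columns, layers=1):
    """df for an r x c (x l) contingency table."""
    df = (rows - 1) * (columns - 1)
    if layers > 1:
        df *= (layers - 1)
    return df

print(chi_square_df(3, 4))      # (3-1)*(4-1) = 6
print(chi_square_df(3, 4, 2))   # (3-1)*(4-1)*(2-1) = 6
```

So a simple 2x2 table has one degree of freedom, which is why that case is often read from the first row of a Chi-square table.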

Significance is designed to help people who calculate statistical procedures by hand, or to be used when your statistical program gives you parameter values but not the statistical significance of the parameter. Also, comparing SISA with standard statistical packages showed that standard packages give less precise estimates of significance levels.

This program replaces the tables you often find at the back of statistics books. Here you can look up the p-value on the basis of a parameter value. The procedure reverse significance allows you to look up the parameter value on the basis of the p-value. The procedures here are only relevant for continuous distributions; for discrete distributions use the relevant discrete SISA procedure.

It all works simply: fill in the parameter and the degrees of freedom or the number of cases, push the appropriate button, and you get the statistical significance value.

The F-test is used to calculate the one-sided probability that two variances are different. Divide the one variance by the other and fill the result in the F-value box. Fill the number of degrees of freedom of the variance you used in the numerator in the degrees of freedom box, and the number of degrees of freedom for the denominator variance in the number of cases box. You can also test whether two standard deviations are different: square them before you do the division. If you do not know the numbers of degrees of freedom, take the sample size for each standard deviation minus one; take the group sizes if you are interested in making comparisons within a single sample. For example, if you are interested in the question whether females are more diverse in their responses to a particular question, and you have 50 females and 75 males, with a standard deviation of 7 for females and 5 for males, then the input for the F-test is: F: 1.96 [(7*7)/(5*5)]; df numerator: 49; df denominator: 74. 'Click' the f-value button: p=0.00432, males and females are significantly different in the "richness" of their response.
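The worked example above can be cross-checked in Python. This is just an illustration using SciPy's F-distribution, not the CACM 322 routine the page itself uses:

```python
from scipy.stats import f

# Example from the text: sd(females) = 7 with n = 50, sd(males) = 5 with n = 75.
F = (7 * 7) / (5 * 5)            # ratio of variances = 1.96
df_num, df_den = 50 - 1, 75 - 1  # sample sizes minus one

# One-sided upper-tail probability of the F-distribution.
p = f.sf(F, df_num, df_den)
print(round(F, 2), round(p, 5))  # p should be close to the page's 0.00432
```

A small difference in the last digits would only reflect the different algorithms used, not a different answer.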

Everything should run with at least six-digit precision (within a "reasonable" range).

The f-distribution algorithm is based on Egon Dorrer's CACM 322 algorithm.

The Chi-square distribution algorithm is based on Poole et al.; the algorithm is also mentioned in the 'Epi-Info' manual (1994).

The procedure to approximate the significance of the t-value is based on algorithm '03' from Applied Statistics (1968). After slight additional improvements the results are very satisfactory. The procedure is unfortunately not very efficient, as it contains a loop. We put the cut-off point for the loop at df=3000; change this to a higher number if you want more precise results, although you will find that the improvement is small.
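The output of a loop-based routine like this can be checked against a known table value; a minimal sketch using SciPy's t-distribution (an illustration, not the Applied Statistics algorithm itself):

```python
from scipy.stats import t

# Classic table value: with 10 degrees of freedom, t = 2.228 corresponds
# to a two-sided significance of p = 0.05.
p_two_sided = 2 * t.sf(2.228, 10)
print(round(p_two_sided, 3))  # close to 0.05
```

Any implementation of the t-distribution should reproduce this to several digits.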

The z-value calculation is based on algorithm 209 from the CACM by D. Ibbetson. The procedure is not very good for estimating large z-values; in that case the t-value for 3000 degrees of freedom is given.
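For reference, the one-sided tail probability of the standard normal can also be computed directly from the error function in Python's standard library; this is an alternative to the series approximation, not Ibbetson's algorithm 209:

```python
import math

def normal_tail(z):
    """One-sided upper-tail probability P(Z > z) of the standard normal."""
    return 0.5 * math.erfc(z / math.sqrt(2))

print(round(normal_tail(1.96), 4))  # close to 0.025, the familiar table value
```

Because erfc is evaluated accurately even far into the tail, this form does not degrade for large z-values the way a truncated series does.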

The significance of the correlation coefficient is calculated using a single-sided t-test, following Cohen.
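The usual conversion from a correlation r to a t-value is t = r*sqrt(df)/sqrt(1-r^2) with df = n-2. A sketch of that conversion in Python; the page itself follows Cohen, so treat this as an illustration of the standard transformation rather than the exact routine used:

```python
import math
from scipy.stats import t as t_dist

def correlation_significance(r, n):
    """One-sided p-value of a correlation r from a sample of n cases,
    via the standard t-transformation with df = n - 2."""
    df = n - 2
    t_value = r * math.sqrt(df) / math.sqrt(1 - r * r)
    return t_dist.sf(t_value, df)

# Hypothetical example: r = 0.5 observed in 30 cases.
p = correlation_significance(0.5, 30)
print(round(p, 4))
```

Double the result if you want the two-sided significance.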