Statistical tools for experiments

This article describes the statistical tools available for experimental data analysis at TeselaGen.

Written by Andrés Ramirez
Updated over 3 months ago

In research, it is often necessary to test experimental data. Once you have mapped your experimental data (DataGrids) into Assays, you can run statistical tests over your Assays’ values.

Running statistical tests

Within the Assay detailed view, you can access the Statistical Analyses creation menu by following these steps:

1. Open the detailed view of your Assay.

2. Click on the "Statistical Analyses" tab.

3. Click on "Create Statistical Analysis" to start a new analysis.

The creation menu looks like this:

In the following section, we describe the available methods.

Statistical Tests

TeselaGen currently implements the following tests:

One Way ANOVA

Description:

The One Way ANOVA test compares the means of two or more groups (also known as treatments or categories) for one factor (feature).

Factor Parameter:

Name of the column that contains the factor with group names.

Value Parameter:

Name of the column that contains the values. Defaults to “value”. This column must be numeric; otherwise, the test will fail.

Null Hypothesis:

In statistical testing, the null hypothesis is a general statement or default position that there is no relationship between two measured phenomena. For the One Way ANOVA, the null hypothesis is: The means of all groups are equal.

Reading the p-value:

The p-value is a measure of the strength of the evidence against the null hypothesis. It ranges from 0 to 1, with lower values indicating stronger evidence against the null hypothesis.

p-value < 0.05: Reject the null hypothesis, indicating that at least one group mean differs significantly from the others.

p-value ≥ 0.05: Fail to reject the null hypothesis, indicating that there is no significant difference between group means.
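
To make the mechanics concrete, here is a minimal sketch of the same kind of computation outside the platform, using scipy's f_oneway on a pandas DataFrame. The "strain" factor column and "value" column are hypothetical examples, and this illustrates the test itself rather than TeselaGen's internal implementation.

```python
import pandas as pd
from scipy import stats

# Hypothetical Assay data: one factor column ("strain") and one numeric
# value column ("value").
df = pd.DataFrame({
    "strain": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "value":  [1.2, 1.4, 1.1, 2.3, 2.1, 2.4, 1.3, 1.2, 1.5],
})

# Split the numeric values into one array per group of the factor.
groups = [g["value"].to_numpy() for _, g in df.groupby("strain")]

# f_oneway returns the F statistic and the p-value for the null
# hypothesis that all group means are equal.
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis: at least one group mean differs.")
else:
    print("Fail to reject the null hypothesis.")
```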

Two Way ANOVA

Description:

The Two Way ANOVA test compares the means of groups for two factors (features), allowing you to understand not only the main effects of each factor but also if there is an interaction effect between the two factors on the dependent variable. An interaction effect occurs when the effect of one factor depends on the level of the other factor.

Factor Parameters:

Names of the columns that contain the factors with group names.

Value Parameter:

Name of the column that contains the data to be tested. Defaults to “value”. This column must be numeric; otherwise, the test will fail.

Null Hypotheses:

1. The means of all groups are equal for Factor 1.

2. The means of all groups are equal for Factor 2.

3. There is no interaction effect between Factor 1 and Factor 2 on the dependent variable.

Reading the p-value:

p-value < 0.05 for Factor 1: Reject the null hypothesis, indicating that at least one group mean differs significantly for Factor 1.

p-value < 0.05 for Factor 2: Reject the null hypothesis, indicating that at least one group mean differs significantly for Factor 2.

p-value < 0.05 for Interaction: Reject the null hypothesis, indicating a significant interaction effect between the two factors on the dependent variable.

p-value ≥ 0.05 for any hypothesis: Fail to reject the null hypothesis, indicating no significant difference or interaction effect.
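
As an illustration only, a Two Way ANOVA with an interaction term can be sketched with statsmodels. The "strain" and "media" factor columns and the "value" column below are hypothetical, and this is not necessarily how TeselaGen computes the test internally.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical Assay data: two factor columns and one numeric value column.
df = pd.DataFrame({
    "strain": ["A", "A", "B", "B", "A", "A", "B", "B"],
    "media":  ["M1", "M2", "M1", "M2", "M1", "M2", "M1", "M2"],
    "value":  [1.2, 1.8, 2.3, 2.9, 1.3, 1.7, 2.2, 3.1],
})

# C(...) treats a column as categorical; "*" expands to both main effects
# plus their interaction term.
model = ols("value ~ C(strain) * C(media)", data=df).fit()

# The ANOVA table contains one p-value per factor and one for the interaction.
print(sm.stats.anova_lm(model, typ=2))
```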

Shapiro-Wilk Normality Test

Description:

The Shapiro-Wilk Normality test evaluates the null hypothesis that the data were drawn from a normal distribution.

Value Parameter:

Name of the column that contains the values. This column must be numeric; otherwise, the test will fail.

Null Hypothesis:

The data follows a normal distribution.

Reading the p-value:

p-value < 0.05: Reject the null hypothesis, indicating that the data does not follow a normal distribution.

p-value ≥ 0.05: Fail to reject the null hypothesis, indicating that the data is consistent with a normal distribution.
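
A minimal sketch of this test with scipy is shown below; the "value" column name is a hypothetical example, and the snippet only illustrates how the statistic and p-value are obtained.

```python
import pandas as pd
from scipy import stats

# Hypothetical numeric values from an Assay's "value" column.
values = pd.Series([1.2, 1.4, 1.1, 1.3, 1.6, 1.5, 1.2, 1.4])

# shapiro returns the W statistic and the p-value for the null hypothesis
# that the sample was drawn from a normal distribution.
w_stat, p_value = stats.shapiro(values)
print(f"W = {w_stat:.3f}, p = {p_value:.4f}")
```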

One Way Levene Variance Homogeneity Test

Description:

The Levene test evaluates the null hypothesis that all input samples come from populations with equal variances.

Factor Parameter:

Name of the column that contains the factor with group names.

Center Parameter: “mean”

Recommended for symmetric, moderate-tailed distributions.

Center Parameter: “median”

Recommended for skewed (non-normal) distributions.

Center Parameter: “trimmed”

Recommended for heavy-tailed distributions.

Proportion Cut Parameter:

When center is “trimmed”, this gives the proportion of data points to cut from each end. Defaults to 0.05.

Value Parameter:

Name of the column that contains the values. This column must be numeric; otherwise, the test will fail.

Null Hypothesis:

The variances of all groups are equal.

Reading the p-value:

p-value < 0.05: Reject the null hypothesis, indicating that there are significant differences in variances between groups.

p-value ≥ 0.05: Fail to reject the null hypothesis, indicating that there is no significant difference in variances between groups.
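
The sketch below shows how the Center and Proportion Cut parameters map onto a Levene test computed with scipy. The column names are hypothetical and the snippet is illustrative only, not the platform's internal implementation.

```python
import pandas as pd
from scipy import stats

# Hypothetical Assay data: one factor column and one numeric value column.
df = pd.DataFrame({
    "strain": ["A"] * 5 + ["B"] * 5,
    "value":  [1.2, 1.4, 1.1, 1.3, 1.5, 2.3, 2.1, 3.0, 1.8, 2.6],
})

groups = [g["value"].to_numpy() for _, g in df.groupby("strain")]

# center ("mean", "median", or "trimmed") and proportiontocut correspond
# to the Center and Proportion Cut parameters described above.
stat, p_value = stats.levene(*groups, center="trimmed", proportiontocut=0.05)
print(f"statistic = {stat:.3f}, p = {p_value:.4f}")
```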

One Way Tukey HSD Pairwise Test

Description:

Tukey’s HSD (Honestly Significant Difference) test is used to identify which specific groups differ from each other. It is often used as a post-hoc test after an ANOVA. The Tukey test is more informative than ANOVA because it compares every pair of groups.

Factor Parameter:

Name of the column that contains the factor with group names.

Value Parameter:

Name of the column that contains the values. This column must be numeric; otherwise, the test will fail.

Significance Level (optional):

p-value threshold below which the null hypothesis of equal group means is rejected for a given pair. Defaults to 0.05.

Null Hypothesis:

The means of the groups are equal for each pair.

Reading the p-value:

p-value < 0.05: Reject the null hypothesis for that pair, indicating a significant difference in means.

p-value ≥ 0.05: Fail to reject the null hypothesis for that pair, indicating no significant difference in means.
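
For illustration, the pairwise comparisons can be sketched with statsmodels' pairwise_tukeyhsd. The column names below are hypothetical, and the alpha argument plays the role of the optional Significance Level parameter; this is not necessarily how TeselaGen runs the test internally.

```python
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical Assay data: one factor column and one numeric value column.
df = pd.DataFrame({
    "strain": ["A"] * 4 + ["B"] * 4 + ["C"] * 4,
    "value":  [1.2, 1.4, 1.1, 1.3, 2.3, 2.1, 2.4, 2.2, 1.3, 1.2, 1.5, 1.4],
})

# For every pair of groups, the summary reports the mean difference, the
# adjusted p-value, and whether equal means are rejected at the given alpha.
result = pairwise_tukeyhsd(endog=df["value"], groups=df["strain"], alpha=0.05)
print(result.summary())
```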
