An "Experiment" is a matrix of samples and probes arising from some original data source. The original data source can be FCS data, sequence data, or other sources. At each intersection of the matrix, a statistical
object is defined, containing a mean signal level, standard deviation, confidence interval, and other measures. Variables can be defined to identify groups of samples.
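The structure above can be sketched as a minimal data model. This is an illustrative sketch only, not the actual implementation; the class and field names (`CellStats`, `Experiment`, `variables`) are hypothetical, and the confidence interval uses a simple normal approximation as an example of a derived measure.

```python
import math
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class CellStats:
    """Statistics at one sample/probe intersection (hypothetical fields)."""
    mean: float   # mean signal level
    sd: float     # standard deviation
    n: int        # number of underlying measurements

    def ci95(self) -> Tuple[float, float]:
        # Normal-approximation 95% confidence interval for the mean
        half = 1.96 * self.sd / math.sqrt(self.n)
        return (self.mean - half, self.mean + half)

@dataclass
class Experiment:
    """Sample x probe matrix with a statistical object per cell."""
    samples: List[str]
    probes: List[str]
    # keyed by (sample, probe)
    cells: Dict[Tuple[str, str], CellStats] = field(default_factory=dict)
    # variables identifying groups of samples, e.g. {"treatment": {"S1": "drug"}}
    variables: Dict[str, Dict[str, str]] = field(default_factory=dict)
```

For example, `Experiment(["S1"], ["P1"]).cells[("S1", "P1")] = CellStats(100.0, 10.0, 25)` records one intersection, whose `ci95()` is then (96.08, 103.92).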
An experiment separates the abstraction of the data from the original source data. When an experiment is saved or exported, only the abstracted sample/probe data matrix is retained, not the source data (e.g. detailed particle decoding information for flow cytometry data, raw reads for sequencing).
An experiment is also responsible for processing data through the processing pipeline. It therefore distinguishes between its input data (unprocessed) and its output data (processed through the pipeline). The input data points typically come directly from an experimental source. The output data points are processed through the pipeline of negative sample subtraction, blank probe subtraction, and normalization or standardization.
Charts and tables show the output (processed) data of the experiment. If there is no processing (no negative or blank subtraction, no normalization, no standardization), the output will be the same as the input. The output is updated on the fly as controls change; for instance, as soon as negative wells are defined, the output will be background subtracted.
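The input-to-output behavior described above can be sketched as a single function. This is a simplified illustration, not the actual pipeline: the function name `process` and its parameters are hypothetical, subtraction is shown as a scalar background per stage, and normalization is shown as division by a reference probe's value. With no controls defined, the output equals the input.

```python
from typing import Dict, List, Optional

def process(values: Dict[str, float],
            negatives: Optional[List[float]] = None,
            blanks: Optional[List[float]] = None,
            normalize_to: Optional[str] = None) -> Dict[str, float]:
    """Sketch of the output pipeline: negative sample subtraction,
    blank probe subtraction, then normalization. Each stage is skipped
    when its controls are not defined."""
    out = dict(values)
    if negatives:
        # subtract mean signal of the negative samples
        bg = sum(negatives) / len(negatives)
        out = {k: v - bg for k, v in out.items()}
    if blanks:
        # subtract mean signal of the blank probes
        blank = sum(blanks) / len(blanks)
        out = {k: v - blank for k, v in out.items()}
    if normalize_to is not None:
        # normalize all values to a chosen reference probe
        ref = out[normalize_to]
        out = {k: v / ref for k, v in out.items()}
    return out
```

With no controls, `process({"P1": 10.0, "P2": 6.0})` returns the input unchanged; defining `negatives=[2.0]` background-subtracts to `{"P1": 8.0, "P2": 4.0}`, and adding `normalize_to="P1"` then yields `{"P1": 1.0, "P2": 0.5}`.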
An experiment can be a subset or a superset of a plate. For analyzing data from multiple plates together, the experiment is the appropriate organizational unit.
Data Processing Pipeline