Parse Experiment Data tool

How to turn your raw experimental data file into working data sets that you can interact with in TeselaGen

Written by Eduardo Abeliuk

The process of turning your raw data into organized, usable data starts with data parsing via the Parse Experiment Data tool.

This tool parses your experiment data file (*.csv or *.xlsx) and stores the content in Data Grid(s). The Data Parsers are customizable and are set up as Node-RED flows on the Integration Server. Once the data is correctly parsed and successfully stored in Data Grid(s), you can run the experiment data import to get it into a TEST Assay.
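
To make the parsing step concrete, here is a minimal sketch of how a simple *.csv file could be turned into a grid-like structure. The DataGrid interface and parseCsvToGrid function are illustrative assumptions for this article only, not TeselaGen's actual data model or API, and the example file contents are hypothetical.

```typescript
// A minimal sketch, assuming a simple comma-separated file with a header row.
// The DataGrid shape and function name are illustrative, not TeselaGen's API.

interface DataGrid {
  name: string;      // name shown for the grid after parsing
  headers: string[]; // column names taken from the first row
  rows: string[][];  // remaining rows, one array of cell values per row
}

function parseCsvToGrid(fileName: string, csvText: string): DataGrid {
  const [headerLine, ...dataLines] = csvText.trim().split(/\r?\n/);
  return {
    name: fileName,
    headers: headerLine.split(","),
    rows: dataLines.map((line) => line.split(",")),
  };
}

// Hypothetical two-column experiment file with two data rows.
const grid = parseCsvToGrid("growth_assay.csv", "Sample,OD600\nA1,0.42\nA2,0.37");
console.log(grid.headers);     // ["Sample", "OD600"]
console.log(grid.rows.length); // 2
```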

Accessing the tool:

You can access the data parsing tool by going to the BUILD module of the TeselaGen platform and selecting Tools from the main menu. Once you click on it, the Tool Library will open up.

Go to the search bar, type Parse Experiment Data, and launch the tool from there to initiate the process.

Uploading your file:

Once you click the Launch Tool button, it will open the Upload Data File screen.


You can either click the up-arrow icon to upload the experimental data file or drag the file onto the upload area.

After you've uploaded the file, click the Select a parser drop-down in the Select data parser file section. Once you've imported the spreadsheet, the file is parsed through a Node-RED flow.

The Node-RED flow* is part of TeselaGen's integrations, which allow users to create their own custom logic flows in the Node-RED interface.

Your file will be transferred to the flow created in Node-RED, which will parse it and generate Data Grid(s).


(go to the end of this document to see how you can access Node-RED)
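
As a rough illustration of what a custom parser flow does, the sketch below mimics the message-in / message-out style of a Node-RED function node: the uploaded file's text arrives as a message payload and leaves as grid data. The NodeMessage shape, the splitting convention, and the helper logic are assumptions made for this illustration; a real parser flow is configured in the Node-RED editor and may look quite different.

```typescript
// Conceptual sketch only: the message shape and the "blank line separates
// tables" convention are assumptions, not TeselaGen's actual flow logic.

interface NodeMessage {
  filename: string;
  payload: unknown; // incoming: raw file text; outgoing: parsed grid data
}

function parserFlow(msg: NodeMessage): NodeMessage {
  const text = String(msg.payload);

  // Assume blank lines separate independent tables in the upload, and turn
  // each table into its own grid.
  const blocks = text.trim().split(/\n\s*\n/);

  msg.payload = blocks.map((block, i) => {
    const [header, ...rows] = block.trim().split(/\r?\n/);
    return {
      name: `${msg.filename} - grid ${i + 1}`,
      headers: header.split(","),
      rows: rows.map((r) => r.split(",")),
    };
  });

  return msg; // a real function node would pass this message to the next node
}
```

Because the parsing logic lives in a Node-RED flow on the Integration Server rather than in the platform itself, it can be adjusted to match your own file layouts without any changes to TeselaGen.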

Once the grids are generated, your data file will be created, which you can access by clicking on its hyperlinked name, as indicated by 1 in the image below.

This will open up the Data section of the BUILD module, and you can find your parsed file there, identified by the file's name.
Read the complete Data Import Pipeline here.
