Base is a secure, scalable genomics data aggregation and knowledge-management suite that provides integrated data analysis, information management, and knowledge mining. Refer to the Base documentation for more details.
This tutorial provides an example of the basic operations used with Base, including how to create a table, load the table with data, and query the table.
An ICA project with access to Base
If you don't already have a project, please follow the instructions in the Project documentation to create a project.
File to import
A tab delimited gene expression file (sampleX.final.count.tsv). Example format:
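The exact columns depend on the pipeline that produced the file; the sketch below assumes a minimal two-column layout (feature ID and raw count, tab-separated, no header row), which matches the loader settings used later in this tutorial. The gene IDs and counts are illustrative only:

```python
import csv
import io

# Hypothetical contents of sampleX.final.count.tsv (assumed layout):
# one feature ID and one raw count per line, tab-separated, no header row.
sample = (
    "ENSG00000000003\t612\n"
    "ENSG00000000005\t0\n"
    "ENSG00000000419\t433\n"
)

# Parse the tab-delimited text into rows, then into a feature -> count map.
rows = list(csv.reader(io.StringIO(sample), delimiter="\t"))
counts = {feature_id: int(count) for feature_id, count in rows}
```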
Tables are components of databases that store data in a 2-dimensional format of columns and rows. Each row represents a new data record in the table; each column represents a field in the record. On ICA, you can use Base to create custom tables to fit your data. A schema definition defines the fields in a table. On ICA you can create a schema definition from scratch, or from a template. In this activity, you will create a table for RNAseq count data, by creating a schema definition from scratch.
Go to Projects > your_project > Base > Tables and enable Base by clicking the Enable button.
Select Add Table > New Table.
Create your table
To create your table from scratch, select Empty Table from the Create table from dropdown.
Name your table FeatureCounts
Uncheck the box next to Include reference, to exclude reference data from your table.
Check the box next to Edit as text. This will reveal a text box that can be used to create your schema.
Copy the schema text below and paste it into the text box to create your schema.
Click the Save button
Upload the sampleX.final.count.tsv file containing the final counts.
Select Data tab (1) from the left menu.
Click on the grey box (2) to choose the file to upload or drag and drop the sampleX.final.count.tsv into the grey box
Refresh the screen (3)
The uploaded file (4) will appear on the data page after successful upload.
Data can be loaded into tables manually or automatically. To load data automatically, you can set up a schedule. The schedule specifies which files’ data should be automatically loaded into a table when those files are uploaded to ICA or created by an analysis on ICA. Active schedules will check for new files every 24 hours.
In this exercise, you will create a schedule to automatically load RNA transcript counts from .final.count.tsv files into the table you created above.
Go to Projects > your_project > Base > Schedule and click the + Add New button.
Select the option to load the contents from files into a table.
Create your schedule.
Name your schedule LoadFeatureCounts
Choose Project as the source of data for your table.
To specify that data from .final.count.tsv files should be loaded into your table, enter .final.count.tsv in the Search for a part of a specific ‘Original Name’ or Tag text box.
Specify your table as the one to load data into, by selecting your table (FeatureCounts) from the dropdown under Target Base Table.
Under Write preference, select Append to table. New data will be appended to your table, rather than overwriting existing data in your table.
The .final.count.tsv files that will be loaded into your table are tab-delimited (TSV) and do not contain a header row. For the Data format, Delimiter, and Header rows to skip fields, use these values:
Data format: TSV
Delimiter: Tab
Header rows to skip: 0
Click the Save button
Highlight your schedule. Click the Run button to run your schedule now.
It will take a short time to prepare and load data into your table.
Check the status of your job on your Projects > your_project > Activity page.
Click the BASE JOBS tab to view the status of scheduled Base jobs.
Click BASE ACTIVITY to view Base activity.
Check the data in the table.
Go back to your Projects > your_project > Base > Tables page.
Double-click your table to view its details.
You will land on the SCHEMA DEFINITION page.
Click the PREVIEW tab to view the records that were loaded into your table.
Click the DATA tab, to view a list of the files whose data has been loaded into your table.
To request data or information from a Base table, you can run an SQL query. You can create and run new queries or saved queries.
In this activity, we will create and run a new SQL query to find out how many records (RNA transcripts) in your table have counts greater than 100.
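A query of this shape would do it. In the sketch below, the column name count is an assumption about the FeatureCounts schema, and an in-memory sqlite3 database is used purely as a local, runnable stand-in for Base's Snowflake backend:

```python
import sqlite3

# Assumed query: how many records have a transcript count greater than 100?
QUERY = 'SELECT COUNT(*) FROM FeatureCounts WHERE "count" > 100'

# In-memory sqlite3 database standing in for the Base (Snowflake) backend.
con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE FeatureCounts (feature_id TEXT, "count" INTEGER)')
con.executemany(
    "INSERT INTO FeatureCounts VALUES (?, ?)",
    [("ENSG00000000003", 612), ("ENSG00000000005", 0), ("ENSG00000000419", 433)],
)
(n_over_100,) = con.execute(QUERY).fetchone()
print(n_over_100)  # 2 of the 3 sample records exceed 100
```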
Go to your Projects > your_project > Base > Query page.
Paste the above query into the NEW QUERY text box
Click the Run Query button to run your query
View your query results.
Save your query for future use by clicking the Save Query button. You will be asked to "Name" the query before clicking on the "Create" button.
Find the table you want to export on the "Tables" page under BASE. Double-click the table to open its details page.
Click on the Export As File icon and complete the required fields
Name: Name of the exported file
Data Format: A table can be exported in CSV or JSON format. The exported files can be compressed using GZIP, BZ2, DEFLATE or RAW_DEFLATE.
CSV Format: In addition to Comma, the file can be Tab, Pipe or Custom character delimited.
JSON Format: Selecting JSON format exports the table as a text file containing one JSON object per table entry. This is standard Snowflake behavior.
Export to single/multiple files: This option allows the export of a table as a single (large) file or multiple (smaller) files. If "Export to multiple files" is selected, a user can provide "Maximum file size (in bytes)" for exported files. The default value is 16,000,000 bytes but can be increased to accommodate larger files. The maximum file size supported is 5 GB.
You can access the databases and tables within the Base module using Python from your local machine. Once retrieved as, for example, a pandas object, the data can be processed further. In this tutorial, we describe how to create a Python script that retrieves the data and visualizes it using the Dash framework. The script will contain the following parts:
Importing dependencies and variables.
Function to fetch the data from the Base table.
Creating and running the Dash app.
This part of the code imports the dependencies, which have to be installed on your machine (for example, with pip). Furthermore, it imports the variables API_KEY and PROJECT_ID from the file named config.
We will create a function called fetch_data to obtain the data from the Base table. It can be broken into several logically separate parts:
Retrieving, via the API, the token and other variables needed to access the Base table.
Establishing the connection using the token.
The SQL query itself. In this particular example, we are extracting values from two tables, Demo_Ingesting_Metrics and BB_PROJECT_PIPELINE_EXECUTIONS_DETAIL. The table Demo_Ingesting_Metrics contains various metrics from DRAGEN analyses (e.g., Q30_BASES, the number of bases with a quality score of at least 30) and metadata in the column ica, which needs to be flattened to access the value Execution_reference. Both tables are joined on this Execution_reference value.
Fetching the data using the connection and the SQL query.
Here is the corresponding snippet:
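One possible shape for this function is sketched below. The SQL text, the variant path ica:Execution_reference, and the column names are assumptions based on the description above, and the cursor is injected as an argument so the logic can be exercised without a live snowflake.connector connection:

```python
# Sketch of fetch_data; table, column, and variant-path names are assumptions.
SQL = """
SELECT m.Q30_BASES,
       m.ica:Execution_reference::string AS EXECUTION_REFERENCE,
       e.END_DATE
FROM Demo_Ingesting_Metrics m
JOIN BB_PROJECT_PIPELINE_EXECUTIONS_DETAIL e
  ON m.ica:Execution_reference::string = e.EXECUTION_REFERENCE
"""

def fetch_data(cursor):
    """Run the join query on an open cursor and return the rows as dicts.

    In the real script the cursor would come from snowflake.connector.connect(),
    authenticated with the OAuth token retrieved via the ICA API; the returned
    rows can then be wrapped in a pandas DataFrame for the Dash app.
    """
    cursor.execute(SQL)
    columns = [col[0] for col in cursor.description]
    return [dict(zip(columns, row)) for row in cursor.fetchall()]
```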
Once the data is fetched, it is visualized in an app. In this particular example, a scatter plot is presented with END_DATE on the x-axis and the user's selection from the dropdown on the y-axis.
Now we can create a single Python script called dashboard.py by concatenating the snippets, and run it. The dashboard will be accessible in the browser on your machine.
You can access the databases and tables within the Base module using the snowSQL command-line interface. This is useful for external collaborators who do not have access to ICA core functionalities. In this tutorial, we describe how to obtain the token and use it to access the Base module. This tutorial does not cover how to install and configure snowSQL.
Once the Base module has been enabled within a project, the following details are shown in Projects > your_project > Project Settings > Details.
After clicking the Create OAuth access token button, the pop-up authenticator is displayed.
After clicking the Generate snowSQL command button, the pop-up authenticator presents the snowSQL command.
Copy the snowSQL command and run it in the console to log in.
You can also get the OAuth access token via API by providing <PROJECT ID> and <YOUR KEY>.
API Call:
Response:
Template snowSQL:
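As a hedged Python sketch, the token request could be prepared like this. The endpoint path (base:connectionDetails) and the X-API-Key header are assumptions about the ICA REST API, and the request is only constructed here, not sent:

```python
import urllib.request

PROJECT_ID = "<PROJECT ID>"  # placeholder, as in the template above
API_KEY = "<YOUR KEY>"       # placeholder

# Assumed endpoint returning Base connection details, including the token.
url = (
    "https://ica.illumina.com/ica/rest/api/projects/"
    f"{PROJECT_ID}/base:connectionDetails"
)

request = urllib.request.Request(
    url,
    method="POST",
    headers={"X-API-Key": API_KEY, "Accept": "application/json"},
)
# urllib.request.urlopen(request) would return the JSON response whose
# access token can be substituted into the snowSQL template.
```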
Now you can perform a variety of tasks such as:
Querying Data: execute SQL queries against tables, views, and other database objects to retrieve data from the Snowflake data warehouse.
Creating and Managing Database Objects: create tables, views, stored procedures, functions, and other database objects in Snowflake. You can also modify and delete these objects as needed.
Loading Data: load data into Snowflake from various sources such as local files, AWS S3, Azure Blob Storage, or Google Cloud Storage.
Overall, the snowSQL CLI provides a powerful and flexible interface to Snowflake, allowing external users to manage the data warehouse and perform a variety of tasks efficiently and effectively without access to the ICA core.
Show all tables in the database:
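A minimal example, using standard Snowflake syntax:

```sql
SHOW TABLES;
```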
Create a new table:
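A sketch; the table and column names (FeatureCounts, sample_name, count) are assumptions matching the examples later in this tutorial:

```sql
CREATE TABLE IF NOT EXISTS FeatureCounts (
    sample_name VARCHAR,
    count INTEGER
);
```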
List records in a table:
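For example (the table name FeatureCounts is an assumption):

```sql
SELECT * FROM FeatureCounts LIMIT 10;
```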
Load data from a file: To load data from a file, start by creating a staging area in the internal storage using the following command:
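A sketch; the stage name my_stage is a placeholder:

```sql
CREATE OR REPLACE STAGE my_stage;
```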
You can then upload the local file to the internal storage using the following command:
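For example (the local path and stage name are placeholders):

```sql
PUT file:///path/to/data.tsv @my_stage;
```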
You can check if the file was uploaded properly using LIST command:
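For example:

```sql
LIST @my_stage;
```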
Finally, load the data using the COPY INTO command. The command assumes data.tsv is a tab-delimited file; you can easily modify it to import a JSON file by setting TYPE=JSON.
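A sketch, assuming a FeatureCounts table and a my_stage internal stage:

```sql
COPY INTO FeatureCounts
FROM @my_stage/data.tsv
FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = '\t' SKIP_HEADER = 0);
```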
Load data from a string: If you have data as a JSON string, you can import it into a table using the following commands.
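A sketch using PARSE_JSON; the table name, column names, and JSON payload are assumptions:

```sql
INSERT INTO FeatureCounts (sample_name, count)
SELECT j:sample_name::string, j:count::integer
FROM (SELECT PARSE_JSON('{"sample_name": "sampleX", "count": 42}') AS j);
```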
Load data into specific columns: If you want to load only sample_name into the table, remove count from both the column list and the value list, as below:
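A sketch, assuming a FeatureCounts table with a sample_name column:

```sql
INSERT INTO FeatureCounts (sample_name)
SELECT j:sample_name::string
FROM (SELECT PARSE_JSON('{"sample_name": "sampleX"}') AS j);
```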
List the views of the database to which you are connected. Since shared database and catalogue views are created within the project database, they will be listed. However, this does not show views granted via another database or role, or from bundles.
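For example:

```sql
SHOW VIEWS;
```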
Show grants, both granted directly on tables and views and granted to roles that in turn have grants on tables and views.
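For example (the table and role names are illustrative):

```sql
SHOW GRANTS ON TABLE FeatureCounts;
SHOW GRANTS TO ROLE SYSADMIN;
```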