Pipelines
A Pipeline is a series of Tools with connected inputs and outputs configured to execute in a specific order.
Linking Existing Pipelines
Linking a pipeline (Projects > your_project > Flow > Pipelines > Link) adds that pipeline to your project. The pipeline is not added as a copy, but as the actual pipeline, so any changes to the pipeline are automatically propagated to and from any project which has it linked.
You can link a pipeline if it is not already linked to your project and it is from your tenant or available in your bundle or activation code.
If you unlink a pipeline, it is removed from your project, but it remains in the list of pipelines of your tenant, so it can be linked to other projects later on.
Create a Pipeline
Pipelines are created and stored within projects.
- Navigate to Projects > your_project > Flow > Pipelines > +Create. 
- Configure pipeline settings in the pipeline property tabs. 
- When creating a graphical CWL pipeline, drag connectors to link tools to input and output files in the canvas. Required tool inputs are indicated by a yellow connector. 
- Select Save. 
Pipelines use the tool definitions as they existed when the pipeline was last saved. Tool changes do not automatically propagate to the pipeline. To update the pipeline with the latest tool changes, edit the pipeline definition by removing the tool and re-adding it.
Pipeline Statuses
For pipeline authors sharing and distributing their pipelines, the draft, released, deprecated, and archived statuses provide a structured framework for managing pipeline availability, user communication, and transition planning. To change the pipeline status, select it at Projects > your_project > Pipelines > your_pipeline > change status.
Draft
Use the draft status while developing or testing a pipeline version internally.
Only share draft pipelines with collaborators who are actively involved in development.
Released
The released status signals that a pipeline is stable and ready for general use.
Share your pipeline when it is ready for broad use. Ensure users have access to current documentation and know where to find support or updates. Releasing a pipeline is only possible if all tools of that pipeline are in released status.
Deprecated
Deprecation is used when a pipeline version is scheduled for retirement or replacement. Deprecated pipelines cannot be linked to bundles, but are not unlinked from existing bundles. Users who already have access can still start analyses. You can add a message (max 256 chars) when deprecating pipelines.
Deprecate in advance of archiving a pipeline, and make sure the new pipeline is available in the same bundle as the deprecated one. This allows the pipeline author to point to the new or alternative pipeline in the deprecation message field.
Archived
Archiving a pipeline version removes it from active use; users can no longer launch analyses. Archived pipelines cannot be linked to bundles, but are not automatically unlinked from bundles or projects. You can add a message (max 256 chars) when archiving pipelines.
Warn users in advance: deprecate the pipeline before archiving to give existing users time to transition, and use the archive message to point users to the new or alternative pipeline.
Pipeline Properties
The following sections describe the properties that can be configured in each tab of the pipeline editor.
Depending on how you design the pipeline, the displayed tabs differ between the graphical and code definitions. For CWL, you can choose between graphical and code definitions; Nextflow pipelines are always defined in code mode.
CWL Graphical
- Details 
- Documentation 
- Definition 
- Analysis Report 
- Metadata Model 
- Report 
CWL Code
- Details 
- Documentation 
- Inputform files (JSON) or XML Configuration (XML) 
- CWL Files 
- Metadata Model 
- Report 
Nextflow Code
- Details 
- Documentation 
- Inputform Files (JSON) or XML Configuration (XML) 
- Nextflow files 
- Metadata Model 
- Report 
Any additional source files related to your pipeline will be displayed here in alphabetical order.
See the language-specific documentation pages for details on defining pipelines.
Details
The details tab provides options for configuring basic information about the pipeline.
Code
The name of the pipeline. The name must be unique within the tenant, including linked and unlinked pipelines.
Nextflow Version
The Nextflow version used to run the pipeline. This is user-selectable and available only for Nextflow pipelines.
Categories
One or more tags to categorize the pipeline. Select from existing tags or type a new tag name in the field.
Description
A short description of the pipeline.
Proprietary
Hide the pipeline scripts and details from users who do not belong to the tenant who owns the pipeline. This also prevents cloning the pipeline.
Status
The release status of the pipeline.
Storage size
User selectable storage size for running the pipeline. This must be large enough to run the pipeline, but setting it too large incurs unnecessary costs.
Family
A group of pipeline versions. To specify a family, select Change, and then select a pipeline or pipeline family. To change the order of the pipeline, select Up or Down. The first pipeline listed is the default and the remainder of the pipelines are listed as Other versions. The current pipeline appears in the list as this pipeline.
Version comment
A description of changes in the updated version.
Links
External reference links (names up to 100 characters; links up to 2048 characters).
The following information becomes visible when viewing the pipeline details.
ID
Unique Identifier of the pipeline.
URN
Identification of the pipeline in Uniform Resource Name (URN) format.
The clone action is shown at the top right of the pipeline details. Cloning a pipeline allows you to make modifications without impacting the original pipeline, and you become the owner of the cloned pipeline. The clone must be given a name that is unique within your tenant, because duplicate names are not allowed across all projects of a tenant. You may still see the same pipeline name twice when a pipeline linked from another tenant is cloned under that name in your tenant; each name remains unique within its own tenant, but both appear in yours.
When you clone a Nextflow pipeline, a verification of the configured Nextflow version is done to prevent the use of deprecated versions.
Documentation
The Documentation tab is the place where you explain to users how your pipeline works. The description appears in the tool repository but is excluded from exported CWL definitions. If no documentation has been provided, this tab is empty.
Definition (Graphical)
When using graphical mode for the pipeline definition, the Definition tab provides options for configuring the pipeline using a visualization panel and a list of component menus.
Machine profiles
Compute types available to use with Tools in the pipeline.
Shared settings
Settings shared by more than one tool in the pipeline.
Reference files
Descriptions of reference files used in the pipeline.
Input files
Descriptions of input files used in the pipeline.
Output files
Descriptions of output files used in the pipeline.
Tool
Details about the tool selected in the visualization panel.
Tool repository
A list of tools available to be used in the pipeline.
In graphical mode, you can drag and drop inputs into the visualization panel to connect them to the tools. Make sure to connect the input icons to the tool before editing the input details in the component menu. Required tool inputs are indicated by a yellow connector.
Safari is not supported as a browser for graphical editing.
XML Configuration / JSON Inputform Files (Code)
This page is used to specify all relevant information about the pipeline parameters.
Compute Resources
Compute Nodes
For each process defined by the workflow, ICA will launch a compute node to execute the process.
- For each compute type, the standard (default, AWS on-demand) or economy (AWS spot instance) tier can be selected.
- When selecting an fpga instance type for running analyses on ICA, it is recommended to use the medium size. While the large size offers slight performance benefits, these do not proportionately justify the associated cost increase for most use cases.
- When no type is specified, the default compute node type is standard-small.
By default, compute nodes have no scratch space. This is an advanced setting and should only be used when absolutely necessary as it will incur additional costs and may offer only limited performance benefits because it is not local to the compute node.
For simplicity and better integration, consider using the shared storage available at /ces, which is provided with the Small/Medium/Large+ compute types. This shared storage is used when writing files with relative paths.
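As a minimal sketch of this behavior (the command and file names are placeholders), a Nextflow process that writes its output with a relative path places that file on the shared storage:

```nextflow
process COUNT_READS {
    input:
    path reads

    // A relative output path is written to the shared storage mounted at /ces,
    // which is available with the Small/Medium/Large+ compute types.
    output:
    path "counts.txt"

    script:
    """
    wc -l ${reads} > counts.txt
    """
}
```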
Compute Types
Daemon sets and system processes consume approximately 1 CPU and 2 GB Memory from the base values shown in the table. Consumption will vary based on the activity of the pod.
| Compute Type | CPUs | Mem (GiB) | Nextflow (pod.value) | CWL (type, size) |
| --- | --- | --- | --- | --- |
| standard-small | 2 | 8 | standard-small | standard, small |
| standard-medium | 4 | 16 | standard-medium | standard, medium |
| standard-large | 8 | 32 | standard-large | standard, large |
| standard-xlarge | 16 | 64 | standard-xlarge | standard, xlarge |
| standard-2xlarge | 32 | 128 | standard-2xlarge | standard, 2xlarge |
| standard-3xlarge | 64 | 256 | standard-3xlarge | standard, 3xlarge |
| hicpu-small | 16 | 32 | hicpu-small | hicpu, small |
| hicpu-medium | 36 | 72 | hicpu-medium | hicpu, medium |
| hicpu-large | 72 | 144 | hicpu-large | hicpu, large |
| himem-small | 8 | 64 | himem-small | himem, small |
| himem-medium | 16 | 128 | himem-medium | himem, medium |
| himem-large | 48 | 384 | himem-large | himem, large |
| himem-xlarge ² | 92 | 700 | himem-xlarge | himem, xlarge |
| hiio-small | 2 | 16 | hiio-small | hiio, small |
| hiio-medium | 4 | 32 | hiio-medium | hiio, medium |
| fpga2-medium ¹ | 24 | 256 | fpga2-medium | fpga2, medium |
| fpga2-large ¹ | 48 | 512 | fpga2-large | fpga2, large |
| fpga-medium ³ | 16 | 244 | fpga-medium | fpga, medium |
| fpga-large ³ | 64 | 976 | fpga-large | fpga, large |
| transfer-small ⁴ | 4 | 10 | transfer-small | transfer, small |
| transfer-medium ⁴ | 8 | 15 | transfer-medium | transfer, medium |
| transfer-large ⁴ | 16 | 30 | transfer-large | transfer, large |
¹ DRAGEN pipelines running on the fpga2 compute type will incur a DRAGEN license cost of 0.10 iCredits per gigabase of data processed, with additional discounts as shown below.
- 80 gigabase or less per sample - no discount - 0.10 iCredits per gigabase
- > 80 to 160 gigabase per sample - 20% discount - 0.08 iCredits per gigabase
- > 160 to 240 gigabase per sample - 30% discount - 0.07 iCredits per gigabase
- > 240 to 320 gigabase per sample - 40% discount - 0.06 iCredits per gigabase
- > 320 gigabase per sample - 50% discount - 0.05 iCredits per gigabase
If your DRAGEN job fails, only the compute cost is charged; no DRAGEN license cost will be charged.
DRAGEN Iterative gVCF Genotyper (iGG) will incur a license cost of 0.6216 iCredits per gigabase. For example, a sample against the 3.3-gigabase human reference results in approximately 2 iCredits per sample. The associated compute costs are based on the compute instance chosen.
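As a rough illustration of the fpga2 license tiers above (assuming the discounted tier rate applies to the full sample size, as the tier list suggests), the cost per sample can be sketched as:

```groovy
// Illustrative sketch only: tiered fpga2 DRAGEN license cost in iCredits.
// Assumes the tier rate applies to the entire sample size.
def dragenLicenseCost(double gigabase) {
    def rate = gigabase <= 80  ? 0.10
             : gigabase <= 160 ? 0.08
             : gigabase <= 240 ? 0.07
             : gigabase <= 320 ? 0.06
             : 0.05
    return gigabase * rate
}
// e.g. a 100-gigabase sample falls in the 20% discount tier: 100 * 0.08 = 8 iCredits
```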
³ FPGA1 instances will be decommissioned by Nov 1st 2025. Please migrate to F2 for improved capacity and performance, with up to 40% reduced turnaround time for analysis.
Nextflow/CWL Files (Code)
Syntax highlighting is determined by the file type, but you can select alternative syntax highlighting with the drop-down selection list. The following formats are supported:
- DIFF (.diff) 
- GROOVY (.groovy .nf) 
- JAVASCRIPT (.js .javascript) 
- JSON (.json) 
- SH (.sh) 
- SQL (.sql) 
- TXT (.txt) 
- XML (.xml) 
- YAML (.yaml .cwl) 
Main.nf (Nextflow code)
The Nextflow project main script.
Nextflow.config (Nextflow code)
The Nextflow configuration settings.
Workflow.cwl (CWL code)
The Common Workflow Language main script.
Adding Files
To make pipelines more modular and manageable, you can add multiple files by selecting the +Create option at the bottom of the screen.
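For example, with Nextflow DSL2 an additional module file added this way can be pulled into the main script with an include statement (the module path and process name here are hypothetical):

```nextflow
// main.nf -- sketch of a modular pipeline layout
include { FASTQC } from './modules/fastqc.nf'

workflow {
    reads_ch = Channel.fromPath(params.reads)
    FASTQC(reads_ch)
}
```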
Metadata Model
See Metadata Models
Report
This tab lets you define patterns for detecting report files in the analysis output. When an analysis result window of this pipeline is opened, an additional tab displays these report files. The goal is to provide a pipeline-specific, user-friendly representation of the analysis result.
To add a report, select the + symbol on the left side. Give the report a unique name, provide a regular expression matching the report file, and optionally select the format of the report. This must be the source format of the report data generated during the analysis.
Start a New Analysis
Use the following instructions to start a new analysis for a single pipeline.
- Select Projects > your_project > Flow > Pipelines. 
- Select the pipeline or pipeline details of the pipeline you want to run. 
- Select Start Analysis. 
- Configure analysis settings. (see below) 
- Select Start Analysis. 
- View the analysis status on the Analyses page.
  - Requested — The analysis is scheduled to begin.
  - In Progress — The analysis is in progress.
  - Succeeded — The analysis is complete.
  - Failed — The analysis has failed.
  - Aborted — The analysis was aborted before completing.
- To end an analysis, select Abort. 
- To perform a completed analysis again, select Re-run. 
Analysis Settings
The Start Analysis screen provides the configuration options for the analysis.
User Reference
The unique analysis name.
Pipeline
This field is not editable, but provides a link to the pipeline in case you want to look up its details.
User tags (optional)
One or more tags used to filter the analysis list. Select from existing tags or type a new tag name in the field.
Notification (optional)
Enter your email address if you want to be notified when the analysis completes.
Output Folder ¹
Select a folder in which the output folder of the analysis should be located. When no folder is selected, the output folder will be located in the root of the project.
When you open the folder selection dialog, you have the option to create a new folder (bottom of the screen). You can create nested folders by using the folder/subfolder syntax.
Do not use a / before the first folder or after the last subfolder in the folder creation dialog.
Logs Folder
Select a folder in which the logs of the analysis should be located. When no logs folder is selected, the logs will be stored as subfolder in the output folder. When a logs folder is selected which is different from the output folder, the outputs and logs folders are separated.
Files that already exist in the logs folder will be overwritten with new versions.
When you open the folder selection dialog, you have the option to create a new folder (bottom of the screen). You can create nested folders by using the folder/subfolder syntax.
Note: Choose a folder that is empty and not in use for other analyses, as files will be overwritten.
Note: Do not use a / before the first folder or after the last subfolder in the folder creation dialog.
Pricing
Select a subscription to which the analysis will be charged.
Input
Select the input files to use in the analysis. (max. 50,000)
Settings (optional)
Provide input settings.
Resources
Select the storage size for your analysis. The available storage sizes depend on your selected Pricing subscription. See Storage for more information.
¹ When using the API, you can redirect analysis outputs to a location outside of the current project.
Aborting Analyses
You can abort a running analysis from either the analysis overview (Projects > your_project > Flow > Analyses > your_analysis > Manage > Abort) or from the analysis details (Projects > your_project > Flow > Analyses > your_analysis > Details tab > Abort).
View Analysis Results
You can view analysis results on the Analyses page or in the output folder on the Data page.
- Select a project, and then select the Flow > Analyses page. 
- Select an analysis. 
- From the Output files tab, expand the list if needed and select an output file.
  - If you want to add or remove any user or technical tags, you can do so from the data details view.
  - If you want to download the file, select Download.
- To preview the file, select the View tab. 
- Return to Flow > Analyses > your_analysis. 
- View additional analysis result information on the following tabs:
  - Details — View information on the pipeline configuration.
  - Report — Shows the reports defined on the pipeline Report tab.
  - Output files — View the output of the analysis.
  - Steps — stderr and stdout information.
  - Nextflow timeline — Nextflow process execution timeline.
  - Nextflow execution — Nextflow analysis report showing the run times, commands, resource usage, and tasks for Nextflow analyses.