Pipelines

A Pipeline is a series of Tools with connected inputs and outputs configured to execute in a specific order.

Create a Pipeline

Pipelines are created and stored within projects.

  1. Navigate to Projects > your_project > Flow > Pipelines.

  2. Select CWL or Nextflow to create a new Pipeline.

  3. Configure pipeline settings in the pipeline property tabs.

  4. When creating a graphical CWL pipeline, drag connectors to link tools to input and output files in the canvas. Required tool inputs are indicated by a yellow connector.

  5. Select Save.

Pipelines use the tool definitions that were current when the pipeline was last saved. Tool changes do not automatically propagate to the pipeline. To update the pipeline with the latest tool changes, edit the pipeline definition by removing the tool and adding it back to the pipeline.

Individual Pipeline files are limited to 100 Megabytes. If you need to add more than this, split your content over multiple files.

You can edit pipelines while they are in Draft or Release Candidate status. Once released, pipelines can no longer be edited.

Status | Description
Draft | Fully editable draft.
Release Candidate | The pipeline is ready for release. Editing is locked but the pipeline can be cloned (top right in the details view) to create a new version.
Released | The pipeline is released. To release a pipeline, all tools of that pipeline must also be in released status. Editing a released pipeline is not possible, but the pipeline can be cloned (top right in the details view) to create a new editable version.

Pipeline Properties

The following sections describe the pipeline properties that can be configured in each tab of the pipeline editor.

Graphical vs Code definition

Depending on how you design the pipeline, the displayed tabs differ between the graphical and code definitions. For CWL, you can choose between the graphical and code definition; Nextflow pipelines are always defined in code mode.

CWL Graphical

  • Information

  • Documentation

  • Definition

  • Analysis Report

  • Metadata Model

CWL Code

  • Information

  • Documentation

  • XML Configuration

  • Metadata Model

  • workflow.cwl

  • New File

Nextflow Code

  • Information

  • Documentation

  • XML Configuration

  • Metadata Model

  • nextflow.config

  • main.nf

  • New File

Any additional source files related to your pipeline are displayed as additional tabs in alphabetical order.

See the CWL and Nextflow documentation pages for language-specific details on defining pipelines.


Information

The Information tab provides options for configuring basic information about the pipeline.

Field | Entry
Code | The name of the pipeline.
Categories | One or more tags to categorize the pipeline. Select from existing tags or type a new tag name in the field.
Description | A short description of the pipeline.
Proprietary | Hides the pipeline scripts and details from users who do not belong to the tenant that owns the pipeline. This also prevents cloning the pipeline.
Status | The release status of the pipeline.
Storage size | User-selectable storage size for running the pipeline. This must be large enough to run the pipeline, but setting it too large incurs unnecessary costs.
Family | A group of pipeline versions. To specify a family, select Change, and then select a pipeline or pipeline family. To change the order of the pipelines, select Up or Down. The first pipeline listed is the default and the remaining pipelines are listed as Other versions. The current pipeline appears in the list as "this pipeline".
Version comment | A description of changes in the updated version.
Links | External reference links (maximum 100 characters for the name and 2048 characters for the link).

The following information becomes visible when viewing the pipeline.

Field | Entry
ID | The unique identifier of the pipeline.
URN | The identifier of the pipeline in Uniform Resource Name (URN) format.
Nextflow Version | The user-selectable Nextflow version (available only for Nextflow pipelines).

In addition, the clone function will be shown (top-right). When cloning a pipeline, you become the owner of the cloned pipeline.

Documentation

The Documentation tab provides options for configuring the HTML description of the pipeline. The description appears in the pipeline repository but is excluded from exported CWL definitions. If no documentation has been provided, this tab is empty.

Definition (Graphical)

When using graphical mode for the pipeline definition, the Definition tab provides options for configuring the pipeline using a visualization panel and a list of component menus.

Menu | Description
Machine profiles | Compute types available to use with tools in the pipeline.
Shared settings | Settings that are used in more than one tool of the pipeline.
Reference files | Descriptions of reference files used in the pipeline.
Input files | Descriptions of input files used in the pipeline.
Output files | Descriptions of output files used in the pipeline.
Tool | Details about the tool selected in the visualization panel.
Tool repository | A list of tools available to be used in the pipeline.

In graphical mode, you can drag and drop inputs into the visualization panel to connect them to the tools. Make sure to connect the input icons to the tool before editing the input details in the component menu. Required tool inputs are indicated by a yellow connector.

XML Configuration (code)

This page is used to specify all relevant information about the pipeline parameters.

Analysis Report (Graphical)

The Analysis Report tab provides options for configuring pipeline execution reports. The report is composed of widgets added to the tab.

Configure Pipeline Analysis Report (Graphical CWL Only)

The pipeline analysis report appears in the pipeline execution results. The report is configured from widgets added to the Analysis Report tab in the pipeline editor.

  1. [Optional] Import widgets from another pipeline.

    1. Select Import from other pipeline.

    2. Select the pipeline that contains the report you want to copy.

    3. Select an import option: Replace current report or Append to current report.

    4. Select Import.

  2. From the Analysis Report tab, select Add widget, and then select a widget type.

  3. Configure widget details.

    Widget | Settings
    Title | Add and format title text.
    Analysis details | Add heading text and select the analysis metadata details to display.
    Free text | Add formatted free text. The widget includes options for placeholder variables that display the corresponding project values.
    Inline viewer | Add options to view the content of an analysis output file.
    Analysis comments | Add comments that can be edited after an analysis has been performed.
    Input details | Add heading text and select the input details to display. The widget includes an option to group details by input name.
    Project details | Add heading text and select the project details to display.
    Page break | Add a page break widget where page breaks should appear between report sections.

  4. Select Save.

Free Text Placeholders

Placeholder | Description
[[BB_PROJECT_NAME]] | The project name.
[[BB_PROJECT_OWNER]] | The project owner.
[[BB_PROJECT_DESCRIPTION]] | The project short description.
[[BB_PROJECT_INFORMATION]] | The project information.
[[BB_PROJECT_LOCATION]] | The project location.
[[BB_PROJECT_BILLING_MODE]] | The project billing mode.
[[BB_PROJECT_DATA_SHARING]] | The project data sharing settings.
[[BB_REFERENCE]] | The analysis reference.
[[BB_USERREFERENCE]] | The user analysis reference.
[[BB_PIPELINE]] | The name of the pipeline.
[[BB_USER_OPTIONS]] | The analysis user options.
[[BB_TECH_OPTIONS]] | The analysis technical options. Technical options include the TECH suffix and are not visible to end users.
[[BB_ALL_OPTIONS]] | All analysis options. Technical options include the TECH suffix and are not visible to end users.
[[BB_SAMPLE]] | The sample.
[[BB_REQUEST_DATE]] | The analysis request date.
[[BB_START_DATE]] | The analysis start date.
[[BB_DURATION]] | The analysis duration.
[[BB_REQUESTOR]] | The user requesting analysis execution.
[[BB_RUNSTATUS]] | The status of the analysis.
[[BB_ENTITLEMENTDETAIL]] | The used entitlement detail.
[[BB_METADATA:path]] | The value or list of values of a metadata field or multi-value fields.

Metadata Model

See Metadata Models

workflow.cwl (code)

The Common Workflow Language main script.

nextflow.config (code)

The Nextflow configuration settings.
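
As an illustration only, a generic nextflow.config sketch follows. All values are placeholders, and which of these settings ICA honors or overrides is not covered here.

```groovy
// Generic Nextflow configuration sketch; all values are placeholders.
docker.enabled = true

process {
    // Defaults for processes that do not declare their own container or resources.
    container = 'ubuntu:22.04'
    cpus = 2
    memory = '8 GB'
}
```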

main.nf (code)

The Nextflow project main script.
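
As an illustration only, a minimal main.nf sketch is shown below. The process, parameter, and file names are hypothetical and not part of any ICA template.

```groovy
#!/usr/bin/env nextflow
// Minimal main.nf sketch; names below are illustrative only.
nextflow.enable.dsl = 2

// Counts the lines of each input file and writes the result to counts.txt.
process COUNT_LINES {
    input:
    path input_file

    output:
    path 'counts.txt'

    script:
    """
    wc -l ${input_file} > counts.txt
    """
}

workflow {
    // 'params.input' is a hypothetical parameter name; use the parameters your
    // pipeline actually defines in its configuration.
    COUNT_LINES(Channel.fromPath(params.input))
}
```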

+ New File (code)

Multiple files can be added to make pipelines more modular and manageable.
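
For example, a hypothetical extra file named align.nf could hold a single process (the name and command are placeholders):

```groovy
// align.nf -- hypothetical file added with "+ New File".
process ALIGN {
    input:
    path reads

    output:
    path 'aligned.bam'

    script:
    """
    # Placeholder command standing in for a real aligner.
    echo "aligning ${reads}" > aligned.bam
    """
}
```

The main script can then pull the process in with an include statement such as include { ALIGN } from './align.nf'.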

Syntax highlighting is determined by the file type, but you can select alternative syntax highlighting with the drop-down selection list. The following formats are supported:

  • DIFF (.diff)

  • GROOVY (.groovy .nf)

  • JAVASCRIPT (.js .javascript)

  • JSON (.json)

  • SH (.sh)

  • SQL (.sql)

  • TXT (.txt)

  • XML (.xml)

  • YAML (.yaml .cwl)

Compute Nodes

For each process defined by the workflow, ICA will launch a compute node to execute the process.

  • For each compute type, the standard (default - AWS on-demand) or economy (AWS spot instance) tiers can be selected.

  • When selecting an fpga instance type for running analyses on ICA, it is recommended to use the medium size. While the large size offers slight performance benefits, these do not proportionately justify the associated cost increase for most use cases.

  • When no type is specified, the default type of compute node is standard-small.

By default, compute nodes have no dedicated scratch space. Attaching scratch space is an advanced setting and should only be used when absolutely necessary, as it incurs additional costs and may offer only limited performance benefits because it is not local to the compute node.

For simplicity and better integration, consider using the shared storage available at /ces, which is provided with the Small/Medium/Large+ compute types. This shared storage is used when writing files with relative paths.

Scratch space

If you do require scratch space via a Nextflow pod annotation or a CWL resource requirement, the path is /scratch (see the Nextflow sketch after this list).

  • For Nextflow, the pod annotation 'volumes.illumina.com/scratchSize' with value '1TiB' reserves 1 TiB.

  • For CWL, adding - class: ResourceRequirement with tmpdirMin: 5000 to your requirements section reserves 5 GiB.
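
A hedged Nextflow sketch of a process that reserves scratch space with the pod annotation above and writes temporary data under /scratch; the process name and command are placeholders.

```groovy
// Illustrative process reserving scratch space via the ICA pod annotation.
process SORT_LARGE_INPUT {
    // Reserve 1 TiB of scratch space for this process.
    pod annotation: 'volumes.illumina.com/scratchSize', value: '1TiB'

    input:
    path input_file

    output:
    path 'sorted.txt'

    script:
    """
    # Point the sort command's temporary files at the reserved scratch volume.
    sort -T /scratch ${input_file} > sorted.txt
    """
}
```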

Avoid the following, as they do not align with the ICAv2 scratch space configuration:

  • Container overlay tmp path: /tmp

  • Legacy paths: /ephemeral

  • Environment Variables ($TMPDIR, $TEMP and $TMP)

  • Bash Command mktemp

  • CWL runtime.tmpdir

Compute Types

Daemon sets and system processes consume approximately 1 CPU and 2 GB of memory from the base values shown in the table. Consumption varies based on the activity of the pod.

Compute Type | CPUs | Mem (GB) | Nextflow (pod.value) | CWL (type, size)
standard-small | 2 | 8 | standard-small | standard, small
standard-medium | 4 | 16 | standard-medium | standard, medium
standard-large | 8 | 32 | standard-large | standard, large
standard-xlarge | 16 | 64 | standard-xlarge | standard, xlarge
standard-2xlarge | 32 | 128 | standard-2xlarge | standard, 2xlarge
hicpu-small | 16 | 32 | hicpu-small | hicpu, small
hicpu-medium | 36 | 72 | hicpu-medium | hicpu, medium
hicpu-large | 72 | 144 | hicpu-large | hicpu, large
himem-small | 8 | 64 | himem-small | himem, small
himem-medium | 16 | 128 | himem-medium | himem, medium
himem-large | 48 | 384 | himem-large | himem, large
himem-xlarge | 96 | 768 | himem-xlarge | himem, xlarge
hiio-small | 2 | 16 | hiio-small | hiio, small
hiio-medium | 4 | 32 | hiio-medium | hiio, medium
fpga-small * | 8 | 122 | fpga-small | fpga, small
fpga-medium | 16 | 244 | fpga-medium | fpga, medium
fpga-large | 64 | 976 | fpga-large | fpga, large
transfer-small ** | 4 | 10 | transfer-small | transfer, small
transfer-medium ** | 8 | 15 | transfer-medium | transfer, medium
transfer-large ** | 16 | 30 | transfer-large | transfer, large

* The compute type fpga-small is no longer available. Use fpga-medium instead. fpga-large offers little performance benefit at additional cost.

** The transfer size is selected based on the chosen storage size and is used during upload and download system tasks.
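
To request one of these compute types from a Nextflow process, a pod annotation is used with the pod.value from the table above. The sketch below is illustrative only: the annotation key is an assumption based on common ICA Nextflow usage (verify it against a reference pipeline in your environment), and the process and command are placeholders.

```groovy
// Illustrative process requesting the himem-small compute type
// (8 CPUs / 64 GB per the table above).
process HEAVY_STEP {
    // Assumed annotation key for selecting the compute type; confirm for your setup.
    pod annotation: 'scheduler.illumina.com/presetSize', value: 'himem-small'

    input:
    path data

    output:
    path 'result.txt'

    script:
    """
    # Placeholder command standing in for real memory-intensive work.
    sort ${data} > result.txt
    """
}
```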


Start a New Analysis

Use the following instructions to start a new analysis for a single pipeline.

  1. Select a project.

  2. From the project menu, select Flow > Pipelines.

  3. Select the pipeline to run.

  4. Select Start a New Analysis.

  5. Configure analysis settings. See Analysis Properties.

  6. Select Start Analysis.

  7. View the analysis status on the Analyses page.

    • Requested—The analysis is scheduled to begin.

    • In Progress—The analysis is in progress.

    • Succeeded—The analysis is complete.

    • Failed and Failed Final—The analysis has failed or was aborted.

  8. To end an analysis, select Abort.

  9. To perform a completed analysis again, select Re-run.

Alternatively, you can start a new analysis from Projects > <Your_Project> > Flow > Analyses:

  1. Select New Analysis.

  2. Select the pipeline.

  3. Configure analysis settings. See Analysis Properties.

  4. Select Start Analysis.

Analysis Properties

The following sections describe the analysis properties that can be configured in each tab.

Analysis

The Analysis tab provides options for configuring basic information about the analysis.

Field | Entry
User Reference | The unique analysis name.
User tags | One or more tags used to filter the analysis list. Select from existing tags or type a new tag name in the field.
Entitlement Bundle | Select a subscription to charge the analysis to.
Input Files | Select the input files to use in the analysis (max. 50,000).
Settings | Provide input settings.

View Analysis Results

You can view analysis results on the Analyses page or in the output_folder on the Data page.

  1. Select a project, and then select the Flow > Analyses page.

  2. Select an analysis.

  3. On the Result tab, select an output file.

  4. To preview the file, select the View tab.

  5. Add or remove any user or technical tags, and then select Save.

  6. To download, select Schedule for Download.

  7. View additional analysis result information on the following tabs:

    • Details—View information on the pipeline configuration.

    • Logs—Download information on the pipeline process.
