Nextflow
ICA supports running pipelines defined using Nextflow. See this tutorial for an example.
To run Nextflow pipelines, consider the following process-level attributes in the Nextflow definition.
System Information
Version 20.10 on Illumina Connected Analytics will be obsoleted on April 22nd, 2026. After this date, all existing pipelines using Nextflow v20.10 will no longer run.
The following table shows when each Nextflow version is:
default (⭐) This version will be proposed when creating a new Nextflow pipeline.
supported (✅) This version can be selected when you do not want the default Nextflow version.
deprecated (⚠️) This version cannot be selected for new pipelines, but existing pipelines using this version will still work.
removed (❌) This version cannot be selected when creating new pipelines, and pipelines using this version will no longer work.
The switchover happens in the January release of that year.
Nextflow Version
You can select the Nextflow version while building a pipeline.
Compute Type
To specify a compute type for a Nextflow process, you can either define cpus and memory (recommended) or use the compute type predefined sizes (required for specific hardware such as FPGA2).
Do not mix these definition methods within the same process; use one method or the other.
CPU and Memory
Specify the task resources using Nextflow directives in both the workflow script (.nf) and the configuration file (nextflow.config): cpus defines the number of CPU cores allocated to the process, and memory defines the amount of RAM that will be allocated.
Process file example
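A minimal sketch of such a process definition; the process name, input, and script body are illustrative:

```nextflow
process ALIGN_READS {
    cpus 4
    memory '16 GB'

    input:
    path reads

    script:
    """
    echo "aligning ${reads} on ${task.cpus} CPUs"
    """
}
```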
Configuration file example
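A corresponding nextflow.config sketch; the process selector name is illustrative:

```nextflow
process {
    withName: 'ALIGN_READS' {
        cpus   = 4
        memory = 16.GB
    }
}
```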
ICA will convert the required resources to the correct predefined size. This enables porting public Nextflow pipelines without configuration changes.
Predefined Sizes
To use the predefined sizes, use the pod directive within each process. Set the annotation to scheduler.illumina.com/presetSize and the value to the desired compute type. The default compute type, when this directive is not specified, is standard-small (2 CPUs and 8 GB of memory).
For example, if you want to use FPGA 2 medium, you need to add the line below
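A hedged sketch of that pod directive; the exact preset value string (fpga2-medium here) is an assumption and should be checked against the ICA compute type table:

```nextflow
// assumed preset name for FPGA 2 medium; verify against the ICA compute type table
pod annotation: 'scheduler.illumina.com/presetSize', value: 'fpga2-medium'
```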
Often, there is a need to select the compute size for a process dynamically based on user input and other factors. The Kubernetes executor used on ICA does not use the cpus and memory directives, so instead, you can dynamically set the pod directive, as mentioned here, e.g.
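A sketch of a dynamically chosen preset size; params.large_run is a hypothetical pipeline flag, and the preset names should be checked against the ICA compute type table:

```nextflow
process VARIANT_CALL {
    // pick a preset size at run time based on a pipeline parameter
    pod annotation: 'scheduler.illumina.com/presetSize',
        value: params.large_run ? 'standard-large' : 'standard-small'

    script:
    """
    call_variants.sh
    """
}
```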
It can also be specified in the configuration file. See the example configuration below:
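An illustrative configuration-file equivalent; the process name is hypothetical:

```nextflow
process {
    withName: 'VARIANT_CALL' {
        pod = [annotation: 'scheduler.illumina.com/presetSize', value: 'standard-large']
    }
}
```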
Standard vs Economy
Concept
For each compute type, you can choose between the following tiers:
scheduler.illumina.com/lifecycle: standard (AWS on-demand, the default)
scheduler.illumina.com/lifecycle: economy (AWS spot instances)
Availability
Standard: Guaranteed capacity with full control of starting, stopping, and terminating.
Economy: Not guaranteed; depends on unused AWS capacity. Instances can be terminated and reclaimed by AWS with 2 minutes' notice when the capacity is needed elsewhere.
Best for
Standard: Ideal for critical workloads and urgent scaling needs.
Economy: Best for cost optimization and non-critical workloads, as interruptions can occur at any time.
Configuration
You can switch to economy in the process itself with the pod directive or in the nextflow.config file.
Process example
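An illustrative process-level sketch; the process name and script are placeholders:

```nextflow
process SORT_BAM {
    // run this task on the economy (spot) tier
    pod annotation: 'scheduler.illumina.com/lifecycle', value: 'economy'

    script:
    """
    sort_bam.sh
    """
}
```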
nextflow.config example
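An illustrative pipeline-wide sketch:

```nextflow
process {
    // apply the economy tier to every process in the pipeline
    pod = [annotation: 'scheduler.illumina.com/lifecycle', value: 'economy']
}
```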
Inputs
Inputs are specified via the JSON-based input form or XML input form. The specified code in the XML will correspond to the field in the params object that is available in the workflow. Refer to the tutorial for an example.
Outputs
Outputs for Nextflow pipelines are uploaded from the out folder in the attached shared filesystem. The publishDir directive can be used to symlink (recommended), copy or move data to the correct folder. Symlinking is faster and does not increase storage cost as it creates a file pointer instead of copying or moving data. Data will be uploaded to the ICA project after the pipeline execution completes.
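A sketch of a process publishing to the out folder via symlink; the process name and output file are illustrative:

```nextflow
process REPORT {
    // symlink outputs into the shared 'out' folder; ICA uploads them after the run completes
    publishDir 'out', mode: 'symlink'

    output:
    path 'report.html'

    script:
    """
    echo '<html></html>' > report.html
    """
}
```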
Nextflow version 20.10.10 (Deprecated)
Version 20.10 will be obsoleted on April 22nd, 2026. After this date, all existing pipelines using Nextflow v20.10 will no longer be able to run.
For Nextflow version 20.10.10 on ICA, using the "copy" method in the publishDir directive for uploading output files that consume large amounts of storage may cause workflow runs to complete with missing files. The underlying issue is that file uploads may silently fail (without any error messages) during the publishDir process due to insufficient disk space, resulting in incomplete output delivery.
Solutions:
Use "symlink" instead of "copy" in the publishDir directive. Symlinking creates a link to the original file rather than copying it, which doesn't consume additional disk space. This can prevent the issue of silent file upload failures due to disk space limitations.
Use Nextflow 22.04 or later and enable the "failOnError" publishDir option. This option ensures that the workflow will fail and provide an error message if there's an issue with publishing files, rather than completing silently without all expected outputs.
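For illustration, a minimal sketch of the failOnError option; the process name and output are placeholders:

```nextflow
process EXPORT {
    // fail the run with an error if publishing cannot complete (requires Nextflow 22.04+)
    publishDir 'out', mode: 'copy', failOnError: true

    output:
    path 'results.txt'

    script:
    """
    echo done > results.txt
    """
}
```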
Nextflow Configuration
During execution, the Nextflow pipeline runner determines the environment settings based on values passed via the command-line or via a configuration file (see Nextflow Configuration documentation). When creating a Nextflow pipeline, use the nextflow.config tab in the UI (or API) to specify a nextflow configuration file to be used when launching the pipeline.
Syntax highlighting is determined by the file type, but you can select alternative syntax highlighting with the drop-down selection list.

If no Docker image is specified, Ubuntu will be used as default.
The following configuration settings will be ignored if provided as they are overridden by the system:
Best Practices
Process Time
Setting a timeout of between 2 and 4 times the expected processing time with the time directive for processes or tasks ensures that no stuck processes remain indefinitely. Stuck processes keep incurring costs for the occupied resources, so if a process cannot complete within that timespan, it is safer and more economical to end the process and retry.
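As a sketch, for a process expected to take roughly 2 hours, a 3x timeout could look like this (the process name and script are illustrative):

```nextflow
process ASSEMBLE {
    // expected runtime is ~2 hours; allow 3x headroom before the task is killed
    time '6h'

    script:
    """
    run_assembly.sh
    """
}
```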
Sample Sheet File Ingestion
When you want to use a sample sheet with references to files as Nextflow input, add an extra input to the pipeline. This extra input lets the user select the samplesheet-mentioned files from their project. At run time, those files will get staged in the working directory, and when Nextflow parses the samplesheet and looks for those files without paths, it will find them there. You cannot use file paths in a sample sheet without selecting the files in the input form, because files are only passed as file/folder IDs in the API payload when the analysis is launched.
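As a sketch, assuming a samplesheet with a fastq column containing bare file names (the files themselves selected via the extra input and staged into the working directory):

```nextflow
// samplesheet.csv (hypothetical layout):
// sample,fastq
// s1,s1_R1.fastq.gz
Channel
    .fromPath(params.samplesheet)
    .splitCsv(header: true)
    // bare file names resolve against the staged working directory
    .map { row -> tuple(row.sample, file(row.fastq)) }
    .set { reads_ch }
```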
You can include public data such as HTTP URLs because Nextflow is able to download those. Nextflow is also able to download publicly accessible S3 URLs (s3://...). You cannot use Illumina's urn:ilmn:ica:region:... structure.