Bundles are curated data sets which combine assets such as pipelines, tools, and Base query templates. This is where you will find packaged assets such as Illumina-provided pipelines and sample data. You can create, share and use bundles in projects of your own tenant as well as projects in other tenants.
The following ICA assets can be included in bundles:
Some bundles come with additional restrictions such as disabling bench access or internet access when running pipelines to protect the data contained in them. When you link these bundles, the restrictions will be enforced on your project. Unlinking the bundle will not remove the restrictions.
As of ICA v2.29, the content in bundles is linked in such a way that any updates to a bundle are automatically propagated to the projects which have that bundle linked.
If you have created bundle links in ICA versions prior to ICA v2.29 and want to switch them over to links with dynamic updates, you need to unlink and relink them.
From the main navigation page, select Projects > your_project > Project Settings > Details.
Click the Edit button at the top of the Details page.
Click the + button under Linked bundles.
Click on the desired bundle, then click the +Link Bundles button.
Click Save.
The assets included in the bundle will now be available in the respective pages within the Project (e.g. Data and Pipelines pages). Any updates to the assets will be automatically available in the destination project.
To unlink a bundle from a project:
Select Projects > your_project > Project Settings > Details.
Click the Edit button at the top of the Details page.
Click the (-) button, next to the linked bundle you wish to remove.
Bundles and projects have to be in the same region in order to be linked. Otherwise, the error The bundle is in a different region than the project so it's not eligible for linking will be displayed.
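The eligibility rule above can be sketched as a simple check (illustrative only, not an ICA API; ICA enforces this server-side):

```python
def can_link(bundle_region: str, project_region: str) -> bool:
    # A bundle is eligible for linking only when its region
    # matches the project's region.
    return bundle_region == project_region

print(can_link("US", "US"))  # True
print(can_link("US", "EU"))  # False
```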
To create a new bundle and configure its settings, do as follows.
From the main navigation, select Projects > your_project > Bundles.
Select + Create.
Enter a unique name for the bundle.
From the Region drop-down list, select where the assets for this bundle should be stored.
[Optional] Configure the following settings.
Categories—Select an existing category or enter a new one.
Status—Set the status of the bundle. When the status of a bundle changes, it cannot be reverted to a draft or released state.
Draft—The bundle can be edited.
Released—The bundle is released. Technically, you can still edit bundle information and add assets to the bundle, but should refrain from doing so.
Deprecated—The bundle is no longer intended for use. By default, deprecated bundles are hidden on the main Bundles screen (unless non-deprecated versions of the bundle exist). Select "Show deprecated bundles" to show all deprecated bundles. Bundles cannot be recovered from deprecated status.
Short Description—Enter a description for the bundle.
Metadata Model—Select a metadata model to apply to the bundle.
Enter a release version for the bundle and optionally enter a description for the version.
[Optional] Links can be added with a display name (max 100 chars) and URL (max 2048 chars).
Homepage
License
Links
Publications
[Optional] Enter any information you would like to distribute with the bundle in the Documentation section.
Select Save.
To make changes to a bundle:
From the main navigation, select Bundles.
Select a bundle.
Select Edit.
Modify the bundle information and documentation as needed.
Select Save.
To add assets to a bundle:
Select a bundle.
On the left-hand side, select the type of asset under Flow (such as pipeline or tool) you want to add to the bundle.
Depending on the asset type, select add or link to bundle.
Select the assets and confirm.
Assets must meet the following requirements before they can be added to a bundle:
For Samples and Data, the project the asset belongs to must have data sharing enabled.
The region of the project containing the asset must match the region of the bundle.
You must have permission to access the project containing the asset.
Pipelines and tools need to be in released status.
Samples must be available in a complete state.
When you link folders to a bundle, a warning is displayed indicating that, depending on the size of the folder, linking may take considerable time. The linking process runs in the background and its progress can be monitored on the Bundles > your_bundle > Activity > Batch Jobs screen. To see more details and the progress, double-click the batch job and then double-click the individual item. This shows how many individual files are already linked.
Which batch jobs are visible as activity depends on the user role.
When creating a new bundle version, you can only add assets to the bundle. You cannot remove existing assets from a bundle when creating a new version. If you need to remove assets from a bundle, it is recommended that you create a new bundle. All users who currently have access to a bundle will automatically have access to the new version as well.
From the main navigation, select Bundles.
Select a bundle.
Select + Create new Version.
Make updates as needed and update the version number.
Select Save.
When you create a new version of a bundle, it will replace the old version in your list. To see the old version, open your new bundle and look at Bundles > your_bundle > Details > Versioning. There you can open the previous version which is contained in your new version.
Assets such as data which were added in a previous version of your bundle will be marked in green, while new content will be black.
From the main navigation, Select Bundles > your_bundle > Bundle Settings > Legal.
To add Terms of Use to a Bundle, do as follows:
Select + Create New Version.
Use the WYSIWYG editor to define Terms of Use for the selected bundle.
Click Save.
[Optional] Require acceptance by clicking the checkbox next to Acceptance required.
Acceptance required will prompt a user to accept the Terms of Use before being able to use a bundle or add the bundle to a project.
To edit the Terms of Use, repeat Steps 1-3 and use a unique version name. If you select acceptance required, you can choose to keep the acceptance status as is or require users to reaccept the terms of use. When reacceptance is required, users need to reaccept the terms in order to continue using this bundle in their pipelines. This is indicated when they want to enter projects which use this bundle.
If you want to collaborate with other people on creating a bundle and managing the assets in the bundle, you can add users to your bundle and set their permissions. You use this to create a bundle together, not to use the bundle in your projects.
From the main navigation, select Bundles > your_bundle > Bundle Settings > Team.
To invite a user to collaborate on the bundle, do as follows.
To add a user from your tenant, select Someone of your tenant and select a user from the drop-down list.
To add a user by their email address, select By email and enter their email address.
To add all the users of an entire workgroup, select Add workgroup and select a workgroup from the drop-down list.
Select the Bundle Role drop-down list and choose a role for the user or workgroup. This role defines the ability of the user or workgroup to view or edit bundle settings.
Viewer: view content without editing rights.
Contributor: view bundle content and link/unlink assets.
Administrator: full edit rights of content and configuration.
Repeat as needed to add more users.
Users are not officially added to the bundle until they accept the invitation.
To change the permissions role for a user, select the Bundle Role drop-down list for the user and select a new role.
To revoke bundle permissions from a user, select the trash icon for the user.
Select Save Changes.
Once you have finalized your bundle and added all assets and legal requirements, you can share your bundle with other tenants to use it in their projects.
Your bundle must be in released status to prevent it from being updated while it is shared.
Go to Bundles > your_bundle > Edit > Details > Bundle status and set it to Released.
Save the change.
Once the bundle is released, you can share it. Invitations are sent to an individual email address, however access is granted and extended to all users and all workgroups inside that tenant.
Go to Bundles > your_bundle > Bundle Settings > Share.
Click Invite and enter the email address of the person you want to share the bundle with. They will receive an email from which they can accept or reject the invitation to use the bundle. The invitation will show the bundle name, description and owner. The link in the invite can only be used once.
You can follow up on the status of the invitation on the Bundles > your_bundle > Bundle Settings > Share page.
If they reject the bundle, the rejection date will be shown. To re-invite that person later, select their email address in the list and choose Remove. You can then create a new invitation. If you do not remove the old entry before sending a new invitation, they will be unable to accept and will get an error message stating that the user and bundle combination must be unique. They also cannot re-use an invitation once it has been accepted or declined.
If they accept the bundle, the acceptance date will be shown. They will in turn see the bundle under Bundles > Entitled bundles. To remove access, select their email address in the list and choose Remove.
Entitled bundles are bundles created by Illumina or third parties for you to use in your projects. Entitled bundles can already be part of your tenant when it is part of your subscription. You can see your entitled bundles at Bundles > Entitled Bundles.
To use your shared entitled bundle, add the bundle to your project via Project Linking. Content shared via entitled bundles is read-only, so you cannot add or modify the contents of an entitled bundle. If you lose access to an entitled bundle previously shared with you, the bundle is unlinked and you will no longer be able to access its contents.
Learn how to .
The and documentation pages match navigation within ICA. We also offer supporting documentation for popular topics like , , and .
For more content on topics like , , , and other resources, view the section.
(link / unlink)
(link / unlink)
(add / delete)
(link/unlink)
and (link/unlink)
(read-only) (link/unlink)
The main Bundles screen has two tabs: My Bundles and Entitled Bundles. The My Bundles tab shows all the bundles that you are a member of. This tab is where most of your interactions with bundles occur. The Entitled Bundles tab shows the bundles that have been specially created by Illumina or other organizations and shared with you to use in your projects. See .
You cannot link bundles which come with additional restrictions to .
Illumina® Connected Analytics is a cloud-based software platform intended to be used to manage, analyze, and interpret large volumes of multi-omics data in a secure, scalable, and flexible environment. The versatility of the system allows the platform to be used for a broad range of applications. When using the applications provided on the platform for diagnostic purposes, it is the responsibility of the user to determine regulatory requirements and to validate for intended use, as appropriate.
The platform is hosted in the regions listed below.
Australia (AU)
Canada (CA)
Germany (EU)
India (IN)
Indonesia (ID)
Japan (JP)
Singapore (SG)
South Korea (KR)
United Kingdom (GB)
United Arab Emirates (AE)
United States (US)
The platform hosts a suite of RESTful HTTP-based application programming interfaces (APIs) to perform operations on data and analysis resources. A web application user-interface is hosted alongside the API to deliver an interactive visualization of the resources and enables additional functionality beyond automated analysis and data transfer. Storage and compute costs are presented via usage information in the account console, and a variety of compute resource options are specifiable for applications to fine tune efficiency.
Use the search bar on the top right to navigate through the help docs and find specific topics of interest.
If you have any questions, contact Illumina Technical Support by phone or email:
Illumina Technical Support | techsupport@illumina.com | 1-800-809-4566
For customers outside the United States, Illumina regional Technical Support contact information can be found at www.illumina.com/company/contact-us.html.
To see the current ICA version you are logged in to, click your username found on the top right of the screen and then select About.
To view a list of the products to which you have access, select the 9 dots symbol at the top right of ICA. This will list your products. If you have multiple regional applications for the same product, the region of each is shown between brackets.
The More Tools category presents the following options:
My Illumina Dashboard to monitor instruments, streamline purchases and keep track of upcoming activities.
Link to the Support Center for additional information and help.
Link to the order management from where you can keep track of your current and past orders.
Illumina Connected Analytics allows you to create and assign metadata to capture additional information about samples.
Each tenant has one root metadata model that is accessible to all projects in the tenant. This allows an organization to collect the same piece of information for every sample in every project in the tenant, such as an ID number. Within this root model, you can configure multiple metadata submodels, even at different levels.
If there are any misconfigured items in the root model, they will carry over into all other metadata models in the tenant. Once a root model is published, the fields and groups that are defined within it cannot be deleted. When configuring a project, you have the option to assign one published metadata model for all samples in the project. This metadata model can be the root model, a submodel of the root model, or a submodel of a submodel: any published metadata model in the tenant. When a metadata model is selected for a project, all fields configured for that model, and all fields in any parent models, are applied to the samples in the project.
❗️ Illumina recommends that you limit the number of fields or field groups you add to the root model. Consider creating submodels before adding anything to the root model.
The following terminology is used within this page:
Metadata fields = Fields that will be linked to a sample in the context of a project. They can be of various types and can contain single or multiple values.
Metadata groups = You can indicate that a few fields belong together (for example, they all relate to quality metrics). That is the moment to create a group, so that users know these fields belong together.
Root model = The model that is linked to the tenant. Every metadata model that you link to a project also contains the fields and groups specified in this model, as it is the parent model for all other models. This is a subcategory of a project metadata model.
Child/Sub model = Any metadata model that is not the root model. Child models inherit all fields and groups from their parent models. This is a subcategory of a project metadata model.
Pipeline model = A model that is linked to a specific pipeline instead of a project.
Metadata in the context of ICA always gives information about a sample. It can be provided by the user, by the pipeline, or via the API. There are two general categories of metadata models: the Project Metadata Model and the Pipeline Metadata Model. Both are built from metadata fields and groups. The project metadata model is specific per tenant, while the pipeline metadata model is linked to a pipeline and can be shared across tenants. These models are defined by users.
Each sample can have multiple metadata models. Whenever you link a project metadata model to your project, you will see its groups and fields present on each sample. The root model from that tenant will also be present, as every metadata model inherits the groups and fields specified in its parent metadata model(s). When a pipeline with a metadata model is executed on a sample, its groups and fields will also be present for each analysis that comes out of the pipeline execution.
The following field types are used within ICA:
Text: Free text
Keyword: Automatically complete value based on already used values
Numeric: Only numbers
Boolean: True or false, cannot be multiple value
Date: e.g. 23/02/2022
Date time: e.g. 23/02/2022 11:43:53, saved in UTC
Enumeration: select value out of drop-down list
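The field types above can be illustrated with client-side checks such as the following sketch (this is not an ICA API; ICA enforces field types server-side):

```python
from datetime import datetime

# Illustrative validators mirroring a few of the ICA field types.
validators = {
    "Text":    lambda v: isinstance(v, str),
    "Numeric": lambda v: isinstance(v, (int, float)) and not isinstance(v, bool),
    "Boolean": lambda v: isinstance(v, bool),
    # Date fields use day/month/year, e.g. 23/02/2022
    "Date":    lambda v: bool(datetime.strptime(v, "%d/%m/%Y")),
}

print(validators["Numeric"](42))     # True
print(validators["Boolean"]("yes"))  # False: a string, not a real boolean
```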
The following properties can be selected for groups & fields:
Required: Pipeline can’t be started with this sample until the required group/field is filled in
Sensitive: Values of this group/field are only visible to project users of the own tenant. When a sample is shared across tenants, these fields won't be visible
Filled by pipeline: Fields that need to be filled by pipeline should be part of the same group. This group will automatically be multiple value and values will be available after pipeline execution. This property is only available for groups
Multiple value: This group/field can consist of multiple (grouped) values
❗️ Fields cannot be both required and filled by pipeline
The project metadata model links metadata to a specific project. Values are known upfront, general information is required for each sample of a specific project, and it may include general mandatory company information.
The pipeline metadata model links metadata to a specific pipeline. Values are populated during the pipeline execution, and it requires an output file with the name 'metadata.response.json'.
❗️ Field groups should be used when configuring metadata fields that are filled by a pipeline. These fields should be part of the same field group and be configured with the Multiple Value setting enabled
Newly created and updated metadata models are not available for use within the tenant until the metadata model is published. When a metadata model is published, fields and field groups cannot be deleted, but the names and descriptions for fields and field groups can be edited. A model can be published after verifying all parent models are published first.
If a published metadata model is no longer needed, you can retire the model (except the root model).
First, check if the model contains any submodels. A model cannot be retired if it contains any published submodels.
When you are certain you want to retire a model and all submodels are retired, click on the three dots in the top right of the model window, and then select Retire Metadata Model.
To add metadata to your samples, you first need to assign a metadata model to your project.
Go to Projects > your_project > Project Settings > Details.
Select Edit.
From the Metadata Model drop-down list, select the metadata model you want to use for the project.
Select Save. All fields configured for the metadata model, and all fields in any parent models are applied to the samples in the project.
To manually add metadata to samples in your project, do as follows.
A precondition is that you have a metadata model assigned to your project.
Go to Projects > your_project > Samples > your_sample.
Double-click your sample to open the sample details.
Enter all metadata information as it applies to the selected sample. All required metadata fields must be populated or the pipeline cannot start.
Select Save.
To fill metadata by pipeline executions, a pipeline model must be created.
In the Illumina Connected Analytics main navigation, go to Projects > your_project > Flow > Pipelines > your_pipeline.
Double-click on your pipeline to open the pipeline details.
Create/Edit your model under Metadata Model tab. Field groups should be used when configuring metadata fields that are filled by a pipeline. These fields should be part of the same field group and be configured with the Multiple Value setting enabled.
In order for your pipeline to fill the metadata model, an output file with the name metadata.response.json must be generated. After adding your group fields to the pipeline model, click Generate example JSON to view the required format for your pipeline.
❗️ The field names cannot have . in them, e.g. for the metric name Q30 bases (excl. dup & clipped bases), the . after excl must be removed.
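As an illustration only, a pipeline step could write such a file as sketched below. The group and field names here ("quality_metrics", "total_reads", "passed_filter") are hypothetical; the Generate example JSON button is the authoritative source for the exact structure your own model requires.

```python
import json

# Hypothetical group and field names, for illustration only.
metadata = {
    "quality_metrics": [            # a multiple-value, filled-by-pipeline group
        {"total_reads": 1000000,    # numeric field
         "passed_filter": True}     # boolean field
    ]
}

# The pipeline must emit this exact filename for ICA to pick it up.
with open("metadata.response.json", "w") as fh:
    json.dump(metadata, fh, indent=2)
```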
Populating the metadata models of samples gives you a sample-centric view of all the metadata. It is also possible to synchronize that data into your project's Base warehouse.
In the Illumina Connected Analytics main navigation, select Projects.
In your project menu select Schedule.
Select 'Add new', and then click on the Metadata Schedule option.
Type a name for your schedule, optionally add a description, and select whether the metadata source should be the current project or the entire tenant. It is also possible to select whether ICA references should be anonymized and whether sensitive metadata fields should be included. As a reminder, values of sensitive metadata fields are not visible to other users outside of the project.
Select Save.
Navigate to Tables under the Base menu in your project.
Two new table schemas should be added with your current metadata models.
The user documentation provides material for learning the basics of interacting with the platform including examples and tutorials. Start with the documentation to learn more.
In the section of the documentation, posts are made for new versions of deployments of the core platform components.
The event log shows an overview of system events with options to search and filter. For every entry, it lists the following:
Event date and time
Category (error, warn or info)
Code
Description
Tenant
Up to 200,000 results will be returned. If your desired records are outside the range of the returned records, please refine the filters or use the search function at the top right.
Export is restricted to the number of entries shown per page. You can use the selector at the bottom to set this to up to 1000 entries per page.
In order to create a Tool or Bench image, a Docker image is required to run the application in a containerized environment. Illumina Connected Analytics supports both public Docker images and private Docker images uploaded to ICA.
Navigate to System Settings > Docker Repository.
Click Create > External image to add a new external image.
Add your full image URL in the Url field, e.g. docker.io/alpine:latest or registry.hub.docker.com/library/alpine:latest. Docker Name and Version will auto-populate. (Tip: do not add http:// or https:// in your URL.)
Note: Do not use :latest when the repository has rate limiting enabled as this interferes with caching and incurs additional data transfer.
(Optional) Complete the Description field.
Click Save.
The newly added image will appear in your Docker Repository list.
Verification of the URL is performed during execution of a pipeline which depends on the Docker image, not during configuration.
External images are accessed from the external source whenever required and not stored in ICA. Therefore, it is important not to move or delete the external source. There is no status displayed on external Docker repositories in the overview as ICA cannot guarantee their availability. The use of :stable instead of :latest is recommended.
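The URL tips above can be sketched as a quick pre-flight check (illustrative only, not an ICA API; ICA performs its own verification at pipeline execution time):

```python
def check_image_url(url: str) -> list[str]:
    """Flag common issues in an external image URL, per the tips above."""
    warnings = []
    if url.startswith(("http://", "https://")):
        warnings.append("remove the http(s):// scheme")
    if url.rsplit(":", 1)[-1] == "latest":
        warnings.append("prefer :stable over :latest")
    return warnings

print(check_image_url("docker.io/alpine:latest"))  # ['prefer :stable over :latest']
print(check_image_url("docker.io/alpine:stable"))  # []
```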
In order to use private images in your tool, you must first upload them as a TAR file.
Navigate to Projects > your_project.
Select your uploaded TAR file and click Manage > Change Format in the top menu.
Navigate to System Settings > Docker Repository (outside of your project).
Click on Create > Image.
Click on the magnifying glass to find your uploaded TAR image file.
Select the appropriate region and if needed, filter on project from the drop-down menus to find your file.
Select that file.
The newly added image should appear in your Docker Repository list. Verify it is marked as Available under the Status column to ensure it is ready to be used in your tool or pipeline.
Navigate to System Settings > Docker Repository.
Either
Select the required image(s) and go to Manage > Add Region.
OR double-click on a required image, check the box matching the region you want to add, and select update.
In both cases, allow a few minutes for the image to become available in the new region (the status becomes available in table view).
To remove regions, go to Manage > Remove Region or unselect the regions from the Docker image detail view.
You can download your created Docker images at System Settings > Docker Images > your_Docker_image > Manage > Download.
In order to be able to download Docker images, the following requirements must be met:
The Docker image cannot be from an entitled bundle.
Only self-created Docker images can be downloaded.
The Docker image must be an internal image and in status Available.
You can only select a single Docker image at a time for download.
Docker image size should be kept as small as practically possible. To this end, it is best practice to compress the image. After compressing and uploading the image, select your uploaded file and click Manage > Change Format in the top menu to change it to Docker format so ICA can recognize the file.
The platform requires a provisioned tenant in the Illumina account management system with access to the Illumina Connected Analytics (ICA) application. Once a tenant has been provisioned, a tenant administrator will be assigned. The tenant administrator has permission to manage account access including add users, create workgroups, and add additional tenant administrators.
Each tenant is assigned a domain name used to log in to the platform. The domain name is used in the login URL to navigate to the appropriate login page in a web browser. The login URL is https://<domain>.login.illumina.com, where <domain> is substituted with the domain name assigned to the tenant.
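The substitution can be sketched as follows (the tenant name "mytenant" is a made-up example):

```python
def login_url(domain: str) -> str:
    # Substitute the tenant's assigned domain name into the login URL pattern.
    return f"https://{domain}.login.illumina.com"

print(login_url("mytenant"))  # https://mytenant.login.illumina.com
```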
New user accounts can be created for a tenant by navigating to the domain login URL and following the links on the page to set up a new account with a valid email address. Once the account has been added to the domain, the tenant administrator may assign registered users to workgroups with permission to use the ICA application. Registered users may also be made workgroup administrators by tenant administrators or existing workgroup administrators.
For security reasons, it is best practice to not use accounts with administrator level access to generate API keys and instead create a specific CLI user with basic permission. This will minimize the possible impact of compromised keys.
Click the button to generate a new API Key. Provide a name for the API Key. Then choose to either include all workgroups or select the workgroups to be included. Selected workgroups will be accessible with the API Key.
Click to generate the API Key. The API Key is then presented (hidden) with a button to show the key so it can be copied, and a link to download it to a file to be stored securely for future reference. Once the window is closed, the key contents will not be accessible through the domain login page, so be sure to store the key securely if needed for future reference.
After generating an API key, save the key somewhere secure to be referenced when using the command-line interface or APIs.
On the left, you have the navigation bar which will auto-collapse on smaller screens. When collapsed, use the ≡ symbol to expand it.
The central part of the display is the item on which you are performing your actions.
At the top right, you have icons to refresh the screen for information, status updates, and access to the online help.
The object data models for resources that are created in the platform include a unique id field for identifying the resource. These fixed machine-readable IDs are used for accessing and modifying the resource through the API or CLI, even if the resource name changes.
Accessing the platform APIs requires authorizing calls using JSON Web Tokens (JWT). A JWT is a standardized trusted claim containing authentication context. This is a primary security mechanism to protect against unauthorized cross-account data access.
A JWT is generated by providing user credentials (API Key or username/password) to the token creation endpoint. Token creation can be performed using the API directly or the CLI.
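To illustrate the structure of a JWT, the sketch below builds an unsigned sample token and decodes its claims. This is purely didactic: real platform JWTs are signed and must be obtained from the token creation endpoint, and the claim names here ("sub", "tid") are illustrative assumptions.

```python
import base64
import json

def b64url(raw: bytes) -> str:
    # base64url without padding, as used in JWT segments.
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

# Build an unsigned sample token to show the three-segment structure.
header  = b64url(json.dumps({"alg": "RS256", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"sub": "user-123", "tid": "tenant-abc"}).encode())
token = f"{header}.{payload}.signature"

# A JWT is three dot-separated base64url segments; the middle segment
# carries the claims (the authentication context).
seg = token.split(".")[1]
claims = json.loads(base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4)))
print(claims["sub"])  # user-123
```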
A storage configuration provides ICA with information to connect to an external cloud storage provider, such as AWS S3. The storage configuration validates that the information provided is correct, and then continuously monitors the integration.
Refer to the following pages for instructions to setup supported external cloud storage providers:
The storage configuration requires credentials to connect to your storage. AWS uses the security credentials to authenticate and authorize your requests. On the System Settings > Credentials > Create page, you can enter these credentials. Long-term access keys consist of the access key ID and secret access key as a set.
Fill out the following fields:
Type—The type of access credentials. This will usually be AWS user.
Name—Provide a name to easily identify your access key.
Access key ID—The access key you created.
Secret access key—Your related secret access key.
In the ICA main navigation, select System Settings > Storage > Create.
Configure the following settings for the storage configuration.
Type—Use the default value (e.g., AWS_S3). Do not change it.
Region—Select the region where the bucket is located.
Configuration name—You will use this name when creating volumes that reside in the bucket. The name length must be between 3 and 63 characters.
Description—Here you can provide a description for yourself or other users to identify this storage configuration.
Bucket name—Enter the name of your S3 bucket.
Key prefix [Optional]—You can provide a key prefix to allow only files inside the prefix to be accessible. The key prefix must end with "/".
If a key prefix is specified, your projects will only have access to that folder and subfolders. For example, using the key prefix folder-1/ ensures that only the data from the folder-1 directory in your S3 bucket is synced with your ICA project. Using prefixes and distinct folders for each ICA project is the recommended configuration as it allows you to use the same S3 bucket for different projects.
Using no key prefix results in syncing all data in your S3 bucket (starting from root level) with your ICA project. Your project will have access to your entire S3 bucket, which prevents that S3 bucket from being used for other ICA projects. Although possible, this configuration is not recommended.
Secret—Select the credentials to associate with this storage configuration. These were created on the Credentials tab.
Server Side Encryption [Optional]—If needed, you can enter the algorithm and key name for server-side encryption processes.
Select Save.
With the action Set as default for region, you select which storage will be used as default storage in a region for new projects of your tenant. Only one storage can be default at a time for a region, so selecting a new storage as default will unselect the previous default. If you do not want to have a default, you can select the default storage and the action will become Unset as default for region.
The System Settings > Credentials > Share action is used to make the storage available to everyone in your tenant. By default, storage is private per user so that you have complete control over the contents. Once you decide you want to share the storage, simply select it and use the Share action. Do take into account that once shared, you can not unshare the storage. Once your storage is used in a project, it can also no longer be deleted.
Filenames beginning with / are not allowed, so be careful when entering full path names. Otherwise the file will end up on S3 but not be visible in ICA. If this happens, access your S3 storage directly and copy the data to where it was intended. If you are using an Illumina-managed S3 storage, submit a support request to delete the erroneous data.
Every 4 hours, ICA will verify the storage configuration and credentials to ensure availability. When an error is detected, ICA will attempt to reconnect once every 15 minutes. After 200 consecutively failed connection attempts (50 hours), ICA will stop trying to connect.
When you update your credentials, the storage configuration is automatically validated. In addition, when ICA has stopped trying to connect, you can manually trigger revalidation by selecting the storage and clicking Validate under System Settings > Storage > Manage.
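The retry arithmetic above works out as follows (a quick check, nothing more):

```python
# ICA retries a failed storage connection every 15 minutes and gives up
# after 200 consecutive failed attempts.
RETRY_INTERVAL_MINUTES = 15
MAX_ATTEMPTS = 200

total_hours = MAX_ATTEMPTS * RETRY_INTERVAL_MINUTES / 60
print(total_hours)  # 50.0 hours before ICA stops trying to connect
```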
| Storage class | Status in ICA |
| --- | --- |
| S3 Standard | Available |
| S3 Intelligent-Tiering | Available |
| S3 Express One Zone | Available |
| S3 Standard-IA | Available |
| S3 One Zone-IA | Available |
| S3 Glacier Instant Retrieval | Available |
| S3 Glacier Flexible Retrieval | Archived |
| S3 Glacier Deep Archive | Archived |
| Reduced redundancy (not recommended) | Available |
A Tool is the definition of a containerized application with defined inputs, outputs, and execution environment details including compute resources required, environment variables, command line arguments, and more.
Tools define the inputs, parameters, and outputs for the analysis. Tools are available for use in graphical CWL pipelines by any project in the account.
Select System Settings > Tool Repository > + Create.
Configure tool settings in the tool properties tabs. See Tool Properties.
Select Save.
The following sections describe the tool properties that can be configured in each tab.
Name
The name of the tool.
Categories
One or more tags to categorize the tool. Select from existing tags or type a new tag name in the field.
Icon
The icon for the tool.
Description
Free text description for information purposes.
Status
The release status of the tool.
Docker image
The registered Docker image for the tool.
Regions
The regions supported by the linked Docker image.
Tool version
The version of the tool, specified by the end user. This can be any string.
Release version
The version number of the tool.
Family
A group of tools or tool versions.
Version comment
A description of changes in the updated version.
Links
External reference links.
Tool Status
The release status of the tool. It can be one of "Draft", "Release Candidate", "Released", or "Deprecated".
Draft
Fully editable draft.
Release Candidate
The tool is ready for release. Editing is locked but the tool can be cloned to create a new version.
Released
The tool is released. Editing is locked, but the tool can be cloned to create a new version.
Deprecated
The tool is no longer intended for use in pipelines, but no restrictions are placed on it. That is, it can still be added to new pipelines and will continue to work in existing pipelines; the status merely indicates to the user that the tool should no longer be used.
The Documentation tab provides options for configuring the HTML description for the tool. The description appears in the Tool Repository but is excluded from exported CWL definitions.
The General Tool tab provides options to configure the basic command line.
ID
CWL identifier field
CWL version
The CWL version in use. This field cannot be changed.
Base command
Components of the command. Each argument must be added on a separate line.
Standard in
The name of the file that captures Standard In (STDIN) stream information.
Standard out
The name of the file that captures Standard Out (STDOUT) stream information.
Standard error
The name of the file that captures Standard Error (STDERR) stream information.
Requirements
The requirements for triggering an error message.
Hints
The requirements for triggering a warning message.
The Hints/Requirements include CWL features to indicate capabilities expected in the Tool's execution environment.
Inline Javascript
The Tool contains a property with a JavaScript expression to resolve its value.
Initial workdir
The workdir can be any of the following types:
String or Expression — A string or JavaScript expression, e.g., $(inputs.InputFASTA)
File or Dir — A map of one or more files or directories, in the following format: {type: array, items: [File, Directory]}
Dirent — A script in the working directory. The Entry name field specifies the file name.
Scatter feature — Indicates that the workflow platform must support the scatter and scatterMethod fields.
The Tool Arguments tab provides options to configure base command parameters that do not require user input.
Tool arguments may be one of two types:
String or Expression — A literal string or JavaScript expression, e.g., --format=bam.
Binding — An argument constructed from the binding of an input parameter.
The following table describes the argument input fields.
| Field | Description | Type |
| --- | --- | --- |
| Value | The literal string to be added to the base command. | String or expression |
| Position | The position of the argument in the final command line. If the position is not specified, the default value is set to 0 and the arguments appear in the order they were added. | Binding |
| Prefix | The string prefix. | Binding |
| Item separator | The separator that is used between array values. | Binding |
| Value from | The source string or JavaScript expression. | Binding |
| Separate | The setting to require the Prefix and Value from fields to be added as separate or combined arguments. True indicates the fields must be added as separate arguments. False indicates the fields must be added as a single concatenated argument. | Binding |
| Shell quote | The setting to quote the Value from field on the command line. True indicates the value is quoted so that shell metacharacters are not interpreted. False indicates the value is passed unquoted. | Binding |
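The effect of the Separate flag on the final command line can be sketched as follows (illustrative; the function is hypothetical, not ICA's code):

```python
# Hypothetical sketch of how the Separate flag combines a prefix with its
# value when the final command line is built.
def bind(prefix: str, value: str, separate: bool = True) -> list:
    """Return the command-line fragment for one bound argument."""
    return [prefix, value] if separate else [prefix + value]

print(bind("-o", "out.bam", separate=True))      # ['-o', 'out.bam']
print(bind("--format=", "bam", separate=False))  # ['--format=bam']
```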
Example:
- Prefix: --output-filename
- Value from: $(inputs.inputSAM.nameroot).bam
- Input file: /tmp/storage/SRR45678_sorted.sam
- Output file: SRR45678_sorted.bam
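In raw CWL, the example above corresponds roughly to the following fragment (a sketch showing only the relevant fields):

```yaml
inputs:
  inputSAM:
    type: File
arguments:
  - prefix: --output-filename
    # nameroot of /tmp/storage/SRR45678_sorted.sam is SRR45678_sorted,
    # so this becomes: --output-filename SRR45678_sorted.bam
    valueFrom: $(inputs.inputSAM.nameroot).bam
```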
The Tool Inputs tab provides options to define the input files and directories for the tool. The following table describes the input and binding fields. Selecting multi value enables type binding options for adding prefixes to the input.
ID
The file ID.
Label
A short description of the input.
Description
A long description of the input.
Type
The input type, which can be either a file or a directory.
Input options
Checkboxes to add the following options. Optional indicates the input is optional. Multi value indicates there is more than one input file or directory. Streamable indicates the file is read or written sequentially without seeking.
Secondary files
The required secondary files or directories.
Format
The input file format.
Position
The position of the argument in the final command line. If the position is not specified, the default value is set to 0 and the arguments appear in the order they were added.
Prefix
The string prefix.
Item separator
The separator that is used between array values.
Value from
The source string or JavaScript expression.
Load contents
Automatically loads the file contents. The system reads up to the first 64 KiB of text from the file and populates the contents field.
Separate
The setting to require the Prefix and Value from fields to be added as separate or combined arguments. True indicates the fields must be added as separate arguments. False indicates the fields must be added as a single concatenated argument.
Shell quote
The setting to quote the Value from field on the command line. True indicates the value is quoted so that shell metacharacters are not interpreted. False indicates the value is passed unquoted.
The Tool Settings tab provides options to define parameters that can be set at the time of execution. The following table describes the input and binding fields. Selecting multi value enables type binding options for adding prefixes to the input.
ID
The file ID.
Label
A short description of the input.
Description
A long description of the input.
Default Value
The default value to use if the tool setting is not available.
Type
The input type, which can be Boolean, Int, Long, Float, Double or String.
Input options
Checkboxes to add the following options. Optional indicates the input is optional. Multi value indicates there can be more than one value for the input.
Position
The position of the argument in the final command line. If the position is not specified, the default value is set to 0 and the arguments appear in the order they were added.
Prefix
The string prefix.
Item separator
The separator that is used between array values.
Value from
The source string or JavaScript expression.
Separate
The setting to require the Prefix and Value from fields to be added as separate or combined arguments. True indicates the fields must be added as separate arguments. False indicates the fields must be added as a single concatenated argument.
Shell quote
The setting to quote the Value from field on the command line. True indicates the value is quoted so that shell metacharacters are not interpreted. False indicates the value is passed unquoted.
The Tool Outputs tab provides options to define the parameters of output files.
The following table describes the input and binding fields. Selecting multi value enables type binding options for adding prefixes to the input.
ID
The file ID.
Label
A short description of the input.
Description
A long description of the input.
Type
The input type, which can be either a file or a directory.
Output options
Checkboxes to add the following options. Optional indicates the output is optional. Multi value indicates there is more than one output file or directory. Streamable indicates the file is read or written sequentially without seeking.
Secondary files
The required secondary files or directories.
Format
The input file format.
Globs
The pattern for searching file names.
Load contents
Automatically loads the file contents. The system extracts up to the first 64 KiB of text from the file and populates the contents field.
Output eval
Evaluate an expression to generate the output value.
The Tool CWL tab displays the complete CWL code constructed from the values entered in the other tabs. The CWL code automatically updates when changes are made in the tool definition tabs, and any changes to the CWL code are reflected in the tool definition tabs.
❗️ Modifying data within the CWL editor can result in invalid code.
From the System Settings > Tool Repository page, select a tool.
Select Edit.
From the System Settings > Tool Repository page, select a tool.
Select the Information tab.
From the Status drop-down menu, select a status.
Select Save.
In addition to the interactive Tool builder, the platform GUI also supports working directly with the raw definition when developing a new Tool. This provides the ability to write the Tool definition manually or bring an existing Tool's definition to the platform.
A simple example CWL Tool definition is provided below.
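For instance, a minimal CommandLineTool that echoes a message might look like this (a generic example in the style of the public CWL specification, not an Illumina-provided tool; the cwlVersion may differ from the one ICA assigns):

```yaml
cwlVersion: v1.0
class: CommandLineTool
label: Echo a message
baseCommand: echo
inputs:
  message:
    type: string
    inputBinding:
      position: 1
outputs:
  output:
    type: stdout
stdout: output.txt
```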
When creating a new Tool, navigate to System Settings > Tool Repository > your_tool > Tool CWL tab to show the raw CWL definition. Here a CWL CommandLineTool definition may be pasted into the editor. After pasting into the editor, the definition is parsed and the other tabs for visually editing the Tool will populate according to the definition contents.
General Tool - includes your base command and various optional configurations.
The base command is required for your tool to run, e.g. python /path/to/script.py, entered such that python and /path/to/script.py are added as separate lines.
Inline Javascript requirement - must be enabled if you are using Javascript anywhere in your tool definition.
Initial workdir requirement - Dirent Type
Your tool must point to a script that executes your analysis. That script can either be provided in your Docker image or defined using a Dirent. Defining a script via Dirent allows you to dynamically modify your script without updating your Docker image. To define your Dirent script, enter your script name under Entry name (e.g. runner.sh) and the script content under Entry. Then point your base command to that custom script, e.g. bash runner.sh.
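In raw CWL, such a Dirent setup might look like the following sketch (runner.sh and its contents are placeholders):

```yaml
requirements:
  InitialWorkDirRequirement:
    listing:
      - entryname: runner.sh
        entry: |
          #!/usr/bin/env bash
          set -e
          echo "Running analysis on $1"
baseCommand: [bash, runner.sh]
```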
❗ What's the difference between Settings and Arguments?
Settings are exposed at the pipeline level and can be modified at launch, while Arguments are intended to be immutable and hidden from users launching the pipeline.
How do you reference tool inputs and settings throughout the tool definition?
You can reference your inputs using either their position or their ID.
Settings can be referenced using their defined IDs, e.g. $(inputs.InputSetting)
All inputs can also be referenced using their position, e.g. bash script.sh $1 $2
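Both styles can be sketched in CWL as follows (illustrative IDs; position-bound inputs arrive in the script as $1 and $2):

```yaml
baseCommand: [bash, script.sh]
inputs:
  InputFile:
    type: File
    inputBinding:
      position: 1      # arrives in script.sh as $1
  InputSetting:
    type: string
    inputBinding:
      position: 2      # arrives in script.sh as $2
# Elsewhere in the definition, the same setting can be referenced by ID,
# e.g. valueFrom: $(inputs.InputSetting)
```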
When looking at the main ICA navigation, you will see the following structure:
Projects are your primary work locations which contain your data and tools to execute your analyses. Projects can be considered as a binder for your work and information. You can have data contained within a project, or you can choose to make it shareable between projects.
Reference Data are reference genome sets which you use to help look for deviations and to compare your data against.
Bundles are packages of assets such as sample data, pipelines, tools and templates which you can use as a curated data set. Bundles can be provided both by Illumina and other providers, and you can even create your own bundles. You will find the Illumina-provided pipelines in bundles.
Audit/Event Logs are used for audit purposes and issue resolving.
System Settings contain general information such as the location of storage space, Docker images, and tool repositories.
Projects are the main dividers in ICA. They provide an access-controlled boundary for organizing and sharing resources created in the platform. The Projects view is used to manage projects within the current tenant.
Note that there is a combined limit of 30,000 projects and bundles per tenant.
To create a new project, click the Projects > + Create Project button.
Required fields include:
Name
1-255 characters
Must begin with a letter
Characters are limited to alphanumerics, hyphens, underscores, and spaces
Analysis Priority (Low/Medium (default)/High) Priorities are balanced per tenant: high-priority analyses are started first, and the system progresses to the next lower priority once all higher-priority analyses are running. Balance your priorities so that lower-priority projects do not wait for resources indefinitely.
Project Owner Owner (and usually contact person) of the project. The project owner has the same rights as a project administrator but cannot be removed from a project without first assigning another project owner. Reassignment can be done by the current project owner, the tenant administrator, or a project administrator of the current project at Projects > your_project > Project Settings > Team > Edit.
Project Location Select your project location. Options available are based on Entitlement(s) associated with purchased subscription.
Storage Bundle (auto-selected based on user selection of Project Location)
Click the Save button to finish creating the project. The project will be visible from the Projects view.
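The naming rules above can be sketched as a client-side check (assumed from the description; the server performs the authoritative validation):

```python
import re

# Project-name rules as described above: 1-255 characters, must begin with
# a letter, limited to alphanumerics, hyphens, underscores, and spaces.
NAME_RE = re.compile(r"[A-Za-z][A-Za-z0-9_\- ]{0,254}")

def is_valid_project_name(name: str) -> bool:
    return NAME_RE.fullmatch(name) is not None

print(is_valid_project_name("My Project-1"))  # True
print(is_valid_project_name("1project"))      # False: must begin with a letter
```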
During project creation, select the I want to manage my own storage checkbox to use a Storage Configuration as the data provider for the project.
With a storage configuration set, a project has a two-way sync with the external cloud storage provider: any data added directly to the external storage is synced into the ICA project data, and any data added to the project is synced into the external cloud storage.
Several tools are available to assist you with keeping an overview of your projects. These filters work in both list and tile view and persist across sessions.
Searching is a case-insensitive wildcard filter: any project which contains the entered characters will be shown. Use * as a wildcard in searches. Be aware that operators without search words are blocked and will result in the error "Unexpected error occurred when searching for projects". You can use brackets and the AND, OR, and NOT operators, provided you do not start the search with them (Monkey AND Banana is allowed; AND Aardvark by itself is invalid syntax).
Filter by Workgroup : Projects in ICA can be accessible for different workgroups. This drop-down list allows you to filter projects for specific workgroups. To reset the filter so it displays projects from all your workgroups, use the x on the right which appears when a workgroup is selected.
Hidden projects : You can hide projects (Projects > your_project > Details > Hide) which you no longer use. Hiding will delete data in Base and Bench and is thus irreversible.
You can still see hidden projects if you select this option and delete the data they contain at Projects > your_project > Data to save on storage costs.
If you are using your own S3 bucket, your S3 storage will be unlinked from the project, but the data will remain in your S3 storage. Your S3 storage can then be used for other projects.
Favorites : By clicking the star next to the project name in the tile view, you set a project as a favorite. You can have multiple favorites and use the Favorites checkbox to show only those favorites. This prevents having too many projects visible.
Tile view shows a grid of projects. This view is best suited if you only have a few projects or have filtered them out by creating favourites. A single click will open the project.
List view shows a list of projects. This view allows you to add additional filters on name, description, location, user role, tenant, size and analyses. A double-click is required to open the project.
Illumina software applications which do their own data management on ICA (such as BSSH) store their resources and data in a project much in the same way as manually created projects in ICA. ICA considers these projects externally-managed, and a number of restrictions apply to which actions are allowed on them. For example, you cannot delete or move externally-managed data. This prevents inconsistencies when these applications access their own project data.
When you create a folder with a name which already exists as an externally-managed folder, your project will contain that folder twice: once ICA-managed and once externally-managed, as S3 does not require unique folder names.
Projects are indicated as externally-managed in the projects overview screen by a project card with a light grey accent and a lock symbol followed by "managed by app".
Upload your private image as a TAR file, either by dragging and dropping the file in the Data tab, using the CLI or a Connector. For more information please refer to the project .
Select DOCKER from the drop-down menu and Save.
Select the appropriate region, fill in the Docker Name and Version, indicate whether it is a tool or a bench image, and click Save.
You need a with a download rule to download the Docker image.
New users may reference the for detailed guidance on setting up an account and registering a subscription.
For more details on identity and access management, please see the help site.
To access the APIs using the command-line interface (CLI), an API Key may be provided as credentials when logging in. API Keys operate similarly to a user name and password and should be kept secure and rotated regularly (preferably yearly). When keys are compromised or no longer in use, they must be revoked. This is done through the by navigating to the profile drop-down and selecting "Manage API Keys", then selecting the key and using the trash icon next to it.
For long-lived credentials to the API, an API Key can be generated from the account console and used with the API and command-line interface. Each user is limited to 10 API Keys. API Keys are managed through the product dashboard after logging in through the by navigating to the profile drop down and selecting "Manage API Keys".
The web application provides a visual user interface (UI) for navigating resources in the platform, managing projects, and extended features beyond the API. To access the web application, navigate to the .
The command-line interface offers a developer-oriented experience for interacting with the APIs to manage resources and launch analysis workflows. Find instructions for using the command-line interface including download links for your operating system in the .
The HTTP-based application programming interfaces (APIs) are listed in the section of the documentation. The reference documentation provides the ability to call APIs from the browser page and shows detailed information about the API schemas. HTTP client tooling such as Postman or cURL can be used to make direct calls to the API outside of the browser.
When accessing the API using the API Reference page or through REST client tools, the Authorization
header must be provided with the value set to Bearer <token>
where <token>
is replaced with a valid JSON Web Token (JWT). For generating a JWT, see .
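Assembling that header can be sketched as follows (the token value is a placeholder; generate a real JWT first):

```python
# Placeholder value; replace with a real JWT generated as described above.
token = "<token>"

headers = {"Authorization": f"Bearer {token}"}
print(headers["Authorization"])  # Bearer <token>
```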
For more information, refer to the documentation.
ICA performs a series of steps in the background to verify the connection to your bucket. This can take several minutes. You may need to manually refresh the list to verify that the bucket was successfully configured. Once the storage configuration setup is complete, the configuration can be used while .
Refer to this for the troubleshooting guide.
ICA supports the following storage classes. Please see the for more information on each:
If you are using , which allows S3 to automatically move files into different cost-effective storage tiers, please do NOT include the Archive and Deep Archive Access tiers, as these are not supported by ICA yet. Instead, you can use lifecycle rules to automatically move files to Archive after 90 days and Deep Archive after 180 days. Lifecycle rules are supported for user-managed buckets.
Refer to the for further explanation about many of the properties described below. Not all features described in the specification are supported.
File/Directory inputs can be referenced using their defined IDs, followed by the desired field, e.g. $(inputs.InputFile.path)
. For additional information please refer to the .
On the project creation screen, add information to create a project. See page for information about each field.
Refer to the documentation for details on creating a storage configuration.
Hiding projects is not possible for projects.
If you are missing projects, especially those created by other users, the workgroup filter might still be active. Clear the filter with the x to the right. You can verify the list of projects to which you have access with icav2 projects list.
What you can do is add and data such as to externally managed projects. Separation of data is ensured by only allowing additional files at the root level or in dedicated subfolders which you can create in your projects. Data which you have added can be moved and deleted again.
You can add to externally managed projects, provided those bundles do not come with additional restrictions for the project.
You can start workspaces in externally-managed projects. The resulting data will be stored in the externally-managed project.
Tertiary modules such as are not supported for externally-managed projects.
Externally-managed projects protect their notification subscriptions to ensure no user can delete them. It is possible to add your own subscriptions to externally-managed projects, see for more information.
For a better understanding of how all components of ICA work, try the .