Published: July 30, 2024
Last Updated: June 07, 2024
Latest Patch: 6.2.1
These Release Notes describe the key changes made to software components for Clarity LIMS v6.2.1. The Clarity LIMS v6.2.1 patch release supersedes v6.2.0. It is intended for customers on or migrating to Clarity LIMS v6.2.0.
This release resolves customer-reported issues related to the following functions:
Search-indexer performance
Basic search failure when searching with multiple custom fields in the system
Handling of controls for a configured step
There are no new features added with this release.
Fixed a bug where the search-indexer did not consume messages from the queue quickly enough. This issue caused data in the system to be non-searchable.
Fixed a bug where bulk indexing for a project, container, step, and file failed due to a memory space limitation in the search-indexer. This bug occurred when the indexing batch size was too large. The indexing batch size can now be modified with an attribute in /opt/gls/clarity/search-indexer/conf/search-index-config.properties.
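To locate the relevant attribute without guessing its name, the configuration file can be inspected directly, for example:

```
grep -i batch /opt/gls/clarity/search-indexer/conf/search-index-config.properties
```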
Fixed a bug where the basic search failed when searching with large numbers of custom fields in the system.
Fixed a bug where details about the step failed to show when selecting the LIMS ID link for steps in the Advanced Search results.
Fixed a bug where controls and samples had to be added in a specific order for the configured step. This issue caused an error that prevented the configured protocol step from starting.
Fixed a bug where a step could not be started when controls and samples were in the standard step and the Enable QC flag was set to Yes.
There can be additional or missing search results when using Date Received of a sample as one of the search criteria in Advanced Search when the client and server are in different time zones.
When a password reset request is made using an email address, multiple accounts that share that email address only receive a reset password email for one of the accounts.
Users with the Read-Only permission do not have access to Advanced Search.
The project name in the {ProjectName} token appears duplicated when there is more than one sample in a step.
On the Place Samples screen, the sample well position in the source container takes precedence in the placement order of the destination plate when placing samples from the Sample View List table. This placement order change only happens when samples are transferred from more than one source container. If the Sample View List table is sorted by a specific column, the order of the listed samples is ignored when they are placed in a destination plate.
The Config-slicer tool takes longer (approximately 8 times longer) to import a config slice into a Clarity LIMS v6.2 instance than into earlier versions (v6.1 and below).
Customers connecting remotely to the PostgreSQL database must make sure that their JDBC driver is compatible with the PostgreSQL version in use.
For on-premise deployments, Clarity LIMS v6.2.1 is qualified to run with the following server OS versions:
Oracle Linux v8.8 and v8.9 (64-bit)
RedHat Enterprise Linux v8.8 and v8.9 (64-bit)
Security vulnerabilities in PostgreSQL 15.x have been addressed. PostgreSQL 15.2–15.6 is at a low risk of security vulnerabilities for customers on Clarity LIMS v6.2.1.
PostgreSQL 15.2 and 15.6 are now qualified to run with Clarity LIMS v6.2.1.
The following information is a summary of the technical requirements for an on-premise or cloud based deployment of Clarity LIMS v6 and later. To install and use Clarity LIMS, the client and server systems must meet these requirements. Clarity LIMS is designed to run on standard commodity hardware. The requirements provide general guidelines for your hardware configuration. You can obtain specific configuration quotes from the hardware vendor of your choice.
Before installing Clarity LIMS, you must also organize, install, and/or configure some essential components. For details on these components and installation and configuration instructions, refer to the Pre-Installation Requirements.
Allow enough time for the procurement of your hardware and software. Make sure that all components are installed and configured before proceeding with the installation of Clarity LIMS.
For on-premise deployments, Clarity LIMS has two levels of recommended hardware. For larger labs in full production, we strongly suggest the high-throughput requirements.
The production server must be configured in US locale.
Recommended:
Server class 64-bit CPU with at least eight cores at 2.9 GHz
20 MB shared cache (L3) memory
32 GB RAM
6 GB allocated to Tomcat
6 GB allocated to the database
2 GB allocated to ElasticSearch
100 GB hard disk drive space for the operating system, application, and log storage
1 Gbps Ethernet network or faster
High-throughput:
Server class 64-bit CPU with at least 16 cores at 2.9 GHz
20 MB shared cache (L3) memory
64 GB RAM
8 GB allocated to Tomcat
8 GB allocated to the database
2 GB allocated to ElasticSearch
100 GB hard disk drive space for the operating system, application, and log storage
1 Gbps Ethernet network or faster
Memory requirements must be discussed at the beginning of the project, before ordering hardware.
The amount of hard disk drive space required is contingent on the frequency and amount of data generated in your lab. We recommend that you take inventory of all instruments that will be used with Clarity LIMS and calculate the amount of data generated for each of them.
To make sure that your data are protected, we recommend that your Clarity LIMS server contain redundant storage and that you perform regular backups.
For robust network performance, make sure that there are no bottlenecks lower than 100 Mbps on any connected network devices (routers, firewalls, switches). This is especially important when handling the large amounts of data produced by certain instruments.
The physical hardware specifications described are also valid for Virtual Machine (VM) environments. If you have questions about your VM architecture, contact Illumina Support.
For cloud hosted deployments, Illumina sizes the system accordingly for system load. We reserve the right to archive auditing information to maintain system performance as the data set grows.
If your subscription is not renewed for your cloud hosted deployment, at your request, we will supply you with an export of all user data. In practice, we will provide a database dump and details on the database schema for you to pull out any data you need going forward.
For on-premise deployments, Clarity LIMS has been qualified to run with the following server operating systems versions:
RedHat Enterprise Linux v8.8 and v8.9 (64-bit)
Oracle Linux v8.8 and 8.9 (64-bit)
SELinux is not supported and must be set to either permissive or disabled mode.
For cloud hosted deployments, Illumina uses the latest qualified Oracle Linux version.
For on-premise deployments, Clarity LIMS has been qualified to run with PostgreSQL 15.2 and 15.6.
For cloud hosted deployments, Illumina uses the latest qualified PostgreSQL version.
The following client requirements apply to both on-premise and cloud hosted deployments.
Hardware
64-bit processor (dual-core 3.0 GHz)
8 GB RAM
Operating Systems
Windows 10
Microsoft Surface Pro support is for all operations only when a mouse is used. Touch screen support is limited to read-only lab work; running samples through steps is not supported.
Linux (restricted to the server-supported versions listed previously)
macOS (12 Monterey or later)
iOS 15 on iPad running Safari browser
iPad support is for read-only lab work. Running samples through steps is not supported.
Web Browsers
Google Chrome (latest update)
Mozilla Firefox (latest update)
Apple Safari on iPad only (latest update)
Other Requirements
Screen resolution of 1280 x 800 or higher
Cookies and JavaScript must be enabled
For both on-premise and cloud hosted deployments, a 20 Mbps network connection speed from client to server is required. If remote access via VPN is needed for LDAP or instrument integrations, we recommend a 100 Mbps network connection speed between your site and the hosted instance.
The following requirements apply to automation workers installed on premise, for both on premise deployments and Illumina cloud hosted deployments, to support instrument integrations.
Hardware
64-bit processor
1 GB RAM
Hard disk drive space equivalent to twice the size of the largest file you are planning to transfer
Operating Systems
Windows 10
RedHat Enterprise Linux v8.8 and v8.9
Oracle Linux v8.8 and 8.9
Applications
Linux – Illumina installs Java Open JDK 8.0 update 362 (1.8.0_362)
Windows – Clients must install Java Open JDK 8.0 update 362 (1.8.0_362)
This section provides instructions for installing on-premise or cloud hosted deployments of Clarity LIMS v6.2. For assistance with installation steps, contact the Clarity LIMS Support team.
This document provides the steps required to install a new Clarity LIMS v6.2 instance to Oracle Linux/RedHat Enterprise Linux v8.8 and v8.9.
The installation procedure includes adding the Clarity LIMS repository, installing the Clarity LIMS RPM through yum commands, and configuring the installation through a series of configuration scripts.
Your system meets the requirements listed in Technical Requirements.
You have installed and configured the required components. For more information, see Pre-Installation Requirements.
You have a database user and two empty schemas on your database server. The schemas are populated during configuration.
You have received the appropriate repository files from the Clarity LIMS Support team.
All standard OS security updates have been applied.
All instances of Clarity LIMS must have a purchased SSL/TLS certificate installed. Purchase the certificate before installation or upgrade. For instructions on installing purchased SSL/TLS certificates, see Install a Purchased SSL/TLS Certificate.
With the Oracle Linux/RedHat Enterprise Linux server, the following error messages can display when you perform the yum commands used to install Clarity LIMS:
Failed loading plugin "product-id": No module named 'urllib3'
Failed loading plugin "subscription-manager": No module named 'urllib3'
Failed loading plugin "upload-profile": No module named 'urllib3'
These messages do not affect the installation of Clarity LIMS. You can resolve these error messages by running the following command:
Using scp/sftp, WinSCP, FileZilla, PSCP, or similar, copy the repository to the following location: /etc/yum.repos.d.
Test the repo file with this command:
Run the install command:
Type y to download and install the Clarity LIMS RPM core components.
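Taken together, these steps might look like the following; the repository file and package names are illustrative, so use the names provided by the Support team:

```
# Copy the repository definition supplied by the Support team
scp GLS_clarity.repo root@clarity-server:/etc/yum.repos.d/

# Confirm that yum can see the new repository
yum repolist

# Install the core components (package name is an assumption)
yum install ClarityLIMS-App
```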
NOTE: The installation of Clarity LIMS creates three operating system users:
glsai - Runs the Automation Worker node
glsftp - Accesses the SFTP file store
glsjboss - Runs the application server
These users are created by the RPM installation process and should not be created before starting the installation steps. The user home directories are created in /opt/gls/clarity/users.
The operating system passwords for each of the above users should be set by the root user.
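For example, as the root user:

```
passwd glsai
passwd glsftp
passwd glsjboss
```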
As the glsjboss user, change directory to /opt/gls/clarity/config/pending with the following command:
Run the first script listed sequentially in the directory listing with the following bash command:
This script configures the Secret Utility password management tool so that secrets and passwords are accessible. We recommend that you store application secrets in a vault. If that is not possible, the configuration script supports file-based storage. For more information about the prompts, see Guide to Secret Management.
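For illustration, the sequence might look like the following (exact script names can differ between releases):

```
su - glsjboss
cd /opt/gls/clarity/config/pending
ls                                             # configuration scripts are numbered; run them in ascending order
bash 05_configure_claritylims_secretutil.sh    # example: the Secret Utility configuration script
```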
Run the following script:
Run the next script to initialize the database and overwrite any existing data:
NOTE: This script requires that you enter the password for the glsftp user. Entering the password does not set the password for this user.
If your database server is standalone or remote, update the /opt/gls/clarity/tomcat/current/lib/activity-management-ui-config.groovy file with the following code snippet.
NOTE: Substitute the Remote DB IP and Remote DB Port placeholders with the information for your server.
If you do not update the script, log and database connection errors can occur.
Change to the root user, and then run the next script in the sequence to configure RabbitMQ:
As the root user, install the Apache proxy with the following script:
LabLink v2.4 is compatible with Clarity LIMS v6.2.
Before installing LabLink v2.4, make sure that a database named LabLink is created with the same database user as the Clarity LIMS database.
Turn off all Clarity LIMS services using the following command:
Install the LabLink RPM with the following yum command. Make sure that you have the correct repo enabled:
As the glsjboss user, run the pending initialization script using the following command:
Restart all Clarity LIMS services using the following command:
Make sure that LabLink is accessible at https://<your-Clarity-FQDN>/lablink
Clarity LIMS includes the run_clarity.sh script. This script starts (or stops) all Clarity LIMS services (Elasticsearch, RabbitMQ, Search Indexing, Tomcat, httpd/Apache proxy, Automation Worker) in the required order, with one command.
Run the following script as the root user:
If an error occurs starting any service, subsequent services will not be started. Stop all services before trying to start them again.
Start the system as follows.
Switch to the root user.
Make sure that no Clarity LIMS services are running.
Run the script with the following start command:
After the script has completed, all Clarity LIMS services should be ready for use.
If any services are running, the script exits and provides a list of services to stop. In this scenario, complete the following steps:
Use the script with the stop command to stop services.
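For reference, typical usage looks like the following (the script location is an assumption; locate it with find /opt/gls/clarity -name run_clarity.sh if needed):

```
bash /opt/gls/clarity/bin/run_clarity.sh stop    # stop all Clarity LIMS services
bash /opt/gls/clarity/bin/run_clarity.sh start   # start all services in the required order
```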
Open a supported browser window and make sure that you can access the Clarity LIMS client at the following URL: https://<your-Clarity-FQDN>/
Published: Aug 30, 2023
Last Updated: Aug 30, 2023
Latest Patch: 6.2.0
These Release Notes describe the key changes made to software components for Clarity LIMS v6.2.
Clarity LIMS v6.2 is deployable in both cloud hosted and on-premise environments.
The operating system (OS) changes from CentOS 7 to Oracle Linux v8.8.
Lab account management that allows you to create, update, and delete accounts in Clarity LIMS.
You can now set and update account custom field values. These configured fields are available in the new Account management screen when you view, create, or update an account.
A red dot alert that displays when a required field without a default value is added to Clarity LIMS. This alert shows up in LabLink next to the Configuration menu, in the drop-down list in the Custom Fields menu, and next to Custom Fields in the Configuration tab.
You can now configure client custom fields in LabLink through the Custom Fields Configuration page after they have been created through the Custom Fields Configuration page in Clarity LIMS. The Users page in LabLink shows the configured custom fields when creating, editing, viewing, and reviewing a user.
The Queue View page performance has been optimized.
Reviewed and addressed potential vulnerabilities for cross site scripting (XSS).
Fixed a bug where the upload of the sample submission spreadsheet with leading or trailing spaces in any of the header tags (eg, </TABLE HEADER>) resulted in import errors.
Fixed a bug where the LIMS ID (Process) in the Record Details step disappeared when you select No preset.
Fixed a bug where the custom term for Projects did not propagate to LabLink.
Fixed a bug where samples were not placed correctly when dragging and dropping from the plate GUI. This bug happened when a step was configured to have samples ordered by Column and placed by Row in the placement stage.
Fixed a bug where false errors occurred when the Config-slicer tool was used to import automation template files. In this update, the Config-slicer tool ignores any differences in the automation file XML tag, but logs a reminder in the configslicer-issues.log file. The file lists the automation names that users should check so that the automation files for those names are uploaded properly. The file also shows any warnings or errors logged.
Fixed a bug where the Config-slicer command-line interface (CLI) tool showed false negative reports with control types and reagent kits when the values were empty during the import or validation of the slice.
Fixed a bug that affected the UI for the File Attachment Method for the Demultiplexing step in Master Step and Step during the Configuration stage.
Updated UI and API application logs so that they contain only useful log information.
There is a bulk indexing error due to the memory space limitation for search-indexer when the indexing batch size is too large.
There can be additional or missing search results when using Date Received of a sample as one of the search criteria in Advanced Search when the client and server are in different time zones.
Multiple accounts that share the same email address only receive a reset password email for one of the accounts when a reset password request is made using Email information.
Clarity LIMS v6.2 supports both Python2 and Python3. However, no symbolic link is created from /usr/bin/python to Python2, and there is no default link. Scripts must explicitly reference either Python2 or Python3.
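For example, an administrator can point the unversioned python command at Python 3 with the alternatives tool (assuming this approach is acceptable in your environment), or update each script's shebang to a specific interpreter:

```
# Run as root; alternatively, change each script's shebang to /usr/bin/python2 or /usr/bin/python3
alternatives --set python /usr/bin/python3
```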
This section describes the installation steps required for installing Secret Utility and integration packages.
As of Clarity LIMS v5.4, the method used for managing passwords (secrets) for Clarity LIMS integration modules has changed.
The following diagram shows the installation steps required for installing Secret Utility and integration packages with Clarity LIMS v5.4 and later.
For details, see Compatibility of Releases with Integration Modules.
Install Clarity LIMS v6.2.
Install Clarity LIMS-App 6.2 and complete pending script.
Secret Utility (secretutil) is installed as part of Clarity LIMS-App 6.2 dependency, and the Secret Utility configuration is part of Clarity LIMS pending scripts.
Install Secret Utility.
In the automated installation tooling for Clarity LIMS v6.2, the installation and configuration of Secret Utility is included. No further action is necessary.
For manual installation:
Install Clarity LIMS-SecretUtil.
Configure Secret Utility by running the following script: /opt/gls/clarity/config/pending/05_configure_claritylims_secretutil.sh.
Check usage of custom API username.
Check whether a custom API username (eg, novaseq_user) is needed. The documentation for the integration package provides the requirements for the API username.
NOTE: In a typical installation, a default API username (apiuser) is used. It is not necessary to add the default apiuser username, because it is configured as part of Clarity LIMS v6.2 installation. If no custom API username is required, skip step 4 and proceed to step 5.
Configure custom API username.
If a custom API username is not required, proceed to step 5.
If a custom API username is required, configure the user/password with Secret Utility as follows. Substitute the values enclosed in double quotes with your own values, keeping the double quotes.
java -jar /opt/gls/clarity/tools/secretutil/secretutil.jar -n=INTEGRATION -u="password" "key"
Example:
java -jar /opt/gls/clarity/tools/secretutil/secretutil.jar -n=INTEGRATION -u="mypassword" "apiusers/novaseq_user"
For a custom API username, set the key to apiusers/{custom api username}.
Install integration package.
There is no change to the installation of the integration package. Follow existing installation instructions.
NOTE:
The new configuration script in the new integration package retrieves passwords directly from Secret Utility.
For IAP access token and BSSH access token, follow the existing setup guide. There is no change in the configuration step.
The core Clarity LIMS product includes the rename_claritylims_hostname.sh script, which allows you to change the hostname to which Clarity LIMS responds.
Clarity LIMS must be fully installed and configured. If it is not, the script instructs you to complete the installation.
The script stops all Clarity LIMS services. Make sure that all automation jobs are complete.
If you are not using a wild card SSL Certificate, purchase a certificate for the new hostname.
Update the hostname returned by the operating system to match the new name. Refer to for more information.
Running the renaming script requires root access.
The script does not update the Automation Worker installation. After you have completed the renaming steps, you must reconfigure all local and remote Automation Workers.
You might need to reconfigure additional services, such as the Reporting and Sequencing services.
We recommend that you back up the database before performing the following renaming steps.
The following table lists the settings changed by the rename_claritylims_hostname.sh script.

Property | Global | Description | Location |
---|---|---|---|
namingProviderHost | Yes | Configures appropriate endpoint for the Automation Worker | /opt/gls/clarity/tomcat/current/lib/activity-management-ui-config.groovy |
api.uri | Per tenant setting | Base URI used by integrations for API calls | Property Table |
ftp.host | Per tenant setting | Location of FTP host for this tenant* | Property Table |
ServerName | Per tenant setting | Server name reference in lookup database | Lookup Database |

*The ftp.host is only updated if it matches the previous IP address/hostname. This intended behavior accounts for the scenario in which the ftp host is on a different server.
Change the internal hostname
Before running the rename_claritylims_hostname.sh script, change the internal hostname for the instance - that is, /etc/hosts and related areas. There is no need to change any other LIMS-related components.
The new internal hostname will be used in the renaming process.
To verify the internal hostname, use the following command:
NOTE: If the hostname command does not return the correct new name, consult with your IT department to correct the name.
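For example, where the output should be the new fully qualified domain name:

```
hostname -f
# expected output: clarity.example.com (your new FQDN)
```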
Verify SSL Certificate path:
The script may prompt you for the SSL Certificate path. Be sure to have that ready.
Use the following command to change to the root user:
Navigate to the /opt/gls/clarity/config directory.
Run the rename_claritylims_hostname.sh script: bash rename_claritylims_hostname.sh
If prompted for your SSL Certificate path, enter this information.
The script prompts you to confirm that you have changed the internal hostname.
If you enter no, you will be prompted to manually change the hostname (output shown below).
If you enter yes, the script proceeds to modify Clarity LIMS-related components to use the new hostname.
When the renaming is complete, the script prompts you to restart Clarity LIMS by running the run_clarity.sh script:
When all Clarity LIMS services have restarted, make sure that the hostname has been changed successfully. Complete the following steps:
Connect to all system components.
Log in and test the LIMS user interface in your web browser.
These Release Notes describe the key changes made to software components for Clarity LIMS v6.2.
Released versions:
This section provides an overview of the components of Clarity LIMS. For technical requirements, refer to Technical Requirements.
Clarity LIMS is built on a platform that is easy to customize and supports off-the-shelf hardware, common database systems, and industry-standard data formats.
We offer both on-premise and Illumina cloud hosted deployment models. Both offerings use the same underlying software and security model as explained in the Clarity LIMS Security and Privacy technical note, available for download from the Illumina website.
For on-premise deployments, we provide flexibility in selecting tools that suit your needs and existing IT infrastructure. The Clarity LIMS environment includes:
Application server
Database
File server
Web client
Depending on the details of your contract with Illumina, the environment can also include:
Automation Workers
For cloud hosted deployments, the hardware is scaled upward by Illumina as required.
For on-premise deployments, you can separate application and file server functions from the database functions by installing them on discrete hardware platforms. Alternatively, you can combine them on a single server hardware platform. Base your decision on the size of your installation and how much data your laboratory processes.
Whichever solution you choose, Illumina requires that the Clarity LIMS server environment reside on dedicated hardware, free from other Illumina or third-party server products. The server environment must also be on a 1 Gbps or faster network. The network should not contain links with a capacity lower than 100 Mbps on any network-connected devices, such as routers, firewalls, and switches.
The Clarity LIMS installation installs Apache Tomcat 9.0, Apache Webserver v2 (used as a proxy), ElasticSearch and RabbitMQ to support search, and Open JDK 8.0. These software versions are the only versions that Clarity LIMS supports. The software packages are supplied by Illumina.
For cloud hosted deployments, Illumina fully manages the system deployment and maintenance.
Clarity LIMS uses a web application served by the Apache Tomcat server to manage the creation, collection, and retrieval of data and results. This core server is built on a Java architecture and allows for rapid deployment and custom configuration for other on-premise and Illumina cloud hosted deployments.
For both on-premise and Illumina cloud hosted deployments, the application requires the following standard secure ports for web and file communications:
Web communications: Port 80, 443
File communications: Port 22
Depending on included instrument integrations, additional open ports may be required.
Additionally, for cloud hosted deployments, a site-to-site VPN connection, using IPSEC, might be required for instrument integrations. The most current list of required open ports is included in the preinstallation documentation that is provided during an implementation project.
Clarity LIMS uses a PostgreSQL database to record data generated by the client and references to file locations on the Clarity LIMS file server.
If the system needs to export a file, it issues a call to the database to find the file location on the file server. We recommend that you store files on a server or file system separate from the Clarity LIMS application server, such as a Network Attached Storage (NAS) appliance.
When handling data, Clarity LIMS saves files in their original format. The advantages of saving files in their original format are as follows:
The size of files is restricted only by the size of your file system.
You benefit from the built-in error-correction and integrity-checking features included in the file system.
The amount of storage space required by Clarity LIMS depends on the following:
The number of samples your laboratory processes each day.
The instruments you use.
The number of files you save to the Clarity LIMS system.
Illumina will work with you to recommend an appropriate amount of storage space.
NOTE: We do not recommend that you place the Clarity LIMS file server on a remote mount to a Windows server. We recommend you discuss other network storage devices, such as high availability NAS, with your hardware and supported operating system vendors.
User-centric, goal-based design has become the new standard in software interfaces. Clarity LIMS has a clean, helpful, easy-to-use interface. It is a lightweight web application that provides:
A simple, fast, and efficient way for lab scientists to identify work they need to complete.
The tools necessary to complete and record that work quickly and efficiently.
The Clarity LIMS Automation Worker allows specifically designed scripts to automate and extend the functionality of Clarity LIMS. You can integrate a wide variety of laboratory instruments and software.
The Automation Worker runs as a Windows Service or as a Linux daemon. You can install the Automation Worker on any computer with a supported OS on your Clarity LIMS network. When you install multiple copies on different machines on your network, Clarity LIMS automatically distributes work across the machines to improve system performance.
NOTE: Only one Automation Worker node can be installed on a Windows server.
Mixpanel™ is a system that provides Illumina with information about how users interact with the Clarity LIMS web client. We do this by tracking which features are being used, and how often. Gathering this information allows us to determine which interactions are most common, and how our users proceed through protocol steps and tasks. We can then use this information to improve system performance and ultimately enhance the user experience.
All data are collected anonymously. We collect data on Clarity LIMS usage only. We do not collect specific sample names, projects, or values entered. We do track total usage (number of samples selected, how many protocol steps executed, and so on). Data are collected across all customers for analysis in one group. We do not directly track which site is doing which work. If you require more information about Mixpanel, contact the Illumina Support team.
All client traffic is encrypted over secure HTTP (HTTPS). To ensure the security of the transactions between Clarity LIMS and clients, on-premise deployments require a purchased certificate. The certificate should be from a well-known vendor such as DigiCert, Entrust, or QuoVadis. For information on the policies, processes, and controls enacted for security and privacy of data in cloud hosted deployments, see the Clarity LIMS Security and Privacy Technical Note, available for download from the Illumina website.
Refer to Guide to Secret Management for details on configuring Secret Utility.
This document describes the steps required to update the Clarity LIMS application configuration.
Two levels of user passwords are created in the Clarity LIMS system: one at the operating system level and one at the Clarity LIMS level.
Following are details on the user passwords, instructions for changing them, and instructions for updating Clarity LIMS with the new database connection details.
The following steps are only required if the passwords for glsftp and/or apiuser have been changed.
The user passwords created at the operating system level are for the glsai, glsjboss, and glsftp users.
glsai and glsjboss users:
These users have no configuration associated with them.
You may change their passwords at any time.
glsftp user:
After installation of Clarity LIMS, you can change this password. However, you must also update it in the file/vault secret store, using the Secret Management Util tool.
Change the glsftp user's password on the server.
Log in to the server as the root user.
Stop Clarity LIMS using the following command:
Go to /opt/gls/clarity/tools/secretutil.
Update the password in the secret store.
For vault-based secret storage, use either the Vault command line interface (CLI) or Vault user interface (UI) to update the password.
For file-based secret storage, use Secret Management Util to update the password as follows:
Start Clarity LIMS using the following command:
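A condensed sketch of this flow for file-based secret storage; the run_clarity.sh path and the secret namespace/key for glsftp are assumptions, so confirm the actual values in Guide to Secret Management:

```
# Illustrative flow only. Paths, namespace, and key are placeholders.
passwd glsftp                                           # set the new OS password
bash /opt/gls/clarity/bin/run_clarity.sh stop           # stop Clarity LIMS (path assumed)
cd /opt/gls/clarity/tools/secretutil
java -jar secretutil.jar -n=APP -u="newpassword" "ftp/glsftp"   # namespace and key are hypothetical
bash /opt/gls/clarity/bin/run_clarity.sh start          # start Clarity LIMS
```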
The user passwords created at the Clarity LIMS level are for the admin, facility, and apiuser users.
admin and facility users:
These users have no configuration associated with them.
You may change their passwords at any time.
apiuser user:
After installation of Clarity LIMS, you can change this password. However, you must also update it in the file/vault secret store, using the Secret Management Util.
Check for any remote Automation Workers, and take note of their locations in your network. You will need to restart these after changing the password.
Log in to the server as the root user.
Stop Clarity LIMS using the following command:
Go to /opt/gls/clarity/tools/secretutil.
Update the password in the secret store
For vault-based secret storage, use either the Vault command line interface (CLI) or Vault user interface (UI) to update the password.
For file-based secret storage, use Secret Management Util to update the password as follows:
Start Clarity LIMS using the following command:
In some circumstances (such as security breaches/compromises), the database connection details (eg, database password) are updated, which prevents Clarity LIMS from connecting to the database. You can correct this issue by updating Clarity LIMS with the new database connection details as follows.
Check for any remote Automation Workers.
Update the existing tenant with the new details.
Restart any Automation Workers.
To update database connection details:
Check for any remote Automation Workers, and take note of their locations in your network.
Log in to the server as the root user.
Stop Clarity LIMS using the following command:
Go to /opt/gls/clarity/tools/secretutil.
Update existing tenant with new details.
For vault-based secret storage, use either the Vault CLI or Vault UI to update the password.
For file-based secret storage, use the Secret Management Util to update the database password as the root user.
Start Clarity LIMS using the following command:
Before installing Clarity LIMS, you must purchase hardware and software that meet the minimum requirements (see Technical Overview). Following those purchases, there are several components that you must organize, install, or configure.
The following sections discuss these components, and how to install and configure them. These sections apply to on-premise customers only. Before completing the steps described, make sure that the server has the minimum requirements. See Technical Requirements for details.
Before the Clarity LIMS support team can install Clarity LIMS, the items listed above must be set up and configured as described in this document. Confirm the completion of this work with the support team.
All instances of Clarity LIMS must have a purchased SSL / TLS certificate installed.
Certificate Authorities will no longer issue SSL / TLS certificates for internal server names. As a result, to obtain a certificate you must have a valid, public DNS entry for your server.
Before installing or upgrading Clarity LIMS, do the following:
Purchase an SSL / TLS certificate.
Save the certificate files on the server on which the Clarity LIMS server is installed.
Provide the Clarity LIMS Support team with the private key and password for the SSL / TLS certificate.
For instructions on obtaining a certificate, see Install a Purchased SSL/TLS Certificate.
Security-Enhanced Linux (SELinux) is not supported for use with Clarity LIMS. Make sure that SELinux is set to either permissive or disabled mode.
For instructions, see the following sections of the Red Hat documentation:
5.4.1.2 Permissive Mode
5.4.2 Disabling SELinux
You can find additional documentation on users at /opt/gls/clarity/documentation/users/
Clarity LIMS is installed using industry standard RPM packaging. The Clarity LIMS support team requires root credentials to the server during the installation process.
The following sections discuss the system user accounts that the support team sets up during the installation process. It is important that you do not change these system users.
The production server must be configured in US locale.
After installing a supported database, Clarity LIMS requires certain changes to the default database configuration.
Additional tablespace names and user profiles may be needed, depending on the configuration of your system.
For more information or for assistance with your database configuration, contact the Clarity LIMS Support team.
To access the Clarity LIMS server via DNS, make sure that the following apply:
The server local host file /etc/hosts does not contain an entry for that hostname bound to its loopback address.
Any hostname entries correspond to their entries in DNS.
The command hostname -f must return the fully-qualified domain name of the server.
For client systems:
Users should use the fully-qualified domain name (FQDN) when logging on to the system. Using the FQDN ensures persistence of the session ID.
Clarity LIMS requires that the environment variable TZ be set on the Clarity LIMS server to your correct timezone. If the value is not configured, Clarity LIMS configures a default of GMT in the file /etc/profile.d/clarity.sh.
This file might be overwritten on upgrade, so any changes must be manually reapplied after each upgrade.
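For example, the following line in /etc/profile.d/clarity.sh sets the timezone (the value shown is illustrative):

```
export TZ="America/New_York"
```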
To allow proper system communication, the following ports on the Clarity LIMS server must be accessible by the LIMS clients:
TCP/IP Port 22 (SFTP) for file transfers between the client and server
TCP/IP Port 443 (HTTPS) for Apache proxy
TCP/IP Port 80 (HTTP) used to forward any unknown unsecured requests over SSL / TLS and port 443
The following ports are required on the local Clarity LIMS server:
TCP/IP Port 4369 for Epmd for RabbitMQ
TCP/IP Port 5432 for PostgreSQL database communications *
TCP/IP Port 9009 for Tomcat
TCP/IP Port 9200 for Elastic Search
TCP/IP Port 9300 for Elastic Search
TCP/IP Port 5672 for RabbitMQ
TCP/IP Port 15672 for RabbitMQ
The database ports are configurable and might be different in your organization.
Computers running an automation worker must be able to reach the Clarity LIMS server via the following ports:
TCP/IP Port 22 (SFTP) for file transfers between the client and server
TCP/IP Port 443 (HTTPS) for Apache proxy
TCP/IP Port 80 (HTTP) used to forward any unknown, unsecured requests over SSL / TLS and port 443
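If the server uses firewalld (the default on the supported OS versions), the client-facing ports can be opened as follows; adjust the commands to your own firewall tooling:

```
firewall-cmd --permanent --add-port=22/tcp
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --reload
```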
To facilitate instrument integrations, a site-to-site IPSEC VPN connection can be set up between your facility and the hosted instance.
There are two ports that must be opened: 4500/udp and 500/udp.
If a VPN is required, you must provide more detailed setup information to the Clarity LIMS Support team. Upon request, the Clarity LIMS Support team will provide the additional form required to do this.
Clarity LIMS uses an Apache proxy and the Clarity LIMS installation process installs and configures it automatically. If the server already has an Apache proxy installed and configured, the installation process overwrites the current configuration. If that configuration is important, you must back it up before running the Clarity LIMS installation process. Any settings that are important to your organization must be reconfigured manually after an install or upgrade of Clarity LIMS.
In Clarity LIMS v6.0.0 and later, you can choose to install and configure a HashiCorp Vault to store Clarity LIMS-related passwords and secrets safely.
For more information, refer to Configure Your HashiCorp Vault.
This section describes the steps for removing projects, samples, workflows, protocols, steps, and other artifacts that were created during training but are not needed for production.
After initial user training, but before the lab starts to use Clarity LIMS in production, a database cleanup is recommended.
This process removes any projects, samples, workflows, protocols, steps, and other artifacts from the system that were created during training but are not needed for production.
Contact the Clarity LIMS Support team to schedule a time for this cleanup procedure.
In preparation for the clean-up, complete the following steps:
Delete all unwanted custom fields and master steps.
Set the status of all unwanted workflows to Archived. If there is an Archived workflow that you would like to keep, temporarily set its status to Active. Note the following:
Protocols that are part of an Archived workflow, or set of Archived workflows, will be deleted.
Protocols that belong to an Active or Pending workflow, in addition to an Archived workflow, will not be deleted.
After these steps are completed, a Clarity LIMS Technical Support Analyst (TSA) helps to perform the cleanup procedure. This procedure takes approximately 15–20 minutes to complete.
This section explains how to install purchased SSL/TLS certificates into Clarity LIMS v5 and later.
Clarity LIMS can work with Named or WildCard certificates.
Typically, the process to install the certificates into Clarity LIMS is as follows:
Request a certificate from your IT organization, or purchase a certificate from a third-party SSL/TLS vendor.
Install the certificate using the script installCertificates.sh provided with Clarity LIMS. This script prompts for the required inputs and helps you to configure Clarity LIMS to use your SSL/TLS certificate.
Some IT organizations have preexisting certificates issued by an internal organization, typically referred to as an 'internal CA.' These internal CA certificates are not fully compatible with Java, and prevent the automation worker—and all integrations—from properly communicating with the Clarity LIMS server. Internal CA certificates are therefore not supported in Clarity LIMS.
You will need your organization or the third-party SSL/TLS vendor to provide you with the following:
An Apache 2.4-compatible SSL/TLS Certificate
The Certificate private key
The corresponding certificate chain, properly prepared for Apache 2.4. This component may not be required, depending on the organization that signs your certificate.
Your IT organization might provide you with a WildCard certificate. Clarity LIMS can use WildCard certificates, as long as the Apache 2.4-compatible certificate, private key, and certificate chain files are provided.
If purchasing from a third-party vendor, make sure that the vendor provides you with an Apache 2.4-compatible bundle that includes the components listed above. To purchase from a vendor, refer to their documentation.
By default, a private key has a password associated with it. On startup, Apache requests a passphrase to access the private key. You can use either of the following methods to resolve this issue:
Method 1 — Place a passphrase file on the system and reference it in your clarity.conf file.
Create a passphrase file in a directory that has read, write, and execute permissions for only the root or apache user.
Edit the clarity.conf file. The clarity.conf file is in the /etc/httpd/conf.d directory.
Add the following line to your clarity.conf file, before the section:
Method 2: Removing passphrase from an OpenSSL key
Removing the passphrase from an OpenSSL key is a security risk. Only remove the passphrase if you know that this risk is acceptable.
Remove the password from an OpenSSL key using the following command:
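A typical invocation looks like the following (file names are illustrative); the command prompts for the existing passphrase and writes an unencrypted copy of the key:

```
openssl rsa -in clarity.key -out clarity-nopass.key
chmod 600 clarity-nopass.key   # keep the unencrypted key readable by root only
```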
Assumptions and Prerequisites
You have installed BaseSpace Clarity LIMS and run the 40_install_proxy.sh script.
You have OpenSSL (installed by default on the Clarity LIMS Linux server when you install Clarity LIMS). OpenSSL is used by the installCertificates.sh script.
You have the files listed in the following table (obtained from the process described previously) available on the Clarity LIMS server. In the example shown below, these files are located at /tmp/certs.
On the Clarity LIMS server, as the root user, run the installCertificates.sh script:
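A minimal sketch, assuming the script resides under /opt/gls/clarity/config and the certificate files are under /tmp/certs as described above:

```
cd /opt/gls/clarity/config        # assumed script location
bash installCertificates.sh       # prompts for the certificate, key, and chain file paths
```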
The following steps use the HashiCorp Vault user interface (UI) to guide you through the configuration of your HashiCorp Vault instance.
These configurations are mandatory for on premise Clarity LIMS deployments. For Illumina cloud hosted deployments, this configuration is completed by Illumina.
Detailed information and instructions for HashiCorp Vault are available on the HashiCorp website: www.hashicorp.com.
You are planning to install Clarity LIMS v6.0.0 or newer.
You have installed the latest version of either HashiCorp Vault Open Source or Enterprise.
You have read the Getting Started tutorials for Vault on the HashiCorp website and/or possess a basic knowledge of HashiCorp Vault.
You have system administrator permissions to perform the necessary operations to your HashiCorp Vault instance.
You have allowed access on port 443 from the Clarity LIMS instance to your HashiCorp Vault instance.
You have access to all the passwords required to be configured in your HashiCorp Vault instance.
To enable a new KV Secret Engine, refer to the Versioned Key/Value Secrets Engine tutorial provided on the HashiCorp Vault website.
The following table lists the secrets required for Clarity LIMS. To use the paths shown in the table, replace $host with your fully qualified domain name (FQDN).
When configuration is complete, these secrets are listed in the Vault user interface.
AppRole is the recommended authentication method to use with the Clarity LIMS Secret Utility tool.
To enable the AppRole authentication method, refer to the AppRole Pull Authentication tutorial provided on the HashiCorp Vault website.
When AppRole is enabled, create an AppRole with the appropriate Access Control List (ACL) policy (see the following section).
Make a note of the Role ID and Secret ID. You need these IDs when configuring Secret Utility.
Secret Utility does not manage your Role ID and Secret ID for you (eg, renewing, revoking, and so on). It accepts the Role ID and Secret ID as-is, and attempts to authenticate with Vault.
Alternatively, the Clarity LIMS Secret Utility tool also works with the token authentication method.
To learn more about tokens, see the Tokens documentation on the HashiCorp Vault website.
Secret Utility does not manage your tokens for you (eg, renewing, revoking, and so on). It accepts the token as-is, and attempts to authenticate with Vault.
After enabling the AppRole authentication method, create ACL policies to access the Secret Engine.
IMPORTANT: Replace "claritylims" with your Secret Engine path.
You might need to update or create additional ACL policies for your System Administrator to rotate the credentials, when required.
To create the ACL policy, refer to the Vault Policies tutorial provided on the HashiCorp Vault website.
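As a minimal sketch using the Vault CLI, assuming a KV v2 engine mounted at claritylims and a read-only policy (the policy name and path are illustrative):

```
vault policy write claritylims-read - <<'EOF'
# Replace "claritylims" with your Secret Engine path
path "claritylims/data/*" {
  capabilities = ["read"]
}
EOF
```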
SSH into the Clarity LIMS instance.
After installing Clarity LIMS and configuring Secret Utility using the instructions provided in Guide to Secret Management, run the command to read an existing password/secret from HashiCorp Vault.
If you use, or would like to use, an LDAP server to consolidate directory services, it is possible to integrate LDAP with Clarity LIMS.
The Clarity LIMS LDAP solution allows for the following features:
User name and password authentication against LDAP to govern access to Clarity LIMS.
Ongoing unidirectional synchronization of user information (such as first name, last name, title, phone, fax, and email) from LDAP to Clarity LIMS. For example, if your telephone number is changed in the LDAP directory, the information is pushed down to Clarity LIMS, keeping contact information current.
Automated unidirectional provisioning of user accounts from LDAP to Clarity LIMS. For example, adding a user to a particular group within the LDAP directory automatically results in a local account with LDAP authentication being added to Clarity LIMS.
Our Field Application Specialist (FAS) team meets with you to discuss the current LDAP implementation. In preparation for this meeting, collect the following information:
The type of provisioning you would like to use to synchronize Clarity LIMS with LDAP (automatic or manual).
A list of the LDAP attributes the current system uses to record the following user properties: first name, last name, title, phone number, fax number, and email address.
NOTE: When integrating Clarity LIMS with LDAP, the LIMS database and the LDAP directory remain as separate and distinct entities.
Clarity LIMS is tested with the following LDAP servers:
ApacheDS 1.5 and later
Microsoft Active Directory (Windows Server 2003 or later)
OpenLDAP 2.3.35 and later
While user provisioning and authentication are handled with LDAP, a Clarity LIMS system administrator completes the following steps:
Determine the level of access that a user requires.
Modify the user's account within the LIMS to provide that access.
Once an LDAP integration with Clarity LIMS is established, all changes to user profiles must be made from the LDAP server.
Only automatic user provisioning is available.
With automatic user provisioning, Clarity LIMS users are created automatically by a provisioning tool that periodically synchronizes the LDAP server with the LIMS.
To make use of the LDAP directory services, Clarity LIMS maps to specific LDAP attributes within a defined schema.
However, the directory structure used can vary among installations. Our Field Application Specialist (FAS) team works with you to complete the following items:
Analyze a specific LDAP solution and directory organization or assist with the selection and initial configuration of an LDAP service.
Discuss the user elements that will be synchronized between the LDAP service and Clarity LIMS systems.
Configure LDAP to connect to your Clarity LIMS systems.
User authentication is handled in Clarity LIMS.
In previous versions of Clarity LIMS, a few customers reported slow response times for the REST API when using LDAP users for authentication. As of Clarity LIMS v5.2.x / v4.3.x, REST API response time has been improved by a new feature that caches user authentication results through a new property (api.session.timeout).
To make use of the new feature, do the following actions:
Make sure that the api.session.timeout property is set.
Include the HTTP Connection & Authorization request headers and session cookie in the HTTP request.
Stored in the Clarity LIMS database, the api.session.timeout property allows you to specify the period of time for which a user's session persists after they have been authenticated.
This property is set during installation or upgrade of the LIMS. The default value is 5 minutes. If necessary, update the value using the omxprops-ConfigTool.jar tool at the following location:
For example:
For this configuration to take effect, stop and restart Tomcat:
To persist user authentication, the HTTP request must contain the following HTTP request headers:
Request Header
Connection: Keep-Alive
Authorization: Basic <credentials>
The HTTP request headers are required for the initial request, and for any subsequent request to get a valid JSESSIONID. Additional scenarios are described in the following table.
To make sure that a valid authenticated session is provided if the cookie in the request has expired, also provide the following JSESSIONID cookie:
Cookie
JSESSIONID=<a valid JSESSIONID from the initial request>
The following table lists the various combinations of HTTP Authorization request header and JSESSIONID cookie and their expected result. It assumes that the HTTP Connection request header is provided for all scenarios.
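For illustration, the following curl requests show the pattern (server URL and credentials are placeholders). The first request authenticates with Basic credentials and captures the JSESSIONID cookie; the second reuses that cookie so the cached session is used:

```
# Initial request: Basic auth + Keep-Alive, save cookies (including JSESSIONID)
curl -s -H "Connection: Keep-Alive" -u apiuser:password \
     -c cookies.txt https://clarity.example.com/api/v2/projects

# Subsequent request: send the saved JSESSIONID along with the same headers
curl -s -H "Connection: Keep-Alive" -u apiuser:password \
     -b cookies.txt https://clarity.example.com/api/v2/projects
```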
This section provides instructions for upgrading existing on premise Clarity LIMS deployments to cloud hosted deployments. For assistance with upgrade steps, contact the Clarity LIMS Support team.
This section provides the steps required to upgrade an existing on-premise deployment of Clarity LIMS to a RedHat Enterprise Linux/Oracle Linux compatible Illumina cloud hosted deployment of Clarity LIMS v6.2.
For installation requirements, see Technical Requirements.
The following table shows the applicable migration paths.
From | To | Notes |
---|---|---|
Before Illumina can proceed with the upgrade, complete the following prerequisite steps.
Illumina provisions an instance installed with the latest qualified Oracle Linux version in the cloud.
Upgrades are only supported from Clarity LIMS versions 4.2, 4.3, 5.0, 5.1, 5.2, 6.0, and 6.1 (on-premise).
Custom configurations: If you have made any additional configurations that are not part of the Clarity LIMS preinstallation requirements, apply these configurations to the new instance.
Passwords: Configure all passwords to be the same as the existing instance. After you have verified the new instance, you can change passwords as needed.
Make sure that all user accounts have email addresses associated with them. User passwords must be reset after the upgrade is complete.
To assist with validating the system before an upgrade, install the UpgradePreValidation RPM on the source server.
This RPM is installed temporarily, and provides tools to help check the system before an upgrade.
If validation is successful, you can remove this RPM and proceed with the upgrade.
Install the UpgradePreValidation RPM. Make sure you have the correct repo enabled.
On the source server, as the root user, run the following command:
[Optional] Set up Secret Utility.
If ClarityLIMS-SecretUtil was installed previously, run the following command to set up Secret Utility as the glsjboss user:
NOTE: Using a vault is the safer way of storing application secrets. If using a vault is not possible, the configuration script supports file-based storage.
For more information on the prompts, see the Configuration Script section of Guide to Secret Management.
Run the validation script as follows.
Make sure that the Clarity LIMS server is running.
As the root user, run the following command:
Review the output of the script to determine if you can proceed with the upgrade. If the script outlines any issues with the potential upgrade, review the generated log files and contact the Clarity LIMS Support team for further assistance.
Remove the PreValidation RPM.
Remove the PreValidation RPM only after you confirm that you can upgrade. If you are unsure, consult the Clarity LIMS Support team.
As the root user, run the following command:
Archive the backup in case a rollback is required.
Before performing the backup, stop Clarity LIMS. The following command stops all Clarity LIMS components, including Automation Worker and integration services.
Stop Clarity LIMS:
On the command-line interface, run the following command as the root user:
Back up the PostgreSQL database:
On the PostgreSQL server, the best practice is to back up the database using the pg_dump utility.
The following example assumes that:
The database server and the application server are on the same server.
The pg_dump utility is accessible to the glsjboss user.
Example
The Postgres DBA uses the following commands to create a database backup in the glsjboss home directory. Substitute the variables as appropriate for the specific environment.
As the glsjboss user:
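An illustrative backup command (the database name, user, and output file are placeholders):

```
pg_dump -U clarity -Fc -f ~/clarityDB_backup.dump clarityDB
```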
Make sure that the following items, and any other files and configurations, are backed up safely:
crontab -l
custom scripts
OS configuration files
firewall rules
network configuration
etc.
If there are custom changes to any application configurations (to increase performance, security, etc.), restore/configure these items manually later by referencing the backup.
We recommend that you back up the items into a single zip file and transfer them to the new Cloud instance.
Directories
/opt/gls/clarity/users/glsftp or /home/glsftp (Clarity LIMS file store location)
/opt/gls/clarity/customextensions
/opt/gls/clarity/glscontents
Additional configurations:
rpm -qa | grep "BaseSpace\|Clarity" > clarityrpms.txt
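For example, the items can be bundled into a single archive for transfer to the new instance (paths and file names are illustrative):

```
crontab -l > crontab_backup.txt
rpm -qa | grep "BaseSpace\|Clarity" > clarityrpms.txt
tar -czf clarity_migration_backup.tar.gz \
    crontab_backup.txt clarityrpms.txt \
    /opt/gls/clarity/customextensions \
    /opt/gls/clarity/glscontents
```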
Secret Utility (secretutil) is a password management tool used to store, manage, and retrieve passwords. Secret Utility returns the passwords in plain text.
The following sections describe the configuration of Secret Utility, which is installed as part of the Clarity LIMS-SecretUtil RPM.
Integration Package | Clarity LIMS Version | Secret Util Mode | Installation Steps |
---|---|---|---|
If Secret Utility has not been configured, the 05_configure_claritylims_secretutil.sh script is created in the /opt/gls/clarity/config/pending folder.
To reconfigure Secret Utility:
Remove the hidden file /opt/gls/clarity/tools/secretutil/.configured
Run the Secret Utility configuration script as follows: /opt/gls/clarity/config/configure_claritylims_secretutil.sh
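As shell commands, these two steps are:

```
rm /opt/gls/clarity/tools/secretutil/.configured
bash /opt/gls/clarity/config/configure_claritylims_secretutil.sh
```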
The following table describes the entries prompted by the configure_claritylims_secretutil.sh script.
Prompts | Default | Description |
---|---|---|
Enter required value for Secret Utility Mode. | vault | Configure the mode for Secret Utility. Possible values: vault, file |
Enter required value for Clarity Tenant Hostname. | localhost | Vault Mode only. Configure the Tenant hostname to be used as part of the vault path. |
Enter required value for Vault Engine Path. | secret | Vault Mode only. Configure the secret engine path. |
Enter required value for Vault URI. | | Vault Mode only. Configure the Vault Server target. |
Vault Enterprise (Y/N) | N | Vault Mode only. Configure whether the Vault Server is an enterprise version. |
Enter required value for Vault Namespace. | | Vault Enterprise only. Configure the Vault namespace. |
Enter required value for Vault Authentication Mode. | | Vault Mode only. Configure the authentication method. Possible values: token, approle |
Enter required value for Vault Token. | | Token Authentication only. |
Enter required value for Vault AppRole Role-Id. | | AppRole Authentication only. Configure the AppRole role-id to use. Refer to Role ID noted during HashiCorp Vault configuration. (See #approle) |
Enter required value for Vault AppRole Secret-Id. | | AppRole Authentication only. Configure the AppRole secret-id to use. Refer to Secret ID noted during HashiCorp Vault configuration. (See #approle) |
Enter required value for app.ftp.password, app.ldap.managerPass, app.rabbitmq.password, db.tenant.password, db.clarity.password, db.lablink.password, db.reporting.password | | File Mode only. Sets the secrets (encrypted with the CLARITYSECRET_ENCRYPTION_KEY environment variable) into conf/secret.properties. |
Enter required value for Username for API user | apiuser | File Mode only. Sets the username of the API user to be used when applications require an API user. |
Enter required password for API user | | File Mode only. Sets the password for the API user configured. |
If Secret Utility is configured as Vault Mode, the passwords are stored and retrieved from Vault Enterprise.
To use Secret Utility and perform the following steps, you must first remote into the instance.
To use the Vault user interface (UI) and perform the following steps, you must have the appropriate role and access control list (ACL) policies.
If Secret Utility is configured as File mode, the passwords are encrypted and stored in /opt/gls/clarity/tools/secretutil/conf/secrets.properties. Encryption is based on the CLARITYSECRET_ENCRYPTION_KEY environment variable.
To manage the passwords and perform the following steps, you must first remote into the instance.
This section provides instructions for upgrading an existing on-premise Clarity LIMS deployment. For assistance with upgrade steps, contact the Clarity LIMS Support team.
This section provides the steps required to upgrade an existing on-premise deployment of Clarity LIMS to a RedHat Enterprise Linux/Oracle Linux compatible on-premise deployment of Clarity LIMS v6.2.
The installation procedure includes provisioning and configuring the new instance, and installing and then verifying the new Clarity LIMS version.
For installation requirements, see Technical Requirements.
If you have questions about the upgrade procedure, contact the Clarity LIMS Support team.
The following table shows the applicable migration paths.
From | To | Notes |
---|---|---|
PostgreSQL: v4.2 On-premise, v5.0 On-premise, v5.1 On-premise | v6.2 On-premise | Upgrading from CentOS6 to Oracle Linux v8.8 and v8.9. |
PostgreSQL: v4.3 On-premise, v5.2 On-premise, v6.0 On-premise, v6.1 On-premise | v6.2 On-premise | Upgrading from CentOS7 to Oracle Linux v8.8 and v8.9. |
Oracle: v4.3 On-premise, v5.2 On-premise | v6.2 On-premise | Migrating from Oracle to PostgreSQL. v6.2 supports PostgreSQL only. |
Before Illumina can proceed with the upgrade, complete the following prerequisite steps.
We recommend you provision an instance with similar or higher specifications to the current Clarity LIMS instance.
Note the following:
Your system must meet the requirements listed in Technical Requirements
All standard operating system (OS) security updates must have been applied.
Upgrades are only supported from Clarity LIMS v4.2/5.0/5.1/6.0/6.1, and v4.3/5.2.0 (Oracle).
The command hostname -f must resolve to the fully qualified domain name (FQDN) of the server. For details, see the #h_3725fb56-8faa-4c75-b76f-53493f5e9636 section of Pre-Installation Requirements.
Before installing Clarity LIMS on the new instance, make sure that the instance has the same FQDN as the existing production instance. If your new instance cannot have the same FQDN as the production instance, contact the Illumina Support team.
To configure the new instance, follow the instructions provided in the Pre-Installation Requirements and see also Technical Overview.
Custom configurations: If you have made any additional configurations that are not part of the Clarity LIMS pre-installation requirements, apply these configurations to the new instance.
Passwords: Configure all passwords to be the same as on the existing instance. After you have verified the new instance, you can change passwords as needed.
Make sure that all user accounts have email addresses associated with them. Users must reset their passwords after the upgrade is complete.
To assist with validating the system before an upgrade, install the UpgradePreValidation RPM on the source server.
This RPM is installed temporarily, and provides tools to help check the system before an upgrade.
If validation is successful, you can remove this RPM and proceed with the upgrade.
Install the UpgradePreValidation RPM. Make sure you have the correct repo enabled.
On the source server, as the root user, run the following command:
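The exact install command is not shown here. A minimal sketch using yum; the package name is an assumption, so confirm the exact name against the enabled Clarity repository:
# Hypothetical package name; verify before running
yum install ClarityLIMS-UpgradePreValidation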
[Optional] Set up Secret Utility.
If ClarityLIMS-SecretUtil was installed previously, run the following command to set up Secret Utility as the glsjboss user:
NOTE: Using a vault is the safer way of storing application secrets. If using a vault is not possible, the configuration script supports file-based storage.
For more information on the prompts, see #claritylimsv6.2guidetosecretmanagement-configurationscript section of Guide to Secret Management.
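A sketch of the setup, assuming the configuration script referenced elsewhere in this guide under /opt/gls/clarity/config is used:
# Run as the glsjboss user
bash /opt/gls/clarity/config/configure_claritylims_secretutil.sh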
Run the validation script as follows.
Make sure that the Clarity LIMS server is running.
As the root user, run the following command:
Review the output of the script to determine if you can proceed with the upgrade. If the script outlines any issues with the potential upgrade, review the generated log files and contact the Clarity LIMS Support team for further assistance.
Remove the PreValidation RPM.
Remove the PreValidation RPM only after you confirm that you can upgrade. If you are unsure, consult the Clarity LIMS Support team.
As the root user, run the following command:
Archive the backup in case a rollback is required.
Before performing the backup, stop Clarity LIMS. The following command stops all Clarity LIMS components, including Automation Worker and integration services.
Stop Clarity LIMS:
On the command-line interface, run the following command as the root user:
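The stop command is the same run_clarity.sh invocation shown later in this procedure:
/opt/gls/clarity/bin/run_clarity.sh stop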
Back up the PostgreSQL database:
On the PostgreSQL server, the recommended practice is to back up the database using the pg_dump utility.
The following example assumes that:
The database server and the application server are hosted on the same machine.
The pg_dump utility is accessible to the glsjboss user.
Example
The Postgres DBA uses the following commands to create a database backup in the glsjboss home directory. Substitute the variables as appropriate for the specific environment.
As the glsjboss user:
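A minimal sketch of the backup, assuming a database named clarityDB owned by the clarity user; substitute your own database name, user, and file name:
# Run as glsjboss; writes a plain-SQL dump to the glsjboss home directory
pg_dump -U clarity clarityDB > ~/clarityDB_backup.sql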
Make sure that the following items, and any other files and configurations, are backed up safely:
crontab -l
custom scripts
OS configuration files
firewall rules
network configuration
etc.
If there are custom changes to any application configurations (to increase performance, security, etc.), restore/configure these items manually later by referencing the backup.
We recommend that you back up the items into a single zip file and transfer them to the new instance.
Directories
/opt/gls/clarity/users/glsftp or /home/glsftp (Clarity LIMS file store location)
/opt/gls/clarity/customextensions
/opt/gls/clarity/glscontents
/etc/httpd/conf.d
/etc/httpd/sslcertificate
Files
.pgpass
/opt/gls/pgsql/9.x/pg_hba.conf
/opt/gls/pgsql/9.x/postgresql.conf
Additional configurations:
rpm -qa | grep "BaseSpace\|Clarity" > clarityrpms.txt
Make sure that the repository file exists in the following location:
As the root user, install the RPMs required on the new instance by referencing the content of clarityrpms.txt.
If you are upgrading from Clarity LIMS v4.x, some RPMs (eg, Server RPM) are now included under other RPMs.
Install any other required RPMs (eg, Python packages) which are not part of the Clarity LIMS setup.
Do not install NGS, Illumina Preset Protocols (IPP), and Sequencing RPMs during this step. You install these RPMs later in the installation process.
Configure and validate the new system, following the procedure outlined in the Installation Procedure
When prompted for user passwords, enter the passwords used in the previous instance.
Stop Clarity LIMS. To do so, on the command-line interface, run the following command as the root user:
/opt/gls/clarity/bin/run_clarity.sh stop
The following steps are only required if you are restoring from a previous instance. If you are installing on a testing environment, proceed to #claritylimsv6.2upgradeprocedureforonpremisetoonpremisefrom4.2-4.3-5.0-5.1-5.2-6.0-6.1-4.3-5.2-oracle-28 section.
LabLink v2.4 is compatible with Clarity LIMS v6.2.
If upgrading from Clarity LIMS v4.x, Illumina migrates your LabLink-related data with the following exceptions:
sample submission templates
customized UI CSS
lablink property table configurations
Before completing the following steps, make sure that a database named lablink is created with the same database user as the Clarity LIMS database.
Install the LabLink RPM. Make sure that you have the correct repo enabled.
On the new instance, as the root user, run the following command:
Run the pending initialization script.
As the glsjboss user, run the following command:
The script prompts for a Google reCAPTCHA URL, site key, and secret key.
Google reCAPTCHA URL: https://www.google.com/recaptcha/
Google reCAPTCHA site key and secret key: View these keys from the Google reCAPTCHA Admin Console, under Settings.
NOTE: If you prefer not to use reCAPTCHA, leave the site-key and secret-key fields blank when running the configuration script. LabLink does not display the reCAPTCHA when these fields are left blank. You can also use your own reCAPTCHA accounts when configuring LabLink.
To reconfigure LabLink (without initializing the database), run the following command as the glsjboss user:
bash /opt/gls/clarity/config/configure_lablink.sh
Extract the backup zip file into a suitable location, e.g. /tmp/restore.
NOTE: If using Oracle database, skip the following steps 2 and 3. Contact the Clarity LIMS Support team for assistance in performing the migration from Oracle to PostgreSQL.
If using a PostgreSQL database, the DBA imports the database dump into the new database instance. Dropping and recreating the database might be necessary. If you need to do this, use the following commands:
dropdb -U postgres <OL8DB>
createdb -U postgres <OL8DB>
psql -U postgres -d <OL8DB> -c 'ALTER DATABASE "<OL8DB>" OWNER TO "<OL8User>"'
Restore the database exported from the previous instance.
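The restore command is not shown above. A sketch, assuming the backup is the plain-SQL pg_dump file created earlier and was extracted to /tmp/restore:
psql -U postgres -d <OL8DB> -f /tmp/restore/clarityDB_backup.sql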
Extract and restore the following directories to the same directory on the new instance:
Clarity LIMS file store: /opt/gls/clarity/users/glsftp or /home/glsftp
/opt/gls/clarity/customextensions
/opt/gls/clarity/glscontents
/etc/httpd/conf.d
NOTE: As of RedHat Enterprise Linux/Oracle Linux 8.8, Apache v2.4 is installed. There are several configuration changes in this version. You can use the new configuration, or cautiously merge your previous configuration file into the new configuration file. For details on the changes, refer to: httpd.apache.org.
Copy the SSL certificate files to the following location (create the directory if it does not exist):
/etc/httpd/sslcertificate
To configure the certificates, run the following command on the command-line interface, as the root user:
/opt/gls/clarity/config/installCertificates.sh
For more information, see Install a Purchased SSL/TLS Certificate
Restore files and configurations.
Copy any custom scripts into their folder locations.
For a PostgreSQL database, copy or merge the database configuration files, i.e., pg_hba.conf and postgresql.conf.
Restore crontab from file.
Copy / Merge any additional application configurations.
This step is required only if the new instance hostname is different from the old instance hostname.
For details, see Change the Clarity LIMS Hostname
This step is only required if the passwords for glsftp and / or apiuser have changed.
To update the application configuration, complete the following steps:
Update glsftp password.
Update apiuser password.
Update database connection details.
For details, see Update Server Passwords and Database Connection Details.
In this step, the clarity-migrator tool is used to perform the changes required to make the database compatible with the new Clarity LIMS version.
On the command-line interface, run the following commands as the glsjboss user:
This step is required only if the RPMs have been installed on the existing Clarity LIMS instance.
Install the latest NGS, IPP, and Sequencing RPMs compatible with the new LIMS version, as listed in clarityrpms.txt.
All existing workflows, protocols, steps, and master steps are restored during the restoration process. After installing the NGS and IPP RPMs, you do not need to install the workflows again.
If the automation/External Program Plugin (EPP) scripts installed on the new instance are of a later version (e.g., /opt/gls/clarity/extensions/ngs-common/v5/EPP) than those on your old instance (e.g., /opt/gls/clarity/extensions/ngs-common/v4/EPP), you must manually update the script location in your automation/EPP command lines. You can update the script location in the Clarity LIMS interface or the Operations interface.
If you intend to install NovaSeq API-based integration, NextSeq integration, or MiSeq integration, use the latest package to ensure OS compatibility.
On the command-line interface, run the following commands as the root user:
Make sure that ElasticSearch is running:
When ElasticSearch service is running, remove the ElasticSearch indexes:
Restart Clarity LIMS:
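The commands are not shown above. A sketch, assuming a default local ElasticSearch listening on port 9200 and the standard run_clarity.sh script; the index pattern is an assumption, so confirm it before deleting anything:
# Check that ElasticSearch is running
systemctl status elasticsearch

# Remove the existing ElasticSearch indexes (hypothetical index pattern)
curl -X DELETE 'http://localhost:9200/_all'

# Restart Clarity LIMS
/opt/gls/clarity/bin/run_clarity.sh start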
The following verification steps are the minimum required to confirm that the various services are up and running properly.
To conduct a thorough verification, perform your verification steps or daily routine on the new Clarity LIMS instance.
Log in to BaseSpace Clarity LIMS via https://<FQDN>/clarity and perform a basic search.
Check that search results are returned.
If no search results are returned, try again later as the search indexes are still building.
Run a sample through a QC protocol step. Create a temporary workflow if necessary.
Make sure that the automation executes successfully and the log files are accessible.
Open a browser window and access https://<FQDN>/api/v2/projects.
Log in with api user credentials.
Check that all projects are returned in the response.
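As an alternative to the browser, the same check can be scripted from the command line; the credentials shown are placeholders:
curl -u apiuser:<password> "https://<FQDN>/api/v2/projects"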
Open a browser window and access https://<FQDN>/lablink/.
Log in with administrator credentials.
On the Projects page, make sure that all data are properly displayed.
To make sure that the service is running properly, you must initiate an actual sequencing run on the instrument.
This section provides information and instructions to support Clarity LIMS administration tasks. Topics covered include the following:
If you require assistance with Clarity LIMS administration, contact the Illumina Support Team.
This section explains how to use the LDAP Checker tool, a script (ldap-checker.jar) that checks and reports on an LDAP configuration. Instructions for use are also provided in the README.txt file that accompanies the tool.
The ldap-checker script is included with the Clarity LIMS installation and is available at the following location:
/opt/gls/clarity/tools/ldap-checker
The ldap-checker script performs numerous checks of the LDAP configuration and reports on any incorrect items found.
Point the script to one or more files containing (at a minimum) the database connection properties. Alternatively, set these properties from the command line.
The script loads properties from the following sources and in the following order:
Any JDBC properties files specified with -f (see the table for options).
If multiple properties files specify the same property, the last file is used.
Any Java system properties specified on the command line using -D options.
Properties specified on the command line are only checked if they do not appear in the properties files.
The properties table in the database.
The properties table is only checked if the same property is not already specified in the properties file or on the command line.
After the script has the basic database connection properties, it loads further settings from the corresponding Clarity LIMS database.
The following JDBC properties are required:
jdbc.driverClassName
jdbc.url
jdbc.username
jdbc.password
Options:
Change to the directory containing ldap-checker tool:
Run the script. To specify a properties file, use the following example:
The tool includes an example database.properties file. This example shows a properties file that is specified with the -f option.
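A sketch of a typical invocation, assuming the bundled database.properties has been edited with your connection details:
cd /opt/gls/clarity/tools/ldap-checker
java -jar ldap-checker.jar -f database.properties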
The following options are available:
Edit this file and use it.
Provide properties on the command line, using: -D.
For example:
Specify and provide the path to the keystore:
To check a set of specific users (even those that have not been provisioned), use the following script:
To override properties that are typically loaded from the properties table, use command-line system properties or one or more properties files.
Using system property ( -D options must be specified before the -jar option):
Using multiple properties files:
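The following invocations are illustrative only; file names, user names, and property values are placeholders to adapt to your environment:
# System properties: -D options must appear before -jar
java -Djdbc.username=clarity -Djdbc.password=secret -jar ldap-checker.jar -f database.properties

# Point the JVM at a keystore containing the LDAP server certificate
java -Djavax.net.ssl.trustStore=/path/to/keystore.jks -jar ldap-checker.jar -f database.properties

# Check specific users with -u / --users
java -jar ldap-checker.jar -f database.properties -u jdoe asmith

# Override properties with multiple -f files; the last file wins for duplicate keys
java -jar ldap-checker.jar -f database.properties -f Custom-ldap.properties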
In this example, Custom-ldap.properties might resemble the following:
If necessary, the Clarity LIMS Support team can provide a backup of your Clarity LIMS data. The data are contained in an encrypted file, which can be downloaded from a secure SFTP server.
To receive the backup data file, provide the Clarity LIMS Support team with a GNU Privacy Guard (GPG) public key.
For instructions on generating a GPG public key, see the following documentation:
For Microsoft® Windows®, see .
For Linux® or Mac®, see .
After the Clarity LIMS Support team has received the GPG public key, they do the following actions:
Create a backup file encrypted with the key.
Place the backup file on the LIMS SFTP server at sftp.clarity-lims.com.
Provide a username and password so you can access your data. The backups are added to the SFTP server weekly.
After downloading the backup file, there are several tools available to decrypt the data.
For Windows, use the gpg4win tool. For details, see the .
For Mac/Linux, use the GPG command on the command line. For details, see .
This section provides instructions for upgrading existing cloud hosted Clarity LIMS deployments to on premise deployments. For assistance with upgrade steps, contact the Clarity LIMS Support team.
This document provides details on the steps required to upgrade an existing Clarity LIMS to RedHat Enterprise Linux/Oracle Linux compatible Clarity LIMS v6.2.
For installation requirements and Oracle Linux compatibility, see .
The following table shows the applicable migration paths.
From | To | Notes |
---|---|---|
v4.2 Cloud, v4.3 Cloud, v5.0 Cloud, v5.1 Cloud, v5.2 Cloud, v5.3 Cloud, v5.4 Cloud, v6.0 Cloud, v6.1 Cloud | v6.2 On-premise | Changing environment from Cloud to On-premise |
Before Illumina can proceed with the upgrade, complete the following prerequisite steps.
We recommend you provision an instance with similar or higher specifications to the current Clarity LIMS instance.
Note the following:
All standard operating system (OS) security updates must have been applied.
Upgrades are only supported from Clarity LIMS v4.2/4.3/5.0/5.1/5.2/5.3/5.4/6.0/6.1.
Before installing Clarity LIMS on the new instance, make sure that the instance has the same fully qualified domain name (FQDN) as the existing Production instance. If it is not possible to have the same FQDN, contact the Clarity LIMS Support team.
Custom configurations: If you have made any additional configurations that are not part of the Clarity LIMS pre-installation requirements, apply these configurations to the new instance.
Passwords: Configure all passwords to be the same as on the existing instance. After you have verified the new instance, you can change passwords as needed.
Make sure that all user accounts have email addresses associated with them. Users must reset their passwords after the upgrade is complete.
To assist with validating the system before an upgrade, install the UpgradePreValidation RPM on the source server.
This RPM is installed temporarily, and provides tools to help check the system before an upgrade.
If validation is successful, you can remove this RPM and proceed with the upgrade.
Install the UpgradePreValidation RPM. Make sure you have the correct repo enabled.
On the source server, as the root user, run the following command:
[Optional] Set up Secret Utility.
If ClarityLIMS-SecretUtil was installed previously, run the following command to set up Secret Utility as the glsjboss user:
NOTE: Using a vault is the safer way of storing application secrets. If using a vault is not possible, the configuration script supports file-based storage.
Run the validation script as follows.
Make sure that the Clarity LIMS server is running.
As the root user, run the following command:
Review the output of the script to determine if you can proceed with the upgrade. If the script outlines any issues with the potential upgrade, review the generated log files and contact the Clarity LIMS Support team for further assistance.
Remove the PreValidation RPM.
Remove the PreValidation RPM only after you confirm that you can upgrade. If you are unsure, consult the Clarity LIMS Support team.
As the root user, run the following command:
Archive the backup in case a rollback is required.
Before performing the backup, stop Clarity LIMS. The following command stops all Clarity LIMS components, including Automation Worker and integration services.
Stop Clarity LIMS:
On the command-line interface, run the following command as the root user:
Back up the PostgreSQL database:
On the PostgreSQL server, the recommended practice is to back up the database using the pg_dump utility.
The following example assumes that:
The database server and the application server are hosted on the same machine.
The pg_dump utility is accessible to the glsjboss user.
Example
The Postgres DBA uses the following commands to create a database backup in the glsjboss home directory. Substitute the variables as appropriate for the specific environment.
As the glsjboss user:
Make sure that the following items, and any other files and configurations, are backed up safely:
crontab -l
custom scripts
OS configuration files
firewall rules
network configuration
etc.
If there are custom changes to any application configurations (to increase performance, security, etc.), restore/configure these items manually later by referencing the backup.
We recommend that you back up the items into a single zip file and transfer them to the new instance.
Directories
/opt/gls/clarity/users/glsftp or /home/glsftp (Clarity LIMS file store location)
/opt/gls/clarity/customextensions
/opt/gls/clarity/glscontents
/etc/httpd/conf.d
/etc/httpd/sslcertificate
Files
.pgpass
/opt/gls/pgsql/9.x/pg_hba.conf
/opt/gls/pgsql/9.x/postgresql.conf
Additional configurations:
rpm -qa | grep "BaseSpace\|Clarity" > clarityrpms.txt
Make sure that the repository file exists in the following location:
As the root user, install the RPMs required on the new instance by referencing the content of clarityrpms.txt.
If you are upgrading from Clarity LIMS v4.x, some RPMs (eg, Server RPM) are now included under other RPMs.
Install any other required RPMs (eg, Python packages) which are not part of the Clarity LIMS setup.
Do not install NGS, Illumina Preset Protocols (IPP), and Sequencing RPMs during this step. You install these RPMs later in the installation process.
When prompted for user passwords, enter the passwords used in the previous instance.
Stop Clarity LIMS. To do so, on the command-line interface, run the following command as the root user:
/opt/gls/clarity/bin/run_clarity.sh stop
LabLink v2.4 is compatible with Clarity LIMS v6.2.
If upgrading from Clarity LIMS v4.x, Illumina migrates your LabLink-related data with the following exceptions:
sample submission templates
customized UI CSS
lablink property table configurations
Before completing the following steps, make sure that a database named lablink is created with the same database user as the Clarity LIMS database.
Install the LabLink RPM. Make sure that you have the correct repo enabled.
On the new instance, as the root user, run the following command:
Run the pending initialization script.
As the glsjboss user, run the following command:
The script prompts for a Google reCAPTCHA URL, site key, and secret key.
Google reCAPTCHA URL: https://www.google.com/recaptcha/
Google reCAPTCHA site key and secret key: View these keys from the Google reCAPTCHA Admin Console, under Settings.
NOTE: If you prefer not to use reCAPTCHA, leave the site-key and secret-key fields blank when running the configuration script. LabLink does not display the reCAPTCHA when these fields are left blank. You can also use your own reCAPTCHA accounts when configuring LabLink.
To reconfigure LabLink (without initializing the database), run the following command as the glsjboss user:
bash /opt/gls/clarity/config/configure_lablink.sh
Extract the backup zip file into a suitable location, e.g. /tmp/restore.
NOTE: If using Oracle database, skip the following steps 2 and 3. Contact the Clarity LIMS Support team for assistance in performing the migration from Oracle to PostgreSQL.
If using a PostgreSQL database, the DBA imports the database dump into the new database instance. Dropping and recreating the database might be necessary. If you need to do this, use the following commands:
dropdb -U postgres <OL8DB>
createdb -U postgres <OL8DB>
psql -U postgres -d <OL8DB> -c 'ALTER DATABASE "<OL8DB>" OWNER TO "<OL8User>"'
Restore the database exported from the previous instance.
Extract and restore the following directories to the same directory on the new instance:
Clarity LIMS file store: /opt/gls/clarity/users/glsftp or /home/glsftp
/opt/gls/clarity/customextensions
/opt/gls/clarity/glscontents
/etc/httpd/conf.d
NOTE: As of RedHat Enterprise Linux/Oracle Linux 8.8, Apache v2.4 is installed. There are several configuration changes in this version. You can use the new configuration, or cautiously merge your previous configuration file into the new configuration file. For details on the changes, refer to: httpd.apache.org.
Copy the SSL certificate files to the following location (create the directory if it does not exist):
/etc/httpd/sslcertificate
To configure the certificates, run the following command on the command-line interface, as the root user:
/opt/gls/clarity/config/installCertificates.sh
Restore files and configurations.
Copy any custom scripts into their folder locations.
For a PostgreSQL database, copy or merge the database configuration files, i.e., pg_hba.conf and postgresql.conf.
Restore crontab from file.
Copy / Merge any additional application configurations.
This step is required only if the new instance hostname is different from the old instance hostname.
This step is only required if the passwords for glsftp and / or apiuser have changed.
To update the application configuration, complete the following steps:
Update glsftp password.
Update apiuser password.
Update database connection details.
In this step, the clarity-migrator tool is used to perform the changes required to make the database compatible with the new Clarity LIMS version.
On the command-line interface, run the following commands as the glsjboss user:
This step is required only if the RPMs have been installed on the existing Clarity LIMS instance.
Install the latest NGS, IPP, and Sequencing RPMs compatible with the new LIMS version, as listed in clarityrpms.txt.
All existing workflows, protocols, steps, and master steps are restored during the restoration process. After installing the NGS and IPP RPMs, you do not need to install the workflows again.
If the automation/External Program Plugin (EPP) scripts installed on the new instance are of a later version (e.g., /opt/gls/clarity/extensions/ngs-common/v5/EPP) than those on your old instance (e.g., /opt/gls/clarity/extensions/ngs-common/v4/EPP), you must manually update the script location in your automation/EPP command lines. You can update the script location in the Clarity LIMS interface or the Operations interface.
If you intend to install NovaSeq API-based integration, NextSeq integration, or MiSeq integration, use the latest package to ensure OS compatibility.
On the command-line interface, run the following commands as the root user:
Make sure that ElasticSearch is running:
When ElasticSearch service is running, remove the ElasticSearch indexes:
Restart Clarity LIMS:
The following verification steps are the minimum required to confirm that the various services are up and running properly.
To conduct a thorough verification, perform your verification steps or daily routine on the new Clarity LIMS instance.
Log in to BaseSpace Clarity LIMS via https://<FQDN>/clarity and perform a basic search.
Check that search results are returned.
If no search results are returned, try again later as the search indexes are still building.
Run a sample through a QC protocol step. Create a temporary workflow if necessary.
Make sure that the automation executes successfully and the log files are accessible.
Open a browser window and access https://<FQDN>/api/v2/projects.
Log in with api user credentials.
Check that all projects are returned in the response.
Open a browser window and access https://<FQDN>/lablink/.
Log in with administrator credentials.
On the Projects page, make sure that all data are properly displayed.
To make sure that the service is running properly, you must initiate an actual sequencing run on the instrument.
Path | Purpose |
---|---|
$host/clarity/app.ftp.password | Password for GLSFTP user on the Clarity LIMS instance. |
$host/clarity/app.rabbitmq.password | Password for RabbitMQ admin on the Clarity LIMS instance. |
$host/clarity/db.clarity.password | Password for the configured Clarity LIMS database user. |
$host/clarity/db.lablink.password | Password for the configured LabLink DB database user. |
$host/clarity/db.tenant.password | Password for the configured Tenant Lookup DB database user. |
$host/clarity/app.ldap.managerPass | [Optional] Password for the User DN configured for Clarity LIMS LDAP integration. |
$host/integration/apiusers/apiuser | Password for the apiuser user account that is used by Automation Worker to authenticate with the Clarity LIMS API. If you have configured a different user account, create it under $host/integration/apiusers/$username. |
Clarity LIMS version | Authorization | JSESSIONID | Expected Result |
---|---|---|---|
v5.2.x and later, and v4.3.x | Present | Present (Valid) | Open API does not perform the user authentication and responds with requested resources. |
 | Present | Present (Invalid) | Open API performs the user authentication depending on whether the account is in the database or LDAP server, and responds with requested resources. |
 | Absent | Present (Valid) | Open API does not perform the user authentication and responds with requested resources. |
 | Absent | Present (Invalid) | Open API responds with HTTP Status 401 - Unauthorized. |
 | Absent | Absent | Open API responds with HTTP Status 401 - Unauthorized. |
-f | --files | Property files to process |
---|---|---|
-h | --help | Show usage information |
-u | --users | Usernames to check |
From | To | Notes |
---|---|---|
PostgreSQL: v4.2 On-premise, v5.0 On-premise, v5.1 On-premise | v6.2 Cloud hosted | Upgrading from CentOS6 to Oracle Linux v8.8 and v8.9 (on-premise to cloud hosted). |
PostgreSQL: v4.3 On-premise, v5.2 On-premise, v6.0 On-premise, v6.1 On-premise | v6.2 Cloud hosted | Upgrading from CentOS7 to Oracle Linux v8.8 and v8.9 (on-premise to cloud hosted). |
Oracle: v4.3 On-premise, v5.2 On-premise | v6.2 Cloud hosted | Migrating from Oracle to PostgreSQL. v6.2 supports PostgreSQL only. |
File description | Example file name (used in examples below) |
---|---|
Apache private key | private.key |
Signed SSL/TLS Certificate | customer_domain.crt |
Intermediate chain file (optional) | intermediate.crt |
ACL Policies
# For clarity instance to read from hashicorp vault
path "claritylims/data/+/clarity/*" {
  capabilities = ["read"]
}
# For integration to write to hashicorp vault
path "claritylims/data/+/integration/*" {
  capabilities = ["read", "create", "update"]
}
Clarity LIMS creates various log files to help with the resolution of issues. During support request investigation, the Support team may ask for the following types of log files:
Automation Worker creates history and log files, and stores them on laboratory computers in the logs folder of the Automation Worker installation directory.
If Automation Worker is installed on a Windows machine using the program default, find the logs folder at the following location:
If Automation Worker is installed on a Linux server, find the logs folder at the following location:
The following log files are available:
wrapper.log - This log file outputs information on the starting, running, and stopping of the Automated Informatics service.
automatedinformatics.log - This log file outputs messages from installed plug-ins, such as automation commands and ADC directory scans.
Log on to the server using the glsai user ID and run the following command:
If the Automation Worker is installed on a server other than the Clarity LIMS server, use the appropriate user credentials.
In the web browser, if the LIMS interface does not display items/elements correctly, provide the information and error messages to the Clarity LIMS Support team.
Instructions for finding error messages within the browser console are described in the following sections.
To Start the Chrome Console:
Right-click on an element in the browser and select 'inspect element.'
A sub window opens below the main window in Chrome, showing the source HTML.
Select the Console tab, and reload the troublesome page - any JavaScript errors will be reported there. Include these errors in the Support Request ticket.
NOTE: Between stages in a protocol step you may see errors of the following type:
Such messages are expected. This is the EPP trigger checking that there is no EPP transition to fire on the page change. (This can be annoying for debug purposes, but feel free to include these in the Support Request ticket.)
To Get the JavaScript version:
Open up the Console as described in the previous section.
Go to the Network tab.
Select 'scripts' from the options listed at the bottom of the tab.
A script named isis-all.js?v=XXXXX displays.
Determine the version build number. (In the previous example, XXXXX represents the version build number).
To Start the FireFox Console:
Right-click on an element in the browser and select 'inspect element.'
A sub window opens below the main window in Firefox, showing the source HTML
Select the Console tab, and reload the troublesome page - any JavaScript errors will be reported there. Include these errors in the Support Request ticket.
NOTE: Between stages in a protocol step you may see errors of the following type:
Such messages are expected. This is the EPP trigger checking that there is no EPP transition to fire on the page change. (This can be annoying for debug purposes, but feel free to include these in the Support Request ticket.)
To Get the JavaScript version:
Open up the Console as described in the previous section.
In the Filter options Search box, type 'isis'.
A script named isis-all.js?v=XXXXX appears.
Hover over this script with your mouse to find the V (version) build number.
If you are experiencing problems and need to submit a support request, use the following guidelines to determine which log files to send to the Clarity LIMS Support team:
basespace-lims-*.log: Include if experiencing slowness in the application. (Default path: /opt/gls/clarity/tomcat/current/logs/)
automatedinformatics.log: Include if you are experiencing problems with an integration or if a process using an EPP string does not work as expected. (Default path: /opt/gls/clarity/automation_worker/)
wrapper.log: Include if the Automation Worker is unable to start (rarely needed).
search-indexer.log: Include if there is an issue with the search feature. (Default path: /opt/gls/clarity/search-indexer/logs/)
claritylims.log: Include if there is an issue with the search feature. (Default path: /var/log/elasticsearch/)
Browser Console and LIMS JavaScript version: Include for any web interface display issues. A simple refresh of the browser page may resolve the issue. However, the Support team would prefer receiving the console log and JavaScript version to investigate and make product improvements.
Create a Configuration Package File for Import to Another System (or for Backup Purposes)
Use a configuration package file to copy a configuration set from one server to another or to back up a particular working configuration at a particular time.
Required steps:
Create a configuration manifest file.
Export to configuration package file.
Import (install) a Configuration Package File on a Production or Test Server
This process involves copying a configuration set from one server to another, by importing a configuration package file.
For example, to move a configuration set to a different environment for testing or troubleshooting purposes, or copy a new configuration set (created and tested on one system) onto another system.
Required steps:
On the source server, create a configuration manifest file, and then export to configuration package file.
On the destination server, import the configuration package file.
Compare Differences between a Working Configuration and a Broken Configuration
There are two approaches:
Comparing configuration manifest files provides a way to determine if there are processes or UDFs missing from a system. The information in the manifest files only allows comparing process and UDF names, not the specific way in which a process or UDF is configured.
Comparing configuration package files helps check how specific processes are configured. If the systems being compared are meant to be identical, this method is more appropriate to use.
Required steps:
On each system, create a configuration manifest file, or a configuration package file.
Run a diff comparison on the two files.
Edit the broken manifest file, export it, and import the resulting configuration package file into the system to add the missing entities.
There are several tools available to compare files:
Meld (graphical), for Linux, port to MacOS
Standard Unix diff (Linux, MacOS) (use -q for a quick check).
FileMerge (OSX with XCode installed) - /Developer/Applications/Utilities/FileMerge.app
WinMerge (graphical), for Windows - http://winmerge.org/
Merge the Configuration Sets from Multiple Test Systems to a Production System
Combine configuration sets from multiple systems, merge them into a single configuration package file, and then import the file into a new system.
Required steps:
On each source server, create and edit a configuration manifest file.
Merge the entities from all files into a single manifest file.
Export the resulting file to a configuration package file.
On the destination server, import the merged configuration package file.
Back up and Restore a Configuration Set
Copy a configuration set to restore it on another server and use it for testing/troubleshooting purposes.
Required steps:
On the server containing the 'broken' source system, create a full manifest file, containing all of the LIMS system configuration.
Export the manifest to a configuration package file. Save file to media/disk.
On the target server, import the configuration package file created on the source system.
Back up and Upgrade a Configuration Set
To upgrade or add to a configuration set already installed on a server, two configuration package files are needed: one to back up the working configuration set and one containing the new updated configuration that has been created on a test server.
Required steps:
On the server you want to upgrade, create a full manifest file and export this to a configuration package file. Save this file as a backup.
On the test server, create a manifest file and edit it so that it only includes the entities you want to import.
On the server you want to upgrade, import the configuration package file.
Deploy a New Configuration Set from Test to Production Server
Take a configuration that has been created and tested on one system/site (referred to as the source system in the following steps) and deploy it on another system/site (destination system).
As a best practice, make sure that the configuration is backed up by creating a full manifest file and exporting to a configuration package file (see Step 2). The process is also described in Back up and Restore a Configuration Set.
Required steps:
On the source system, create a configuration package file containing the tested configuration to import.
On the destination system, create a full manifest file and export this to a configuration package file. Save this file as a backup.
On the destination system, import the configuration package file that was created in step 1.
If running Config Slicer v3.0.x against a configuration package or manifest file that was generated with a pre-3.0 version of Config Slicer, an error message displays.
Scenario 1: Out-of-Date Configuration Package
In this scenario the error message will resemble the following:
SOLUTION:
Upgrade the configuration package file by running upgrade-config-slice.jar against it.
This jar file should be located in the same directory as Config Slicer (/tools/config-slicer).
The upgrade script will save an upgraded copy of the configuration package file (to the same directory), which can be inspected and imported.
Example:
To upgrade a manifest file at the same time, simply add it to the script as a second argument:
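An illustrative invocation of the upgrade script; the file names are placeholders:
# Upgrade a configuration package file
java -jar upgrade-config-slice.jar myConfigPackage.xml

# Upgrade the package and a manifest file in one call (manifest passed as the second argument)
java -jar upgrade-config-slice.jar myConfigPackage.xml myManifest.txt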
Scenario 2: Out-of-Date Manifest File with no Corresponding Configuration Package
In this scenario the error message resembles the following:
SOLUTION:
This upgrade process is a little more involved:
Extract the list of process types from the manifest file .
Format them as parameters to Config Slicer's custom manifest generation for process types.
For example, assuming we extracted the process types Process Type 1 and Process Type 2 in step 1, the following command would be used:
Run the custom manifest generation.
Take the list of analyte UDFs and ResultFile UDTs from the manifest file that was generated and add them to the old manifest file .
Add the following line of text to the top of the old manifest file:
Turning Container Name Uniqueness ON
This constraint is set at the database level. Container Uniqueness can be turned ON by using the migrator tool, and running the optional migration step called "ContainerNameUniqueConstraints". This can be done by calling this command:
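An illustrative invocation, assuming the migrator tool is run with java -jar from its installation directory and takes the optional step name as a parameter; adjust the path and any required properties for your environment:
java -jar clarity-migrator.jar ContainerNameUniqueConstraints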
Notes:
Of course, edit the migrator.properties file to ensure that the mode is set to "migrate" not just "validate".
If the migrator has carried out its work successfully, the database should now have a new index present called 'unique_cnt_name'. (In psql, do \di to see indexes)
You do not need to restart Tomcat services in order for this change to take effect
Turning Container Name Uniqueness OFF
Container Uniqueness can be turned OFF by running this SQL command:
This statement will fail if there are already any non-unique container names in the database. If it fails, you can find all non-unique names with this statement:
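A sketch of such a query, assuming the container table is named container with a name column; verify the names against your schema:
SELECT name, count(*) AS occurrences
FROM container
GROUP BY name
HAVING count(*) > 1;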
For each non-unique name found, you can iterate through all instances of it, and rename them so that they will be unique.
Resolving Non-Unique Container Names
If the above query returns too many containers to edit by hand, consider the following approach. Are any of the containers with non-unique names in a depleted or discarded state? If so, you can probably delete them once the customer gives the all clear:
What do the values for the container stateid mean? They are not stored in the database; they are hard-coded as follows:
Armed with this knowledge, we can refine the query to highlight containers that may be deleted with little consequence:
Notes:
Containers can only be deleted if they are empty
SQL 'IN' statements normally have a limit of 255 records, so if the sub-select (the one in parentheses) returns more than 255 records, your mileage may vary.
You can configure an alias for the short and long, singular, and plural forms of the term Project, as displayed in the Clarity LIMS interface.
Renaming is achieved by configuring the following properties, using the omxprops-ConfigTool.jar tool at:
Properties
clarity.alias.project.short.singular - The short term for "Project"
clarity.alias.project.short.plural - The short term for "Projects"
clarity.alias.project.full.singular - The full term for "Project"
clarity.alias.project.full.plural - The full term for "Projects"
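An illustrative invocation of the property tool; the path and the set syntax are assumptions, so confirm both on your server before running:
java -jar /opt/gls/clarity/tools/propertytool/omxprops-ConfigTool.jar set clarity.alias.project.full.singular "Study"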
Rules for alias
clarity.alias.project.short.singular—Maximum of eight characters.
clarity.alias.project.short.plural—Maximum of nine characters.
If multiple words are used, capitalize each word (eg, Test Requests).
Short form singular and plural aliases are truncated to eight characters and nine characters respectively. An ellipsis is used to indicate truncation.
Saving passwords in encrypted format is recommended. Use the omxprops property tool to do this.
Use the following command:
This command returns a text string resembling the following example:
Set the password by enclosing the text string in the ENC() wrapper.
Consider the following examples:
When setting an encrypted password in a configuration file:
When using the property tool to set an encrypted password:
Using ENC() is not needed when setting the password in Automation Worker:
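An illustrative example of the ENC() wrapper in a configuration file; the encrypted string is a placeholder produced by the encryption command:
# Placeholder encrypted value
app.ftp.password=ENC(4s9fX2aBcD9q...)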
The Dashboards, Projects & Samples, and Lab View screens facilitate the day-to-day tasks of the lab manager and lab scientist.
Overview Dashboard—Provides summary information about lab activity. Use this dashboard to view workflow status, sample status, and alerts.
Projects Dashboard—Provides a dynamic view of current lab activity. Use this dashboard to help manage projects and the flow of samples through the lab.
The Overview Dashboard displays summary information about the current activity in the lab. It provides lab managers with at-a-glance information about workflow, sample status, and alerts across projects. The dashboard is dynamic and updates automatically as changes occur in the lab.
By default, the system administrator and facility administrator roles have access to the Overview Dashboard. However, access is a configurable role-based permission. For details, see Configured Role-Based Permissions.
The Overall Status bar provides a snapshot of lab activity.
Projects—The number of open and pending projects currently in Clarity LIMS. This number does not include closed projects.
Samples—The number of samples in open and pending projects.
Workflows—The number of workflows in which the samples are currently being worked through in the lab.
Alerts—The number of unresolved alerts in the system. To trigger alerts, select Request manager review as the next step for a sample (in the Assign Next Steps screen).
This section shows information for the lab workflows that currently have samples uploaded to them. Workflows are listed in alphabetical order.
For each workflow, the following information is shown:
Projects—The total number of projects containing samples that are currently in the workflow. This number includes samples that are in progress in the workflow and samples that have completed the workflow.
In progress samples—The number of samples that are currently in progress in the workflow.
Completed samples—The number of samples that have completed the workflow.
Protocols—The names of the protocols that are included in the workflow.
Samples—The number of samples in each protocol of that workflow.
If configuring the sample capacity of a protocol (see #configure-protocols), Clarity LIMS uses color coding to indicate the percentage of the protocol capacity that is being used.
The percentage of capacity is calculated according to the total number of samples that are currently assigned to all occurrences of the protocol (ie, across all workflows).
Blue indicates that the number of samples currently in the protocol is less than or equal to the configured capacity.
Yellow indicates that the number of samples currently in the protocol is between 100% and 200% of the configured capacity.
Orange indicates that the number of samples currently in the protocol is greater than 200% of the configured capacity.
While the protocol for a particular workflow may have few, or even zero, samples assigned to it, its capacity might still display in yellow or orange. This display would occur if the number of samples in other occurrences of the protocol exceeds its configured capacity.
Hover over a protocol to view more information about its configured capacity and the total number of samples assigned to it.
If there is an API version mismatch, Config Slicer will log a message at the beginning of an import:
Generally, this message is not a cause for concern.
However, if there are warnings about configuration differences during the import, changes that have been made to the API in between the package version and the server version may be responsible.
In addition, if the package being imported is from v4.x, a log message may appear at the beginning of an import:
Detected package is from a pre-5.0 Clarity system. The package will be upgraded to match new configuration requirements.
In this case, the package is upgraded and the resulting configuration is output to a folder in the same location as the package being imported. This upgraded configuration package is the v5.x representation of the v4.x configuration and can be used directly to troubleshoot error occurring on import. The updated packaged can also be imported directly.
The following changes have been included in the latest release of Config Slicer (v 3.0.51):
General changes:
The entity summary is now shown after the individual differences have been written to the log file, as opposed to before these differences.
Support was dropped for the following process type attributes which are no longer used:
SupportsExternalProgram,
ShowInExplorer,
ShowInButtonBar,
OpenPostProcess,
IconConstant
Support was dropped for the show-in-tables property on Custom Fields (UDFs) because it is no longer used.
There is now support for the new Clarity LIMS 5.0 distinction between Protocol Step names and Master Step (ProcessType) names.
It is now possible to slice in and out all the new Master Step settings added in Clarity 5.0, including:
Default Process Template
Instrument Types
Container Types
Reagent Kits
Reagent Types
Control Types
Sample Fields
Queue Fields
Ice Bucket Fields
Step Fields
Step Properties
When importing/validating a slice from 4.x into 5.x, if the package contains any Master Steps (process types) or Protocols, it is upgraded to be compatible with the new configuration available in 5.x. This process is automatic, and the updated package is written out to the directory containing the package being imported or the directory that the log file is being written to. If errors occur while importing, this updated package can be manipulated directly and imported to fix them.
Note: If upgrading the package fails, import/validation will fail. In general, this will reflect a mistake in the configuration package being imported.
Changes in support of slicing from 4.x to 5.0:
The following changes were specifically added to support slicing between Clarity LIMS 4.x and BaseSpace Clarity LIMS 5.0. All of these changes will be saved into a new configuration package that will be written to the same location as the package being sliced:
Backwards compatibility for Protocol Step setup configuration.
Support for updates of parent entities after both the parent and child entities have been imported.
This change is required for updating the defaultProcessTemplate and step-fields step properties on Master Steps. Both require that the Master Step (ProcessType), ProcessTemplate, and Master Step Custom Fields (ProcessType UDFs) exist before they can be set on the Master Step.
Support for setting the qcProtocolStep flag on Master Steps, allowing the correct Master Step type to be displayed in the Lab Work Configuration UI. The setting is propagated up from the Protocol Step to the Master Step.
After slicing, it is recommended to examine the configuration closely for any QC Protocol Steps that previously shared a Master Step with non-QC Protocol Steps. This setting may not transfer as expected and there is potential for misconfigured Protocol Steps. In particular, if the Master Step produces measurements, and even just one child step of a Master Step has qcProtocolStep=true, then the Master Step will get qcProtocolStep=true set on it. Thus, all other steps that use that Master Step have qcProtocolStep=true, whether or not it was set before.
Support for setting the default container on Steps and Master Steps in such a way that all behaviour is maintained from 4.2
If every child step of a Master Step has the default container that was defined as a permitted container on the master (through the 'OutputContainerType' process-type-attribute), then the default will be added as a permitted container on the master step.
If any child does not have the default container from the Master Step, then the default is removed from the Master and each child that had the container is updated so that the default is the first permitted container (and hence default)
Extra containers and any step properties that are no longer valid on a step will be migrated to new properties or removed.
Step-setup file configuration has been moved to the Master Step, and it is not possible to have a different set of step-setup files on a step than is specified on the Master. The master step owns the list of files. Both the search-result-file-index attribute on each file element and the message element for each file are defined by the Master and must be duplicated on every step. When moving a 4.x configuration with step-setup files to a 5.x configuration, the following events will occur:
The set of all the step-setup files found on all the child steps in the slice are added to the Master Step and each child step.
If more than one step in the slice defines step-setup files for the same search-result-file-index, then all the messages for that search-result-file-index will be concatenated by newlines.
The enabled attribute will be set to true for the step-setup on all child steps.
The locked attribute will be set to false for the step-setup on all child steps.
In the case of importAndOverwrite, the step-setup of any existing child steps for an overwritten master step will be included.
Upon validation after import, the following differences have been reconciled:
UDFs that differ by STYLE only
missing defaultProcessTemplate
missing attemptAutoPlacement
missing autoAttachFiles
missing qcWithPlacement
Clarity LIMS provides Audit Trail, a robust data-tracking system that allows for tracking of the following:
All user activity (ie, who did what, and when).
Every action that is written to the database.
Audit Trail has two capture systems, Event Log and Detail Log.
Event Log—Records familiar BaseSpace Clarity LIMS user actions and presents this information in a format that is easy to read and understand.
Detail Log—Records exacting information about changes resulting from actions recorded in the Event Log. This includes both updated values and previous values.
Enabling Audit Trail may result in a small performance hit due to the overhead of writing the entries to the database. It is recommended that you periodically archive the Audit Trail database so that it does not become too large.
NOTE: Audit Trail is enabled by default.
The Audit Trail feature is available to Enterprise customers only. To enable, validate, or disable Audit Trail on your Clarity LIMS cloud instance, contact Illumina Technical Support for assistance.
By default, Clarity LIMS allows duplicate sample names within the same project. If you would like to enforce sample name uniqueness within a project, you can do so.
Two scripts have been developed to support this requirement:
SampleNamePerProjectUniqueConstraintStep: Apply this uniqueness constraint to enforce unique sample names within a project.
CleanupDuplicatedSampleNamesPerProjectStep: Prior to running the uniqueness constraint, use this optional cleanup script to clean up a database that already contains duplicate sample names.
Both scripts are available via the clarity-migrator.jar tool.
If your database contains projects in which duplicate sample names exist, run the CleanupDuplicatedSampleNamesPerProjectStep script to clean up sample names that would violate the sample uniqueness constraint property. The cleanup script searches the database for sample names that are not unique and renames them.
The cleanup script also renames the corresponding original submitted sample name, because there is a one-to-one correspondence between submitted sample and derived sample names in the LIMS interface.
To clean up the database:
As the glsjboss user, change to the clarity-migrator directory:
Run the clarity-migrator.jar tool, providing the name of the cleanup step as a parameter:
The step will run and no validation errors should be reported.
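As an illustrative sketch only (the clarity-migrator directory path and the way the step name is passed as a parameter are assumptions that may differ on your system):
# Assumed default location of the clarity-migrator tool
cd /opt/gls/clarity/tools/clarity-migrator
# Run the cleanup step by passing its name as the parameter
java -jar clarity-migrator.jar CleanupDuplicatedSampleNamesPerProjectStep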
Once cleanup has been performed successfully, you can apply the sample uniqueness constraint.
After you have cleaned up the database (if this step was required), you can apply the uniqueness constraint.
Applying the sample uniqueness constraint results in a change at the LIMS database / schema level. Once you have applied this change, there is no script available to revert it.
If you need to remove the uniqueness constraint, you will need to submit a request to the Illumina Support team.
To enforce sample uniqueness:
As the glsjboss user, change to the clarity-migrator directory:
Run the clarity-migrator.jar tool, providing the name of the uniqueness constraint step as a parameter:
The step will run and no validation errors should be reported.
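As with the cleanup step, the following is an illustrative sketch only (the directory path and parameter style are assumptions):
# Assumed default location of the clarity-migrator tool
cd /opt/gls/clarity/tools/clarity-migrator
# Apply the uniqueness constraint by passing the step name as the parameter
java -jar clarity-migrator.jar SampleNamePerProjectUniqueConstraintStep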
After enforcing sample uniqueness, if a user attempts to accession or update sample names that already exist in the project (via the user interface or the API), an error message displays. The message describes the problem and advises the user to rename the duplicate samples.
The following sections describe and illustrate what happens if an accessioned or updated sample name conflicts with an existing sample name within the same project.
Postgres database
If an accessioned or updated sample name conflicts with an existing sample name within the same project:
Upload/modify via a sample sheet will result in the error shown below.
The sample with the duplicate name is identified within the parentheses of the Detailed error message.
The quoted string in the Detailed error is the database name of the constraint being violated (uk_sample_name_per_project = unique key on sample table for name and project).
Sample management accession/modify will result in the error shown below.
If a user attempts to accession/modify a sample name under similar circumstances via an API operation, the results received would be similar to the content of this error message.
The Projects dashboard provides a dynamic, project-centric view of current lab activity, helping to manage projects and the day-to-day flow of samples through the lab.
The Overall Status bar provides a snapshot of current lab activity.
Projects—The number of open and pending projects currently in Clarity LIMS. This number does not include closed projects.
Samples—The number of samples in open and pending projects.
Workflows—The number of workflows in which the samples are currently being worked through in the lab.
Alerts—The number of unresolved alerts in the system. To trigger alerts, select Request manager review as the next step for a sample (in the Assign Next Steps screen).
Additional information provides a snapshot summary of projects and samples, including the workflows to which samples are assigned.
No Workflow Assigned
The number of projects containing samples that do not have a workflow assigned to them.
The number of samples that do not have a workflow assigned to them.
In Progress
The number of projects containing samples that are in progress and being worked on in the lab.
The number of samples that are in progress and being worked on in the lab.
Workflow Complete
The number of projects in which all samples have completed all workflows assigned to them.
The number of samples that have completed all workflows assigned to them.
By default, all open projects display in the Projects table and are sorted by creation date (most recent first).
Select any project in the table to view its details on the right.
Projects that have unresolved alerts display an icon in the upper-right corner.
The Projects table allows filtering to display projects that meet certain criteria.
The drop-down list provides several filters. Consider filtering for the following use cases:
Completed projects—Prepare data and invoices to be sent to clients.
Projects containing samples that do not have a workflow assigned to them—Find the projects containing these samples, assign workflows to the samples, and start working on them in the lab.
Projects assigned to a particular workflow or to a particular project—View and manage their progress in the lab. In this case, type the workflow/project name into the filter box and press the Enter key.
Select a project to display summary information about the workflows and samples related to that project. The summary includes the following information:
The number of samples in the project that have not been assigned a workflow.
The workflows to which samples in the project are assigned. If needed, scroll to see all the workflows.
The number of samples involved in each workflow.
Any unresolved alerts in the project. Select the alert to view and resolve it.
The project completion percentage.
Below the Projects table and summary, the workflows for the selected project are listed.
Directly under each workflow name, the number of in-progress and completed samples currently in the workflow.
Select In-progress samples to view all in-progress samples across all protocols in the Samples table.
Select the Completed samples count to view those completed samples in the Samples table.
To the right of the workflow name, select the arrow.
The expanded workflow details area shows the following information:
All protocols included in the workflow, and the number of samples in each of those protocols.
The percentage of project samples currently assigned to each protocol (represented by blue shading). Hover over the shading to see this percentage. The percentage is derived from the number of samples in the protocol divided by the total number of samples in the workflow.
Select a protocol to see the steps it includes and the samples assigned to those steps.
Select a protocol to see the following information display in the Samples table:
The steps included in the protocol, and the submitted and derived samples that are in each of those steps.
The sample name and the first three sample custom fields.
In the Samples table, the following features are available:
View Alerts—Select an alert to go to the step in which the manager review was requested.
Run automations on derived samples directly from the Samples table.
Select one or more derived samples (Ctrl + click to select multiple).
In the upper-right corner of the Samples table, expand the drop-down list and select an automation.
If the automation requires input, a prompt to enter a value displays. Enter a value and select Continue.
NOTE: If the parameter name is truncated, hover over it to view the full name.
As the automation runs, the status is shown in the samples list.
The Config Slicer tool is used to move small incremental configuration changes, contained in a configuration set, between Clarity LIMS systems. For example, it moves changes between a test system and a production system.
This configuration tool provides granular export/import functionality that allows the management of configurations that support experimental workflows.
Use this tool to back up, copy, deploy, and restore configuration sets. Making small incremental changes keeps the modifications to the production system minimal.
Review the following key concepts:
Configuration set—This item may be created by the Illumina Support team or by the customer. It comprises the items (known as entities) that are added to a Clarity LIMS system to allow for customization for a particular scientific experiment or workflow. The Illumina NGS Extensions Package is a good example. See #supported-entities.
Configuration manifest file—This text file determines the configuration set to be exported from a system. The manifest file does not contain the actual configuration data. It only drives the extraction of configuration from a system.
Configuration package file—This XML file contains the top-level entities selected by the configuration manifest file, plus any related child entities. For example, for process types, it includes process type UDFs, process templates (protocols), and output UDFs.
The Config Slicer tool uses an export/import process to transfer configuration sets from a source to a destination server. This process breaks down into the following tasks:
On the source Clarity LIMS server, use the Config Slicer tool to create a configuration manifest file.
Edit the manifest file so that only the required custom configuration set is preserved.
Use the Config Slicer tool to export the edited manifest file to a configuration package file.
On the destination Clarity LIMS server, use the Config Slicer tool to import the configuration package file into the system.
A configuration package file can be imported into multiple systems. Use this feature to create and import multiple custom configuration sets, such as the Illumina TruSeq integration. This functionality also provides the Illumina Support team with a scalable way to keep up with constantly changing protocols.
A configuration set comprises the entities added to a Clarity LIMS system that allow for customization of the system for a particular scientific experiment or workflow. The following entities are currently supported by the Config Slicer tool:
Sample UDFs and UDTs
Container UDFs and UDTs
Project UDFs and UDTs
Artifact groups (experiments)
Reagent types
Control types
Reagent kits
Process types (any configured processes – for example, Pool Samples and Add Multiple Reagents)
Process UDFs and UDTs
Output UDFs
Process templates - UDFs, UDTs, and parameter strings only (other entities such as instruments and researchers are not supported)
Protocols
Workflows
When performing custom manifest generation by workflow, protocol, or process type, the following entities are not exported: Sample, Container, and Project UDTs, and Project UDFs. Account (Lab) and Client (Researcher) UDFs are never exported by Config Slicer. These are known issues.
Config Slicer does not export/import non-step automations, nor does it preserve the order of protocols.
When working with Config Slicer on the Clarity LIMS application server, there are no additional prerequisites. The latest version of the config-slicer.jar file is installed as part of Clarity LIMS on the Clarity LIMS server. In a default installation, the file is located in the /opt/gls/clarity/tools/config-slicer directory.
To work with Config Slicer on a machine other than the Clarity LIMS server, do the following:
Make sure that Java is installed.
Copy the /opt/gls/clarity/tools/config-slicer directory, and its contents, from the Clarity LIMS server to the machine. The config-slicer directory and the config-slicer package should contain the following:
config-slicer-<version>.jar
libs subdirectory (which includes all the libraries referenced by config-slicer-<version>.jar, including groovy-all-2.4.8.jar)
upgrade-config-slice.jar
Refer to #import-export-mode and #guidelines-and-semantics-for-creating-manifest-files.
Option | Description |
---|---|
* The importAndOverwrite option lets Config Slicer update existing configuration, rather than create new configuration. This option is only available in LIMS 3.4.
Usage
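As a general sketch of the usage line, based on the command-line options described in this document (operation-specific options such as -m and -k are added as required):
java -jar config-slicer-<version>.jar -o <operation> -s <server> -u <username> -p <password> [-m <manifest>] [-k <package file>]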
Prerequisites
Create and validate a configuration set on the source server.
Have access to the Config Slicer tool and the libs subdirectory.
Step 1: Create configuration manifest file
Create Simple manifest file or Custom manifest file.
Simple manifest file:
On the source server, copy and paste the following code to the command line. Edit the variables (version, server IP address, username, password, and manifest file name) to match those in your system.
A command with the variables filled in might look like this:
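For illustration only, assuming the default install location and placeholder values for the server, credentials, and manifest file name (the jar version will differ on your system):
java -jar config-slicer-<version>.jar -o example -s clarity.example.com -u <username> -p <password> -m manifest.txt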
This step produces a manifest file containing information about the entire system configuration. For best practice, copy this file and rename the copy in a way that reflects the configuration (we'll use newconfiguration.txt for this example). Use the copied file for the next steps.
A manifest file is used as an intermediary step to produce an XML configuration package file.
The manifest file is only relevant to the data that exists in the system at the time it is created. Discard it after creating the configuration package file or save the manifest file for historical auditing purposes. It can provide a record of a known working configuration set on a particular system.
Custom manifest file:
To create a manifest file for only specific workflows, protocols, or process types, follow the steps outlined previously, using -o custom instead of -o example.
When using this operation, provide additional parameters (-w, -pr, and/or -pt) to specify the exact entities for which to create a manifest. For example:
Specifying every option is not required. It is also possible to specify more than one of each kind. For example, create a manifest file for a workflow with the following command:
Or for two protocols like so:
Or for two process types and a protocol with this command:
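As illustrative sketches (the server, credentials, and entity names are placeholders, and repeating the -w/-pr/-pt flag for each additional entity is an assumption):
# A manifest for a single workflow
java -jar config-slicer-<version>.jar -o custom -s clarity.example.com -u <username> -p <password> -m manifest.txt -w "My Workflow"
# A manifest for two protocols
java -jar config-slicer-<version>.jar -o custom -s clarity.example.com -u <username> -p <password> -m manifest.txt -pr "Protocol A" -pr "Protocol B"
# A manifest for two process types and a protocol
java -jar config-slicer-<version>.jar -o custom -s clarity.example.com -u <username> -p <password> -m manifest.txt -pt "Process Type 1" -pt "Process Type 2" -pr "Protocol A"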
Step 2: Edit Manifest File
The next step is to edit the manifest file, removing unnecessary information and preserving only the custom configuration to import into the destination system.
Example 1: In this example, everything is deleted from the manifest file, except for the two new process types to export.
Example 2: In this example, the manifest file contains definitions for some new reagent types:
Step 3: Export to XML Configuration Package File
Copy and paste the following code onto the command line. Edit the variables (version, server IP address, username, password, and manifest and package file names) as required.
An edited command might look like the following example:
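As an illustrative sketch, using the manifest and package file names from this example and placeholder server details:
java -jar config-slicer-<version>.jar -o export -s clarity.example.com -u <username> -p <password> -m newconfiguration.txt -k newconfiguration.xml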
This step generates a data file in an XML format (newconfiguration.xml in our example) that is compliant with the Rapid Scripting API.
Prerequisites
A configuration package file has been exported. This example uses a file named newconfiguration.xml.
Access to the Config Slicer tool on the destination server has been granted.
There are no in-progress steps for any of the protocols that are going to be sliced in; otherwise, the import of the protocol fails.
Step 1: Import Configuration Package File
On the destination server, copy and paste the following code to the command line. Edit the variables (version, package file name, server IP address, username, and password) as required.
A command with the variables filled in might look like this:
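As an illustrative sketch, using the package file name from this example and placeholder server details:
java -jar config-slicer-<version>.jar -o import -s clarity.example.com -u <username> -p <password> -k newconfiguration.xml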
About duplicate entities
If any of the configuration entities that are about to be imported already exist in the destination system, Config Slicer either logs a warning or attempts to update them, depending on the mode being run (see #import-export-mode).
If Clarity LIMS has maintained an internal record of deleted items, the previous information may also apply to deleted entities. This situation may occur if those entities have created outputs that currently still exist in the system.
When running in import mode, entities that exist and are different from the version in the package have a warning and full diff logged.
When running in importAndOverwrite mode, Config Slicer attempts to update entities that exist and are different from the version in the package.
In this scenario, a backup configuration package containing copies of the updated entities (as they were before the change) is saved to the directory where the configuration package is located. If that directory is not writable, the backup package is saved to the same directory as the log file.
If the version in the package is identical to the version on the server, no errors are logged and Config Slicer considers that entity successfully imported.
To avoid changing existing configuration (which could break historical data), another option is to manually rename the old entities: add an extension or a prefix, then continue with importing the new configuration package.
Step 2: Validate the import
Use the following methods to validate whether an import has completed successfully:
Check the Import Log:
For each specific type of configuration that is being imported (e.g. container types, process types, workflows), Config Slicer will log a set of messages. The messages look similar to the following examples:
Before it begins to process a specific entity type, the log file records how many entities were found. Any errors or warnings about this set of entities always appear between the Found 4 $Entities line and the Summary of $Entities line.
Every entity that is found in the configuration package always appears in the summary, in one category or another. If a scenario occurs where this isn't true, or where the initial count of entities does not match the number in the summary, something has gone wrong and a bug report should be filed.
Validate with Config Slicer:
Running Config Slicer with the validate operation checks every entity in the package to see if it exists on the destination server. It reports results in a format similar to the log format shown previously.
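As an illustrative sketch of the invocation (the server details and package file name are placeholders):
java -jar config-slicer-<version>.jar -o validate -s clarity.example.com -u <username> -p <password> -k newconfiguration.xml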
Run the validate operation before or after importing:
Before importing—checks whether any problems could occur when importing a configuration from a package. This is the primary use of the validate operation; note that "configuration exists in package but not on server" is not considered an error case during import.
After importing—makes sure that the results are what was expected.
Example of validate output:
Check Configuration on the Destination Server:
The ultimate test of whether configuration has imported successfully is to check the configuration on the destination server itself. Make sure it looks and behaves as expected.
Configuration can be checked either via the Configuration screen in the Clarity LIMS user interface, or via the configuration endpoints in the API.
Top-level entities
To be included in a configuration package file, the top-level entities of the custom configuration set must be explicitly enumerated.
Some of the top-level entities are discrete 'self-contained' units, and do not include other units (for example, container types, reagent types, artifact groups, and non-artifact/non-process type UDFs).
Some top-level entities (process types, for example) automatically include other units (refer to #non-top-level-entities for more information).
For process types, only configured processes, vanilla Transfer processes, Pool Samples (since 7.5), and Add Multiple Reagents (since 7.6) processes are supported. All process type details are exportable/importable.
Non-Top-Level Entities
Some entities are only included as part of other entities. For example, process templates, process type UDFs, and artifact UDFs are only included when included in a top-level process type. (The latter is a special case, given that the same artifact UDF can be used by multiple process types.)
Required Entities
Some entities may be required by other entities. In these cases, make sure that these entities are exported/imported in the correct order. For example, because process types may require the existence of a container type, create the container type first.
Required entities are not automatically included. If they do not exist in the destination system, explicitly include them in the manifest file. For example, suppose that a process type declares a particular container type as a default output plate. If that container type does not exist in the destination system, include that container type in the manifest file.
Import modes affect the transactional behaviour of the tool, allowing it to make incremental changes when errors occur or to provide an all-or-none option. For example, use the validate operation to determine whether errors would be encountered.
Best-Effort Mode
This is the default import mode.
This mode attempts to import as many units as possible. Any failures are logged, but the import operation is not interrupted.
For example, a failing container type import will not prevent other container types from being processed.
Similarly, if a process type fails import because it already exists, any UDFs and process templates for that process type will still be processed.
There is no need to enter an option for this mode.
Strict Mode
If this mode is used, the import operation is aborted if it encounters a failure.
For example, if there is an API version mismatch, the operation will abort and no further imports will be executed. Note that any changes already performed are not reverted.
Use the -Strict option to enable this import mode.
Validate Mode
Use the validate operation (instead of import) to enable this mode.
This mode produces a report listing the following items:
Entities that would be successfully imported because they do not exist on the target server.
Entities that already exist on the target server and are identical.
Entities that already exist on the target server but are different.
Validate mode can only detect a limited set of errors. For example, it can check if a particular piece of configuration already exists. If so, it checks if it is identical to the one included in the configuration package.
This information can help determine if the importAndOverwrite option is needed instead. For details, see #command-line-options-and-usage.
Example of console output:
Example Mode
Use this mode to generate a manifest file if the configuration you want to export is not tied to a specific set of workflows, protocols, or process types.
Use the example operation (instead of import) to enable this mode.
All Clarity LIMS installations include a separate component known as an Automation Worker (AW) node (formerly known as an Automated Informatics (AI) node).
When writing automation-based triggers, the code invoked by an automation runs on the AW node.
While the AW node is a critical component, it typically does not require much attention. However, there are several options to consider, including how many AW nodes a Clarity LIMS system uses and where they are placed. This section discusses some of these options.
The AW node is installed adjacent to the Clarity LIMS application. Its original purpose was to enable remote computing.
To illustrate these features, assume that a need requires Clarity LIMS to produce a file in a specific location. The file is then processed and an action occurs.
A good example is label printing via the BarTender application. The BarTender application picks up the new file, associates it with a printer and a template, and causes a label to be printed.
The infrastructure for this file storage and processing likely occurs on a separate server from the one that contains the Clarity LIMS application and the AW node.
The Clarity LIMS application invokes the command to create the file on the AW node. After the file is in the file store, it is processed by the file processing application.
How does the AW node get the file to the file store, even if it is on a different server and possibly on a different network? The solution is to install an AW node on the external server.
The addition of the second AW node, which is local to the external server but remote as far as Clarity LIMS is concerned, provides a solution to the problem and demonstrates how AW nodes can support remote computing.
Clarity LIMS invokes the production of the file via the remote AW node.
The remote AW node copies the file to the local file store.
The file processing application processes the file.
An AW node is a Java-based application and can run on most PCs/servers.
The AW node and the Clarity LIMS application must be able to communicate through networking firewalls.
When the Clarity LIMS application has the choice of sending the task to multiple AW nodes, the channel name property is used to determine which one receives the job. For example, the AW node installed on the Clarity LIMS server has the default channel name of limsserver. This is why you must specify the limsserver value when defining an automation command.
When defining the automation, the following two items are defined:
Which command should be run by the AW node.
Which AW node should receive the job via the channel name property.
Typically, for the AW node that runs on the Clarity LIMS server, the convention is to place scripts in the /opt/gls/clarity/customextensions
folder. The log file is stored in /opt/gls/clarity/automation_worker/node/logs/
.
For the remote AW node, store the scripts in any folder, and choose where its log file gets stored (it is running on your hardware).
For the external AW node to run Clarity LIMS toolkits, such as the Lab Logic or Lab Instrument toolkit, make copies of the JAR files that contain these toolkits. Place them on the external server, so the AW node can access them.
There is no real limit to how many AW nodes you can have. Place them wherever they are needed.
Consider the AW node that is installed on the Clarity LIMS server for a cloud-based implementation.
Although cloud-based Clarity LIMS servers contain an AW node, the best practice is not to run your code on it.
Why not? It is a question of security policies for both Illumina and our cloud-hosting provider. If we provide customers with command-line access to the AW node, we are allowing them command-line access to the Clarity LIMS server and, as a consequence, the Clarity LIMS application itself. If using Clarity LIMS in a clinical environment, this makes it more difficult to pass security and access audits.
If the Clarity LIMS instance is cloud-hosted, and you need to run custom code via an AW node, there is a solution.
You could install an AW node on your local architecture and have it interact seamlessly with Clarity LIMS, as illustrated in Figure 1. You can control access to the remote AW node, the infrastructure of the Clarity LIMS server is safely hidden behind firewalls, and security policies remain intact.
However, for some customers, part of the attraction of a cloud-based system is not having to maintain mission-critical hardware. To address this, we can offer an external AW node that does not live on local hardware but is also in the cloud.
Thus, we can provide an external AW node that lives on a separate machine, known as the Automation Worker host. This Software-as-a-Service (SaaS) model gives access to only those parts of the system that require it. You can access the Automation Worker host, and its AW node can interact with Clarity LIMS. However, you cannot access the Clarity LIMS server.
Because the AW node is running on a separate machine to the Clarity LIMS server, it needs its own copy of the toolkit JAR files. For some, this feature has an additional bonus in that the Automation Worker host hardware supports Python 3 for scripting.
For customers who are cloud-based and need a true local AW node, this is not a problem. They can have an AW node in the Automation Worker host and as many local AW nodes as needed. Place them wherever they are needed.
In Clarity LIMS v5.4 and later, you can install the Automation Worker Node onto the Windows server. Before you begin the installation, make sure that you have met the following requirements:
The clarity-aiinstaller-x-deployment-bundle.zip file must be retrieved from the server where Clarity LIMS is installed. You can find this file at /opt/gls/clarity/config/.templates/automation_worker/.
For Windows 10 users, the command prompt must be specified in the Automation command line (eg, cmd.exe /c echo 'Hello World').
If VISTA-SETUP.bat does not display in the installation window after Run as Administrator is selected, start the installation window as follows.
Launch the command prompt as an Administrator.
Change to the directory where VISTA-SETUP.bat is located with cd C:\<DIRECTORY>.
Execute the java -jar .\GLSAutomatedInformatics-Installer.jar command.
Install the Automation Worker Node as follows.
Copy the SecretUtil deployment bundle ZIP file to the remote Automation Worker node.
Extract the contents of the ZIP file to a folder named secretutil. You can add this folder to C:\opt\gls\clarity\tools or another location you choose.
Edit the vault.properties file in the conf folder to update application.mode to file.
Make sure that the following System Environment Variables are set:
CLARITYSECRET_HOME (eg, C:\opt\gls\clarity\tools\secretutil)
CLARITYSECRET_ENCRYPTION_KEY (minimum 24 characters)
Using secretutil.jar, set the required secrets. For a basic installation of AutomationWorker, you must set the passwords for apiuser and glsftp using the following commands:
# For glsftp
java -jar C:\opt\gls\clarity\tools\secretutil\secretutil.jar -u=<secret> app.ftp.password
# For apiuser
java -jar C:\opt\gls\clarity\tools\secretutil\secretutil.jar -u=<secret> -n=integration apiusers\<username of the API user, e.g. apiuser>
After setting the secrets, attempt to retrieve them with the following command:
java -jar C:\opt\gls\clarity\tools\secretutil\secretutil.jar app.ftp.password
Restart the Automation Worker service.
Use the following steps to help troubleshoot the installed Automation Worker framework. A flowchart is provided as a reference.
1.1 Checking the connection
The first step is to check the connection between the Clarity LIMS server and the Automation Worker node.
Use the -n option of the ai-monitor.jar tool to see if the Clarity LIMS server is currently able to communicate with the AI node.
To check the status of ai-monitor.jar:
As the glsjboss user, open an SSH session to the Clarity LIMS server.
Run the following command:
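As a rough sketch only (the location of ai-monitor.jar varies by installation, so the working directory is an assumption):
# Run from the directory containing ai-monitor.jar
java -jar ai-monitor.jar -n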
If the Clarity LIMS server cannot connect to any of the AI nodes, the response will be as follows:
In this scenario, proceed to Step 2. Verify Windows Service or Linux Daemon.
If the Clarity LIMS server can connect to the Automation Worker nodes, the response will resemble the following:
Determine if the Windows service or Linux daemon for the Automation Worker is running.
2.1 Starting and stopping the Windows service / Linux daemon
To start, stop, or restart the Windows service:
From the Start menu, select Run.
In the Open text field, type ‘services.msc’ and select OK.
In the Services dialog, locate the Automation Worker service.
Right-click the service and select Start, Stop, or Restart. If the service is stopped, start the service.
If the service is running, stop and start it again.
Wait for a minimum of three minutes, and then check if the AI node is communicating with the Clarity LIMS server by running the ai-manager.sh script with the status argument, as described in Step 1.
To start or stop the Automation Worker Linux daemon:
To verify current status:
To restart a running daemon:
To stop a running daemon:
To start a stopped daemon:
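As a combined sketch of these four commands, assuming the daemon is registered under /etc/init.d as automation_worker (the exact name may differ, as described below):
# Verify current status
/etc/init.d/automation_worker status
# Restart a running daemon
/etc/init.d/automation_worker restart
# Stop a running daemon
/etc/init.d/automation_worker stop
# Start a stopped daemon
/etc/init.d/automation_worker start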
Once Started/Restarted:
Wait for a minimum of three minutes, and then check if the AI node is communicating with the Clarity LIMS server by running the ai-manager.sh script with the status argument, as described in Step 1.
If the daemon is not recognized, list out the contents of the /etc/init.d directory and determine the exact name of the Automation Worker daemon.
The name typically contains 'automation_worker', but may vary—particularly if there is more than one daemon on the same Linux server, or if the Automation Worker is installed on a server other than the Clarity LIMS application server.
3. Automation Worker Log Files
Automation Worker creates history and log files and stores them on laboratory computers in the logs folder of the Automation Worker installation directory.
3.1 Reviewing Automation Worker log files
After performing the steps described above, reviewing these log files may help to determine the cause of the issue.
For details on the Automation Worker log files, and instructions on how to view them, refer to Clarity LIMS Log Files.
3.2 Turning on debug logging
After reviewing the log files, if the cause of the issue is not evident, the next stage is to turn on debug logging. This outputs DEBUG messages to the log files.
Contact the Clarity LIMS support team for instructions on turning on DEBUG mode.
Review the log files to determine if the DEBUG messages help to find resolution.
After turning on debug logging, ensure that you restart the Windows service or Linux daemon.
This topic provides guidelines and tips to help you compile a sample list, in Microsoft Excel format, for importing into BaseSpace Clarity LIMS.
Import up to 3456 samples from a single sample list file. To import more than 3456 samples, divide the samples into multiple files.
If an error is detected in the spreadsheet, the import process aborts. No sample imports until the error condition is resolved. See .
An asterisk (*) indicates a mandatory field.
Regular font, without an asterisk, indicates an optional field. See for details.
Text enclosed in angle brackets indicates a placeholder custom field name to replace with a value.
Italicized text indicates a group of fields that depend on each other. All headers in the group must be either all present or all absent.
The following column headers can be used in the sample list:
*Sample/Name—Specify the name of the sample. If the system has the unique sample name option enabled and there are duplicate sample names in the spreadsheet, expect an error message. The message provides information on this error condition. No sample is imported until duplicate names are resolved.
Sample/Volume—Specify the volume of the sample.
Container/Type—Specify the name of the container type to use for this sample. When specifying a value for this column, verify it exists in the system already. For instance, you can specify 100 well MALDI plate as a value, provided this container type is configured in the system.
Container/Name—Specify the name of the container to place this sample. If the name does not match a container already in the system, a new container by this name (and of the specified type) is created.
Sample/Well Location—Specify the well location for the sample. Values for this column are formatted like the following examples: A:1, B:12, or 1:10.
The import process validates this location against the container type specified. If the location is out of range, the process rejects it. If placing a sample into an existing container, the process checks if the well location is already occupied and rejects it if occupied.
Sample/Reagent Label—Specify the reagent label name to use for this sample. Values for this column are optional. They can exist in the system or, if the reagent label is not found, a new one is created. Only one reagent label is supported per sample via batch sample import.
Custom Field/<Name of Custom Field>—Add a custom field instance to the sample. The name of the custom field must exist in the system. If the custom field name specified is not in the system, an error message displays.
You can specify a value for this custom field in the remaining cells of this column. For example, if there is a custom field by the name Clinical Source, you may have a column header Custom Field/Clinical Source and a value of hospital for the sample. See for details.
Container/Custom Field/<Name of Custom Field>—Add a custom field instance to container. This field functions in a similar way to the sample custom field previously described, except the values must be defined on each row (per container). If not, an error message displays.
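For illustration, a minimal sample list might combine these headers as follows (the sample names, container details, and Clinical Source custom field values are placeholders):
Sample/Name | Container/Type | Container/Name | Sample/Well Location | Custom Field/Clinical Source |
---|---|---|---|---|
Sample-001 | 96 well plate | Plate-01 | A:1 | hospital |
Sample-002 | 96 well plate | Plate-01 | A:2 | hospital |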
Whether a custom field column is optional in the spreadsheet depends on whether that particular field is defined as optional or required in Clarity LIMS.
If there is a required custom field for a sample, there must be a column in the spreadsheet for this field. A value must be specified for it, before any sample can be imported.
Custom field values are validated against their type defined in Clarity LIMS, as follows.
Date type—For example, if there is a custom field type of Date and the name is Completion Date, the best practice method is to have a column header Custom Field/Completion Date. The remaining values for the cells in the file should be formatted using one of the Excel date formats.
Numeric type with a range defined—The value is first validated as a number. Then, the number is validated against the range defined in Clarity LIMS.
If the validation fails, error messages display.
No sample is imported until the value is validated.
Custom field with values defined in a group of defaults—If a custom field is configured to have one or more values defined in a group of defaults and is only allowed to have one of these defined values, Clarity LIMS validates the value entered against the defined values.
If the value in the spreadsheet does not match one of the default values, validation fails.
If the validation fails, an error message displays. No sample is imported.
If Clarity LIMS expects certain information to be provided and this information is not included in the sample list, the samples are not imported.
Consider the following examples:
If an option or field is mandatory, the sample list must contain a column to capture that information. The sample list must use the appropriate column title.
If the samples must contain unique names, the sample list must not contain duplicate names or the names of samples already recorded in Clarity LIMS.
Containers types referenced in the sample list must already be defined in Clarity LIMS. For example, 96-well plate or tube.
The sample list must not place a sample into a container well that is already populated with another sample.
Any custom fields referenced in the sample list must be defined in Clarity LIMS.
The Projects section of the Clarity LIMS documentation discusses how to create and work with projects.
Clarity LIMS uses projects as the basis for all work performed in the system. All samples must be added to an existing project.
A project stores the following information:
The client and account associated with the project.
The priority of the project (Low, Standard, High).
The samples submitted to the project.
Project status (Pending, Open, Closed).
The date the project was opened and closed.
Files associated with the project.
Any configured custom fields.
Before adding samples to Clarity LIMS, create a project to store them.
By default, you can create projects. However, this role-based permission is configurable. For details, see .
On the Projects and Samples tab, select New Project.
On the Properties tab, in the Project Details section, complete the following tasks:
Type a descriptive name for the project.
If creating a new account, type the name directly into the field. Otherwise, select an existing account from the drop-down list.
If creating a new client, type the name directly into the field. Otherwise, select an existing client from the drop-down list.
By default, the project opened date is set to the current date. To change this date, select the Opened field and select a date from the calendar.
If necessary, edit the project priority. The default is Standard.
On the Custom Fields tab, complete the additional details for this project. Mandatory fields are indicated with yellow shading.
[Optional] To upload a file to the project:
Select the Files tab, then select Upload File.
Select Choose File, browse for and select the file, and select Upload.
Select Save. The new project displays in the Projects list. Samples can now be added.
To view, modify, or delete a project, complete the following steps:
Navigate to the Projects and Samples tab.
In the Projects list, select the project. The Project Details screen displays the details for the selected project.
To modify project details, select the field and edit as required (see Project Creation).
Select Save.
To delete the project, select Delete.
Before deleting a project, consider the following details:
Deleting a project also deletes any samples it contains.
By default, projects can be deleted provided no work has been recorded (or is in progress) on the samples. However, this role-based permission is configurable.
If the samples contained in a project have recorded or in-progress protocol steps, the project cannot be deleted without special user permissions.
To view and update the project status, complete the following steps:
Navigate to the Projects and Samples tab.
In the Projects list, select a project.
On the Properties tab in the Project Details area, the Status slider indicates the status of the project.
To move the slider and change the project status, select the desired status.
To view and modify custom fields, complete the following steps:
Navigate to the Projects and Samples tab.
In the Projects list, select a project.
In the Project Details area, navigate to the Custom Fields tab.
Select Save.
To download, view, and upload project files, complete the following steps:
Navigate to the Projects and Samples tab.
In the Projects list, select a project.
In the Project Details area, navigate to the Files tab. Files currently associated with the project are displayed.
To download and view a project file, select the file.
To upload a file:
a. Select Upload File.
b. Select Choose File.
c. Browse for the file, select it, and select Upload.
Projects and Samples—Allows for project and sample management. Use this screen to create and manage projects, samples, and workflow assignments.
Samples must be assigned to an active workflow before they can be accessed and worked on from Lab View.
On the main menu, navigate to the Projects and Samples tab.
In the Projects list, select the project containing the desired samples.
In the Samples and Workflow Assignment area, the Submitted Samples and Derived Samples tabs list all the samples included in the project. Select the tab that lists the desired samples. If no work has been performed on the samples, no derived samples are listed.
In the chosen samples list, select the samples to be assigned to a workflow. Use the following methods to select or deselect samples:
Select a group to select all samples in the group (the button label changes to Deselect Group).
Expand the group and select samples to add them individually. Use Shift + select to select multiple adjacent samples or Ctrl + select to select multiple nonadjacent samples.
To deselect samples, select them or select Deselect Group to deselect all samples in the group.
When finished selecting samples, select Assign To Workflow and select the desired workflow from the drop-down list.
The list displays all the workflows that are currently active in the system.
In the samples list, a label displays showing the samples that are now assigned to a workflow. Select the X inside a label to remove that sample from the workflow.
In the Workflows area on the right, the number of samples assigned to the workflow displays. Select the X in the upper-right corner to remove all samples from the workflow.
Repeat steps 3 to 6 to assign other samples to workflows, as required.
Assigned samples now display in Lab View, in the Available Work area, listed under the first protocol step of the selected workflow.
To locate samples in the queue quickly, filter on the sample name.
By default, anyone signed in can remove or unassign samples from a workflow, provided no work has been recorded (or is in progress) in that workflow for those samples. However, this role-based permission is configurable. If the samples have recorded protocol steps or are in progress in the workflow, they cannot be removed from it without special user permissions. For details, see .
When you add samples to Clarity LIMS, you must add them to a project. Clarity LIMS uses projects as the basis for all work performed in the system.
There are two ways to add samples to projects:
Add samples individually to one or more projects. See .
Upload a sample list (Excel spreadsheet) to a project. See .
The Sample Management screen allows for convenient sample accessioning. On this screen, add multiple samples to a single project or to multiple projects, and modify samples already added to the system.
There are two ways to access the Sample Management screen:
The main menu bar, which allows for adding samples to one project or to multiple projects.
The Project and Samples tab, which allows for adding samples to a project or modifying samples already added to a project.
On the main menu bar, hover over the Projects and Samples tab and select Add Samples when the option displays.
In the Sample Management screen, select a project from the Project Name drop-down list. This selection auto-populates the fields in the Project Details area.
Edit the project details if necessary.
To upload files to the project, select Upload File.
In the Sample Details area, provide the following information:
Enter the name of the sample.
Choose a container from the drop-down list.
Enter the container name.
Complete any other applicable fields (mandatory fields are marked with an asterisk).
NOTE: LIMS ID (Submitted Sample), Date Submitted, and LIMS ID (Container) fields are automatically populated after the sample is saved.
To upload sample files, select Upload File at the bottom of the Sample Details section.
To add another sample, select Sample +. Add this sample to the same project or select a different project from the Project Name drop-down list.
Repeat steps 3 to 5 as required.
When all samples are added, select Submit Samples. Clarity LIMS validates, saves the samples, and returns to the Project and Samples tab.
On the Project and Samples tab, the projects with recently added samples are automatically selected.
While adding samples, note the following details:
Copy values across to adjacent samples by selecting the arrows to the left of the fields. Clarity LIMS automatically populates the Well field.
If editing well information, make those changes last (before submitting the samples). Changes to Container, Container Name, LIMS ID (Container), and Sample Name may reorder the well locations.
Use the paging buttons to scroll pages of samples.
With this method, samples can only be added to the selected project. It is not possible to select a different project, as described in the previous section of the documentation.
On the Project and Samples tab, select a project to add samples.
In the Submit Samples section, select Add Samples.
On the Sample Management screen, because the project has already been specified, the project details are automatically completed.
Follow steps 3 to 7 of the previous section to add the sample details.
On the Project and Samples tab, select a project to add samples.
In the samples list, select the submitted samples to modify (derived samples cannot be modified).
Select Modify Samples.
Modify the sample and project details as required (the Project Name or Container Name fields cannot be modified).
To save the changes, select Submit Samples.
Clarity LIMS validates the modifications, saves the samples, and returns to the Project and Samples tab. On the Project and Samples tab, the projects containing the modified samples are automatically selected.
On the Sample Management screen, hovering over a sample displays a small Remove from view (X) button in the upper-right corner.
The effects of selecting this button differ depending on the circumstances.
If there are many samples to process, add them to the system by uploading a Microsoft Excel spreadsheet file (*.xls or *.xlsx). Use the same method to modify information for multiple samples.
By default, a maximum of 3456 samples can be uploaded from a single sample list file. To upload more than 3456 samples, divide the samples into multiple files.
Navigate to the Projects and Samples tab.
In the Projects list, select a project to add samples. The Project Details area updates to show the details for the selected project.
Select Upload Sample List.
In the Upload File dialog, select Choose File and browse to and open the sample list file.
Select Upload File.
As part of the upload process, Clarity LIMS validates the file to make sure the custom field data it contains meet the requirements, presets, and restrictions that apply to submitted samples. If the file contains invalid data, an error message displays.
When the upload process completes, the samples display in the Submitted Samples list for the project.
The Submitted Samples list allows the following actions:
Hover over the Information icon for a sample to view the details associated with it.
Modify sample details.
Add samples to a workflow.
If samples have been created in error, delete them from the sample list and the project (provided no work has been done on them). To complete this action, select the samples and select Delete.
After uploading, the submitted samples can be assigned to a workflow. When they are assigned, the samples are available for lab scientists to work on.
Navigate to the Projects and Samples tab.
In the Projects list, select the project containing the samples to modify.
Select Modify Samples.
Clarity LIMS generates a sample list containing all samples in the project and downloads it.
Open the file in Excel. It contains the Clarity LIMS IDs of all samples and all custom field data.
Update the sample information as required.
A sample list can be uploaded/imported in which custom field values have been changed, removed, or added.
While sample custom fields can only be updated, the sample list can contain other columns of data. The original data from the sample list does not have to be removed.
If the system does not require the custom field, leave the cell blank or enter NULL.
A ‘blank’ value will leave existing data in the system intact.
A NULL value will clear any existing data in the system for that field.
Clarity LIMS uses the information defined in the sample list (sample name, container type, and so on) and looks for matching samples in the project. If a matching sample is found, the system updates the sample with the values specified in the sample list.
This section describes how to add a large number of samples to Clarity LIMS by importing a sample list, a Microsoft Excel spreadsheet file (*.xls or *.xlsx). The same method also applies to updating sample information.
The sample list must be in *.xls or *.xlsx format.
The sample list's column header names must match the default fields in the LIMS.
The following column header names cannot be changed: Sample/Name, Container/Name, and Sample/Well Location.
The following columns must be populated: Sample/Name and any sample-level custom fields that the system administrator requires.
By default, import up to a maximum of 3456 samples from a single sample list file. To import more than 3456 samples, divide the samples into multiple files.
Download a sample list template from the Projects and Samples view in Clarity LIMS.
Open the sample list template.
By default, the template contains the following information:
<TABLE HEADER> and <SAMPLE ENTRIES> tags (red/purple text).
These identifying tags are required by the LIMS import process. Do not edit these tags.
Column headers (white text on blue background)
These headers must reference the names of the fields containing the information to capture for a sample. If editing the column headers or creating additional headers, make corresponding changes to the fields in Clarity LIMS. See .
Populate the columns with the information associated with the samples. Enter the data into the rows between the <TABLE HEADER> and <SAMPLE ENTRIES> tags. Insert additional rows as needed.
Save the file and import it into a project (see ).
If the sample list specifies a container name that does not exist in Clarity LIMS, the system creates the container.
Enter dates using Excel date cell formatting.
To preserve currency characters (e.g. $), currency is best entered as a string (rather than using the Excel currency category).
Numbers can be entered either as numeric or string values.
If there are drop-down lists of values in Clarity LIMS, enter these exact values in the sample list.
Container well locations are always Row:Column. The actual dimensions depend on the container type configuration.
Excel may sometimes automatically alter values, depending on the type of data being entered.
For example, for Boolean fields such as Stored On Site? below, numeric values of zero and false will evaluate to FALSE whereas non-zero numeric values and true will evaluate to TRUE. Other values will result in an error on import.
Spreadsheet programs like Microsoft Excel contain features for increasing usability and speed when entering data. For example, the following configurations are available:
Add drop-down lists of options that correspond to options available in Clarity LIMS. Use the Named Range and Data Validation Excel features.
Hide header columns that are required by the system but not needed for data entry.
The Clarity LIMS support team can create custom, efficient sample list templates.
Search for samples—Select a submitted or derived sample name to search for that sample (see also ).
For details on configuring an automation, see .
For information on role-based permissions, see .
Select a field to modify and edit as required (see ).
See on how to configure Project Automation.
To view a subset of samples, remove selected samples from view. For details, see .
NOTE: To avoid modifying or seeing all the samples selected, remove them from the view. See .
In the sample list, specify container placement and include values for standard and user-defined options and fields (for details, see ).
Samples can be added individually in the Sample Management screen. For details, see .
If a sample list is not readily available, select the Download Example Sample List link to download a sample list template. Open the template file in Excel, populate it with the sample details, and save the file. For details, see .
Return to the LIMS and upload the modified sample list. Follow the steps outlined in .
-a,--apiuri <apiuri>
The BaseSpace Clarity LIMS REST API base URI (ends in "/api") (Either this or --server must be provided)
-k,--package <package file>
File to be imported from or exported to (Required if operation is import, importAndOverwrite*, export, or validate). If the file is not local, a full path is required.
-f,--force <force>*
Force update without prompt when running in importAndOverwrite mode (Optional)
-m,--manifest <manifest>
Manifest file (Required if operation is export or example). If the file is not local, a full path is required.
-o,--operation <operation>
The operation mode for the Config Slicer tool.
Options are import, export, validate, example, importAndOverwrite, and custom (Required)
-p,--password <password>
The BaseSpace Clarity LIMS REST API password; if encrypted, use "ENC(<encrypted-password>)" (Required)
-pr,--protocols <protocols>
The protocols to include in the custom manifest (Optional)
-pt,--processTypes <processTypes>
The process types to include in the custom manifest (Optional)
-s,--server <server>
The BaseSpace Clarity LIMS REST API server (either this or --apiuri must be provided)
-S,--Strict
Strict mode for import (fail fast - default mode is best-effort) (Optional)
-u,--username <username>
The BaseSpace Clarity LIMS REST API username (Required)
-w,--workflows <workflows>
The workflows to include in the custom manifest (Optional)
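For reference, a typical Config Slicer session might look like the following. This is a hypothetical sketch: the jar file name, server, credentials, and file names are placeholders, and the exact invocation (direct jar or wrapper script) depends on your installation. The options used are described above.
java -jar config-slicer-<version>.jar -o example -s yourserver.example.com -u admin -p yourpassword -m manifest.txt
java -jar config-slicer-<version>.jar -o export -s yourserver.example.com -u admin -p yourpassword -m manifest.txt -k mypackage.xml
java -jar config-slicer-<version>.jar -o import -a https://otherserver.example.com/api -u admin -p yourpassword -k mypackage.xml -S
The first command generates an example manifest, the second exports the configuration listed in the manifest to a package file, and the third imports that package on another server in strict mode.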
Custom Field/Sample Material | Custom Field/Stored On Site? | Custom Field/Sample Location | Custom Field/Date Received |
Tissue | 1 (TRUE) | Freezer #1 | 06/01/2014 |
Serum | true (TRUE) | Freezer #1 | 06/01/2014 |
Tissue | 0 (FALSE) | Fridge #2 | 07/15/2014 |
Serum | false (FALSE) | Fridge #2 | 07/15/2014 |
Tissue | yes (ERROR) | Fridge #2 | 07/15/2014 |
When working on a step, you can create multiple aliquots of each sample, move one aliquot to the next step in the workflow, and store the others for later use.
Add samples to the Ice Bucket.
In the Ice Bucket, create the required number of sample aliquots.
To create the same number of aliquots for all samples, select the number of derivatives to create and select Apply All. (The number adjacent to each sample updates automatically.)
To create a different number of aliquots for each individual sample, adjust the number adjacent to each sample.
In the Assign Next Steps screen, all the sample aliquots are listed.
To store an aliquot for later use, select Store for later from the next step drop-down list.
In the Search drop-down list, select Sample, type the sample name, and press the Enter key.
In the search results, all the sample aliquots are listed.
Select a sample aliquot to expand the details.
A prompt appears to confirm this step.
Select Confirm.
After it is confirmed, the search results update and the sample aliquot is shown as queued for the next step in the workflow.
In Lab View, the sample aliquot is now ready to be worked on.
To requeue and rework samples, specific role-based permissions are required. For details, see Configured Role-Based Permissions.
Occasionally, a sample must be rerun through a particular step. For example, there may have been a technical error in the lab. More sequencing may be needed at the end of a workflow if there are not enough samples.
To solve this problem, return samples to the queue and repeat the step.
There are several ways to requeue samples:
Search for the step with samples to requeue, view all samples that have completed this step, and choose the ones to requeue. See #requeue-samples-from-a-completed-protocol-step
Search for a specific sample to requeue. See #requeue-a-specific-sample
In addition to requeueing samples for the same step, you can also rework samples from a previous step. For example, an action is needed if there is an insufficient quantity of a particular sample to meet the required target concentration level.
If a sample has been flagged for manager review, the manager can select Rework from an earlier step directly from the Review Samples screen.
Consumables are the instruments, reagents, containers, and other equipment used in the lab. Configure these items in Clarity LIMS and associate them with specific master steps. When the steps are run in the LIMS, the consumables used are recorded.
This section describes how to add and configure the reagent kits and lots used in your lab, and enable them for use on specific master steps.
Add the reagent kits and lots used in the lab to Clarity LIMS and enable them for use on specific steps. When lab scientists run samples through a step, they can record the reagent lots used.
All users logged into Clarity LIMS can access the Reagents configuration screen. However, their user permissions determine what they are allowed to do in this screen.
By default, only the Administrator role has the Configuration:update permission. For more on user roles and permissions, see User Roles and Configured Role-Based Permissions.
Enabling kits on steps makes them available for use in the lab. When running those steps, the reagent lots used are recorded.
When a new reagent kit is added, it is not a requirement to enable it on a step. It can be enabled at any time.
Reagent kit/step configuration is bidirectional. Enable a reagent kit on a step in the following situations:
When adding reagents on the Reagents configuration screen (described in this section).
When configuring a master step or step. For details, see #add-and-configure-master-steps-and-steps.
NOTE: The Configuration:update permission is required to add new reagent kits to Clarity LIMS.
On the main menu, select Configuration.
On the configuration screen, select the Consumables tab, then select Reagents.
On the Reagents configuration screen, select New Reagent Kit.
Type a name for the reagent kit. This name displays in queues of steps on which use of the reagent kit is enabled.
Enter details for the reagent kit to help with future reordering:
Supplier: Enter the commercial vendor name.
Cat. #: Enter the catalog number.
Website: Enter the website of the commercial vendor. When viewing details for the reagent kit, select the link to open the web page in a new browser window.
In the Reagent Kit Use section, the status of the new kit is set to Active. This status means that it is available to be used in the lab (after it is enabled on steps).
Select Save to add the new reagent kit.
The new kit displays in the Reagent Kits list.
On the Reagents configuration screen, in the Reagent Kits list, select the kit to enable on steps.
In the Reagent Kit Use area on the right, complete the following actions:
Select a protocol from the drop-down list. Type the first few letters of the protocol to filter the list.
In the adjacent list, select the step on which to enable the reagent kit.
Enable the kit on additional steps, if necessary.
Select Save. The Reagent Kits list now indicates that the kit has been enabled on the selected steps.
On the Reagents configuration screen in the Reagent Kits list, select the kit for which you would like to add a new lot.
Below the Kit Details in the Lots area, select New Lot.
In the Lot Details area, enter the lot name.
[Optional] Enter additional details about the lot, such as the lot number and expiry date.
Specify a storage location and add notes about the reagent lot. For example, use this field to note why a lot is being archived.
Clarity LIMS automatically populates the LIMS ID and Created and Modified dates.
Select Save.
The new lot displays in the Reagent Kits list.
By default, when a new lot is added, the status is Pending. The Status of Reagent Lot slider at the bottom of the Lot Details area controls the lot status.
The status of a reagent kit may be Active or Archived. The status of a reagent lot may be Pending, Active, or Archived.
Note the following details about reagent kit status:
Active reagent kits are in use, or are available for use, in the lab workflows. By default, when a new kit is added, the status is saved as Active.
Archived reagent kits are kits that are not currently in use, or available for use, in the lab workflows. Configuration details for archived kits are saved, so reactivation is easy.
Archived kits are listed at the bottom of the main Reagent Kits list in the Archived Reagent Kits group.
Note the following details about reagent lot status:
Active reagent lots are in use, or are available for use, in the lab. Lab users can select active reagent lots when they record work for a step.
Pending reagent lots have been ordered but not yet received in the lab. They are not available for selection by lab users running steps in Clarity LIMS. By default, when a new lot is added, the status is saved as Pending.
Archived reagent lots have typically expired or been used up. They are not available for selection by lab users. Note the following information:
When the expiry date for a lot has passed, Clarity LIMS automatically archives the lot.
Archived lots that have passed their expiry date cannot be reactivated.
Archived lots display in an Archived Reagent Lots group within the Reagent Kit details list.
In the Reagent Kits list, select the kit to be archived or reactivated.
In the Reagent Kit Use area on the right, select Archived / Active.
Select Save.
In the Reagent Kits list, select the kit containing the lot to be activated or archived.
At the bottom of the Lot Details area on the Status of Reagent Lot slider, select Active / Archived.
Select Save.
In the Reagent Kits list, select the reagent kit or lot to delete.
In the Kit Details/Lot Details area on the right, select Delete.
When deleting reagent kits and lots, keep the following in mind:
Only reagent kits and lots that have not been used in a step can be deleted.
If a kit or lot has been recorded in a step, or is being used in a step, it cannot be deleted. The Delete button is not enabled.
As lab scientists work with samples in the lab, they may request a manager to review a sample at a certain step in the workflow. When a request for review occurs, an alert notification displays in Lab View, in the Recent Activities area.
In Lab View, in the Recent Activities area, select an alert to go directly to the step containing the sample to be reviewed.
Review the sample, add a comment, and select Finish Review.
NOTE: You can also view and resolve alerts from the Projects Dashboard.
Use the Lab Work configuration screen to model the workflows, protocols, and master steps used in the lab.
To access the Lab Work configuration screen, the Configuration:update permission is required. Without this permission, the Lab Work tab is not visible.
By default, only the Administrator role has the Configuration:update permission. For more on user roles and permissions, see User Roles and Configured Role-Based Permissions.
On the main menu, select Configuration.
Select the Lab Work tab.
The main navigation panel lists the workflows, protocols, steps, and master steps configured in Clarity LIMS. From here, complete the following actions as needed:
View the relationships between workflows, protocols, steps, and master steps.
View workflow, protocol, step, and master step configuration in the form beneath the navigation panel.
See the status of workflows (pending, active, or archived).
Add and modify workflows, protocols, steps, and master steps.
Select a workflow, protocol, step, or master step to view related configuration items in the other lists.
Selecting a protocol highlights the following items:
All workflows that include the selected protocol are highlighted.
All steps in the selected protocol are highlighted.
All master steps from which the steps are derived are highlighted.
Selecting a workflow highlights the following items:
All protocols in the workflow, which display sequentially at the top of the Protocols list.
All steps in those highlighted protocols.
All master steps from which the highlighted steps are derived.
Zoom out in the browser to maximize the number of items visible in the lists. Drag the lower edge of the panel to see more list items.
The best practice method for creating and configuring lab work components in Clarity LIMS is as follows.
Create and configure master steps.
Create and configure protocols.
Create and configure steps, basing them on the master steps and adding them to the appropriate protocols.
Create and configure workflows, adding required protocols.
While these are the recommended steps, you can create protocols first, or create workflows and add the protocols later. However, before creating a step, you must select the protocol in which to add the step, and the master step on which to base its configuration.
When working with workflows, protocols, steps, and master steps, there are some restrictions you should be aware of. These restrictions are summarized below, and are also described in the articles that discuss the configuration details of each component.
The following section also lists the restrictions associated with custom fields and automations.
Custom fields are configured on the Custom Fields configuration screen. Refer to Custom Fields.
Step automations are configured on the Automations configuration screen. Refer to Automations.
Lab View is the main screen in Clarity LIMS.
Lab View shows the protocols and steps used in the lab, and the number of samples queued for each step. Use this screen to do the following:
See recent lab activities.
See in-progress steps and steps that are ready to be worked on.
Start or continue working on samples.
View and follow up on Alert Notifications.
When users run a step in the LIMS, they typically select a destination container type from a preconfigured list in the Ice Bucket screen, and then proceed to the Record Details screen - where they scan the barcode of a new container to add it to the LIMS.
However, users may sometimes want to place samples into an existing container - that is, a container whose barcode has already been entered into the LIMS. This is easily achieved in the Ice Bucket screen.
In Lab View, open the step containing the samples you want to place into an existing container.
Select the samples and add them to the Ice Bucket.
In the Ice Bucket screen, in the Container Options panel:
In the Destination Container drop-down list, select the desired container type.
In the Find Existing Container field, type the name or Clarity LIMS ID of the container in which to place the samples. While typing, Clarity LIMS presents a filtered list of containers with matching names or Clarity LIMS IDs. Select the appropriate container from the list.
Click Begin Work to proceed to the Record Details screen.
Occasionally, you must remove samples from a workflow queue in Clarity LIMS. Only administrator users can perform this operation.
In Clarity LIMS, open the step containing the samples to remove.
Select one or more samples, expand the Options drop-down list and select Remove.
The Options drop-down list is available to administrator users only.
If necessary, complete the following actions:
Search for the samples in Clarity LIMS.
Requeue them for a previous step.
Usually, samples move through the system according to the sequence of protocols and steps defined in a workflow.
However, sometimes samples are moved into the next step manually.
For example, suppose you must delete a step from a particular protocol. If there are samples queued for the step, you cannot delete it. In this case, you can move the samples forward into the next step and proceed with the step deletion (see #configure-protocols).
In Clarity LIMS, open the step containing the samples to move.
Select one or more samples, expand the Options drop-down list and select Move.
The samples move into the subsequent step in the protocol.
In Clarity LIMS, the outcome of the next step selection for sample outputs depends on whether those outputs are derived samples, pools, or derivatives.
The following tables summarize the expected results when choosing next steps for these three output types.
In Clarity LIMS, a workflow is a set of protocols arranged in a sequence that corresponds to the way in which work is performed in the lab. This page explains how to create and configure your workflows.
After protocols are created and configured, add them to workflows that represent how samples move through your lab.
In Clarity LIMS, use the Lab Work configuration screen to view, add, and configure the workflows used in the lab. For an overview of this screen, see Lab Work.
To access the Lab Work tab and configure workflows, the Configuration:update permission is required. By default, only the Administrator role has this permission. For details, see User Roles and Configured Role-Based Permissions.
The Lab Work screen provides an at-a-glance view of all workflows configured in the LIMS, along with the protocols and steps they contain. You can quickly see which workflows are active, which are archived, and which do not yet have protocols assigned to them.
To view workflow details:
On the main menu, select Configuration.
Select the Lab Work tab.
In the Workflows list, select a workflow to highlight it.
The Protocols list updates to show all protocols included in the workflow. These are highlighted and displayed sequentially at the top of the list. A dashed line separates these workflow protocols from the comprehensive list of all protocols in the system.
The Steps list updates, highlighting the steps included in the highlighted protocols.
The Master Steps list updates, highlighting the master steps on which the highlighted steps are based.
Note: You can also select a protocol, step, or master step to view the related workflows.
Below the main navigation panel, review the workflow configuration form. This displays the name of the workflow and its status.
The status of a workflow may be Pending, Active, or Archived. The following table provides an overview of each status setting and describes the implications of each.
The following section shows how to add a workflow to the LIMS and add protocols to it. When configuring workflows, keep the following in mind:
You are not required to add protocols immediately. If you prefer, you can create empty Pending workflows and assign protocols to them later.
You can only activate a Pending workflow if it contains at least one protocol.
When adding protocols to a workflow, reordering protocols within a workflow, or removing protocols from a workflow, your changes are autosaved. You do not have to select Save after every modification.
You cannot add empty protocols to a workflow. The protocol must include steps.
On the Lab Work configuration screen, in the upper-right corner of the Workflow list, select Add.
Below the main navigation panel, the workflow configuration form displays.
To begin, type a name for your new workflow.
Select Save.
The workflow is saved in a Pending state, and displays in the Workflow list of the main navigation panel.
In the Workflow list, select the workflow.
In the Protocol list, locate the first protocol to include and select Add.
The protocol is added to the workflow and displays at the top of the Protocol list. The 1 indicates that this is the first protocol in the workflow.
Repeat step 2 until you have added all required protocols to the workflow.
To remove a protocol from the workflow, select its Remove button.
Drag and drop to reorder protocols within the workflow.
To drag and drop on a mobile or touch-screen device, touch and hold the item you wish to drag. After a moment, the item appears to lift off the page and its color changes to white. You can then drag the item and drop it into its new position.
To view or modify a protocol, select the protocol to display its configuration form below the main navigation pane.
You can now save the workflow as a Pending workflow.
- or -
Select Activate to use this workflow immediately.
NOTE: After you activate a workflow, you cannot modify or delete it.
On the Lab Work configuration screen, in the Workflow list, select the Pending workflow to modify or activate.
Make your changes and select Save to save the workflow as a Pending workflow.
-or-
Select Activate to change the workflow status to Active and begin using the workflow.
When modifying or activating workflows, keep the following in mind:
You cannot activate empty workflows.
You can only modify a workflow while it remains in the Pending state. That is:
You cannot add a protocol to an Active or Archived workflow.
You cannot remove a protocol from an Active or Archived workflow.
You cannot rename or reorder protocols in an Active or Archived workflow.
You cannot delete protocols included in Active or Archived workflows.
While you cannot delete workflows after they have been activated, you can archive them. This makes them temporarily unavailable for use in the lab. You can reactivate an Archived workflow at any time.
To archive a workflow:
On the Lab Work configuration screen, in the Workflow list, select the Active workflow to archive.
In the Workflow Settings area, select Archive. Select Save.
To reactivate an archived workflow:
On the Lab Work configuration screen, in the Workflow list, select the Archived workflow to reactivate.
In the Workflow Settings area, select Activate. Select Save.
On the Lab Work configuration screen, in the Workflow list, select the pending workflow to delete.
On the workflow configuration form, select Delete.
Confirm the workflow deletion:
To proceed with the deletion, select Delete.
To cancel the deletion, select Cancel.
You can only delete a workflow while the workflow remains in the Pending state. You cannot delete Active or Archived workflows.
In Clarity LIMS, a protocol is a set of steps that must be performed in a specific sequence, as part of a lab's workflow. This section explains how to create and configure your lab protocols.
Clarity LIMS includes preconfigured protocols, each containing a series of steps through which a sample must pass. You can create custom protocols, adding steps that represent the steps that are run in your lab. You can then add the protocols to workflows so that lab users can work with them in Lab View.
Use the Lab Work configuration screen to view, add, and configure the protocols used in the lab. For an overview of this screen, see #lab-work-overview
To access the Lab Work tab and configure protocols, the Configuration:update permission is required. By default, only the Administrator role has this permission. For details, see User Roles and Configured Role-Based Permissions.
On the main menu, select Configuration.
Select the Lab Work tab.
The Workflows, Protocols, Steps, and Master Steps navigation panel displays.
In the Protocols list, select a protocol to highlight it:
The Workflows list updates, highlighting the workflows that contain the selected protocol.
The Steps list updates, highlighting the steps included in the selected protocol.
The Master Steps list updates, highlighting the master steps on which the highlighted steps are based.
Below the main navigation panel, review the protocol configuration form.
This displays the name of the protocol and its settings.
The Protocol Settings area captures important information about the protocol—the date it was created, the date it was last modified, and other settings that determine how the protocol is used in the lab. The following table summarizes these settings.
When adding and configuring protocols, note the following details:
When adding steps to a protocol, reordering steps, or removing steps from a protocol, changes are autosaved. You do not need to select Save after every modification.
When configuring next steps (see #configure-next-steps), you must select Save to save your changes.
To add a protocol:
On the Lab Work configuration screen, in the upper-right corner of the Protocols list, select Add.
Below the main navigation panel, the protocol configuration form displays.
To begin, type a name for the new protocol.
Select the settings for this protocol (For details, see #protocol-settings):
Select whether this is a QC or Non-QC protocol.
Select the Protocol Type:
If you are adding a QC protocol, this is automatically set to QC.
If you are adding a Non-QC protocol, select the type from the drop-down list.
In the Capacity field, enter the sample capacity of this protocol.
[Optional] To temporarily hide the protocol from Lab View, use the Show in Lab View? slider. Change the setting to No.
Select Save. The new protocol displays at the bottom of the Protocols list. You can move it to a different position in the list by dragging and dropping.
To drag and drop on a mobile or touch-screen device, touch and hold the item you wish to drag. After a moment, the item appears to lift off the page and its color changes to white. You can then drag the item and drop it into its new position.
You can also copy a protocol and then modify the copy for use in other workflows. See #copy-protocols.
This section provides an overview of the step creation process. For detailed information on steps and master steps, and step-by-step instructions for configuring them, see #add-and-configure-master-steps-and-steps.
To add a step to a protocol:
In the Protocols list, select the protocol.
In the upper-right corner of the Steps list, select Add.
Below the main navigation panel, the step configuration form displays.
Type a name for the new step.
In the adjacent Master Step list, select the master step upon which to base the new step.
Select Save (this button is not enabled until you have selected a master step).
In the Protocols list, select the protocol again.
The step you added displays at the top of the Step list.
If this is a non-QC protocol, a 1 is in front of the step name, indicating that this is the first step in the protocol. (QC steps are not numbered as they are typically not sequential.)
In the Master Step list, the master step upon which the step is based is also highlighted.
Repeat steps 1–5 to add more steps to the protocol.
To delete a step, select it and select Delete.
To reorder steps within the protocol, simply drag and drop them.
To drag and drop on a mobile or touch-screen device, touch and hold the item you wish to drag. After a moment, the item appears to lift off the page and its color changes to white. You can then drag the item and drop it into its new position.
Select Save.
You can now configure the order in which the steps are run, and the method used to assign and run 'next steps.' See #configure-next-steps.
When configuring non-QC protocols, the protocol configuration form includes a Next Steps table. This allows you to configure the sequence in which steps are run in the protocol. This table does not display for QC protocols, because the steps in a QC protocol are typically not sequential.
In the table:
Each row represents a numbered step in the protocol.
Each column represents a 'permitted next step' for each of the numbered steps.
The cells at each row/column intersection indicate which steps are potential permitted next steps for the step represented in that row.
If there is an icon in the cell (an X or a checkmark), the step represented by that column may be selected or deselected as a permitted next step.
Previous and current steps cannot be selected as permitted next steps, and are shown as nonselectable cells.
The bottom two rows determine whether the next steps are started and assigned manually or automatically. Manual is the default setting.
To configure next steps:
In the Next Steps table, select a cell to select (or deselect) one or more permitted next steps.
In the Start Next Step and Assign Next Step rows, select a cell to switch between Manual and Automatic.
To assign a next step automatically, you also need to configure an automation and add it to the step. For details, see #add-and-configure-automations.
When configuring QC protocols, the protocol configuration form includes a QC Filters section. This section lets you configure QC logic to make sure that only certain samples are queued for each QC step. Typically, QC protocols contain multiple nonsequential steps that culminate in a QC aggregation step.
QC filters are composed of two drop-down lists.
The first list refers to the QC flag assigned at run time:
Passed means that a pass QC flag was assigned to the sample at run time.
Failed means that a fail QC flag was assigned to the sample at run time.
Did not pass means that the sample did not run, or received a fail QC flag, at run time.
Did not fail means that the sample did not run, or received a pass QC flag, at run time.
The second list refers to the master steps from which the steps are derived:
All master steps used in the protocol are included in the list. Together, these form a statement (for example, Failed Bioanalyzer).
Each statement may be followed by an 'AND', which allows you to create an additive statement.
Statements are separated by an 'OR', which allows you to create mutually exclusive statements.
Together, these AND/OR statements create the QC filter logic for a given step.
For example:
You may want the NanoDrop QC queue to show samples that have not passed NanoDrop QC (i.e., they did not run, or received a fail QC flag), and that have passed Bioanalyzer QC.
If the procedures dictate that all samples must have passed Bioanalyzer QC and NanoDrop QC, use an 'AND' statement to ensure samples are not queued for a QC aggregation step unless they have passed both of these steps.
If your lab procedures dictate that all samples must have passed Bioanalyzer QC or NanoDrop QC, use an 'OR' statement to ensure samples are not queued for QC aggregation unless they have passed one of these steps.
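Expressed as filter statements (the master step names are illustrative), the three examples above would read:
Did not pass NanoDrop AND Passed Bioanalyzer
Passed Bioanalyzer AND Passed NanoDrop
Passed Bioanalyzer OR Passed NanoDrop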
You may want to rename a protocol, or add or reorder steps. Some modifications are only permitted if the protocol is not included in an active or archived workflow.
NOTE: We recommend that you do not modify or delete the preconfigured protocols without first consulting the Clarity LIMS Support team.
To modify a protocol:
In the Protocols list, select the protocol.
Make your changes and select Save.
Note the following details:
You can rename protocols in pending, active, and archived workflows.
For non-QC protocols, you can modify the protocol type. For example, you can change a Sample Prep protocol to a Library Prep protocol.
You can choose to hide or show the protocol in Lab View.
You cannot change a QC protocol to a non-QC protocol, and vice versa.
You cannot add, reorder, or delete steps if the protocol is included in an active or archived workflow.
To delete a protocol:
In the Protocols list, select the protocol.
On the protocol configuration form, select Delete.
Note the following details:
You cannot delete a protocol if it is included in an active or archived workflow. In this case, the Delete button is not enabled.
If you delete a protocol, the steps it contains, and the master steps on which those steps were based are not deleted.
After you have added and configured a protocol, you can copy it and then modify the copy for use in other workflows. This is useful if you have multiple protocols with similar base configuration, as it saves you having to recreate each one from scratch.
To access the Lab Work tab and configure protocols, the Configuration:update permission is required. By default, only the Administrator role has this permission. For details, see User Roles and Configured Role-Based Permissions.
When you copy a protocol, all of its steps are also copied—along with any step-level fields, automations, reagents, controls, and instruments configured on those steps.
You can also create copies of the master steps, or you can reuse the same master steps.
Copying master steps does copy step-level fields.
Reusing master steps does not copy step-level fields.
If a copied master step has custom field default values that refer to other steps within the protocol, update those values to refer to the copied steps. See #update-custom-field-default-on-copied-master-steps.
On the main menu, select Configuration.
On the LIMS configuration screen, select the Lab Work tab.
In the main navigation panel, in the Protocols list, select the protocol to copy.
Below the navigation panel, select Copy.
The Copy Protocol Options dialog opens. This dialog provides two options:
Append name with—This option lets you specify text to be appended to the protocol name (default is _copy). This text also is appended to the copied step names, and to the master step names if you also choose to copy those. Note: Copied step-level field names do not have the text appended.
Copy Master Steps?—This option lets you choose to reuse the same master steps (this is the default behavior), or create copies of the master steps.
Select an option and then select Continue to copy the protocol and steps.
The copied protocol displays in the Protocols list, and is selected along with its related steps and master steps.
Below the navigation panel, the protocol configuration form displays. You can work with the protocol and its steps just as you would with any other protocol/steps in the system.
If you have configured a custom field default on the master step you are copying, and the default value refers to the name of another step within the protocol, you must update that default value on the copied master step, so that it refers to the appropriate step in the copied protocol. The default values are not automatically updated to refer to the copied step names.
Similarly, if you have configured a script or logic that uses custom field defaults that refer to another step within the protocol, you must update those default values on the copied master step.
For example, in a QC protocol:
The Aggregate QC step has various 'Copy Task' UDFs defined (e.g., Copy Task 1 - Source Step and Copy Task 2 - Source Step).
The values of these fields are determined by other QC steps within the protocol.
The script that is configured on the QC Aggregate step references those QC step names, locates the specified custom field values from the steps, and uses them to determine QC results.
If the QC protocol is copied, the copy of the master step on which the Aggregate QC step is based must be updated so that the custom field default values refer to the appropriate steps within the protocol.
The Steps & Master Steps section of the LIMS Documentation explains how to create and configure steps and master steps in the LIMS.
In Clarity LIMS, steps and master steps are techniques or procedures that are performed on a sample. They are the building blocks of the lab work.
Think of master steps as starting points to create the individual steps that are run in the lab.
The master step <--> step relationship is one to many:
Each step is derived from a master step.
A master step may be used as the foundation for multiple steps.
All steps are derived from a master step and inherit any properties configured on the master step.
If you configure properties at the step level, those properties only apply to that particular step.
To understand how properties set on the master step propagate down to the step level, see #rules-for-propagation-of-master-step-properties.
To access the Lab Work tab and configure workflows, protocols, steps, and master steps, the Configuration:update permission is required. By default, only the Administrator role has this permission. For details, see User Roles and Configured Role-Based Permissions.
In Clarity LIMS, all steps are derived from a master step and inherit any properties configured on the master step.
The rules for how properties set on the master step propagate down to the step level apply to all properties, both those configured on the Master Step Settings configuration form and those configured on the step milestones.
By default, master step properties are not set (values are null). Therefore, by default, the property settings do not propagate down to the derived steps. This means that you can set, or not set, the property freely at the step level.
If you set a property on the master step, that property is locked (a Locked icon displays) at the step level, and cannot be modified.
In some situations, you can add to or reorder a locked property at the step level, but you can never remove the property. For example, on the Step Settings form:
You can add and reorder the column headers that display in the Sample table, even if some of those column headers are set on the master step.
You cannot remove column headers that are set on the master step.
When you add a master step property setting, the setting is also added to all steps derived from that master step.
When you update a master step property setting, the setting is also updated on all steps derived from that master step. This overrides any previous values that had been applied at the Protocol Step level.
When you remove a master step property setting, the setting is also removed from all steps derived from that master step. There are a few exceptions to this rule where appropriate defaults must be applied to keep the step in a valid, workable state.
The following table summarizes what happens at the step level when a property setting is removed from the master step.
For details on configuring step and master step property settings, see #add-and-configure-master-steps-and-steps.
To configure each milestone, see Step Milestones.
In Clarity LIMS, steps are categorized by type, where each type is based on the requirements and goals of the step, and the outputs generated by the step. Some step types have unique interfaces and properties designed to perform specific tasks, such as adding reagent labels or pooling samples.
The step type is set on the Master Step Settings configuration form, and all steps inherit the step type of the master step on which they are based. (To understand the relationship between master steps and steps, see Steps and Master Steps.)
The step type is also displayed on the Step Settings configuration form, but as a read-only property.
All step types must have a submitted sample or derived sample input, and may generate either derived sample outputs, measurement outputs, or no outputs.
Keep in mind that only one output type is permitted. A step cannot generate both a derived sample and a measurement output. The type of step you choose determines which output generation options display. For example:
Selecting the Standard step type only displays settings for derived sample generation.
Selecting the Standard QC step type only displays settings for measurement generation.
Selecting the No Outputs step type only displays settings for no output generation.
The type of step you choose also enables or disables certain functionality downstream. For example:
Selecting the Pooling step type displays the Pooling screen when the step is run, allowing the ability to create pools of samples. Choosing this step type allows you to configure the number of aliquots used to generate the pools.
Selecting the Add Labels step type displays the Add Labels screen when the step is run, allowing the ability to configure reagent label format options.
When creating a master step, you must choose a step type.
When you have saved a master step configuration:
Step type cannot be changed.
The number of outputs generated can be adjusted, or switched from a fixed number to a variable number (Standard, Standard QC, Add Labels, Pooling, and Analysis step types).
The following step types are available in Clarity LIMS:
Standard steps can have a fixed or variable ratio of samples entering the step to derived samples being generated from the step. After saving, you can switch between fixed and variable.
Default step output: By default, this step type generates one derived sample for every sample tracked in the step.
Downstream functionality: Choosing this step type disables the Pooling and Add Labels screens. Derived sample outputs require placement.
Example steps of this type: Library Normalization, Fragment DNA
Standard QC steps may be included in QC protocols, and may also be included as inline QC steps in other protocol types.
Standard QC steps generate sample measurements, which can have a Fixed or Variable ratio of samples entering the step to measurements being generated.
Default step output: By default, this step type generates one measurement for every sample tracked in the step.
Downstream functionality: Choosing this step type disables the Pooling and Add Labels screens. You may configure a QC step to display or not display the Placement screen.
Example steps of this type: Bioanalyzer QC, NanoDrop QC, Qubit QC
The No Outputs step type does not generate any outputs. You can use this step type for sorting steps or for aggregate QC steps.
Default step output: This step type does not generate any outputs. (This is not configurable.)
QC aggregation is the final step in a QC protocol. This step aggregates the data from the previous Standard QC steps to determine the overall quality of the samples. At the end of the step, samples either pass QC and proceed to the next protocol, or fail QC and are rerun or removed from the workflow.
At least one aggregate QC step is required in QC protocols.
At a minimum, one Standard QC step must be run before QC aggregation can occur.
To use a No Outputs step type for QC aggregation, enable QC flags on the Record Details milestone. See #configure-record-details-milestone.
Downstream functionality: Choosing this step type disables the Pooling, Placement, and Add Labels screens.
Example steps of this type: Aggregate QC (DNA), Aggregate QC (RNA), Aggregate QC (Library Validation)
This step type is used to apply a reagent label (or molecular barcode) to each sample entering the step. It may be run on multiple tubes and on multiple plates. Running an Add Labels step allows for a permanent reagent label to be added to each sample. The label data appears while running the step, in a new column in the Sample Data table on the Record Details screen.
Add Labels steps generate derived samples, which can have a Fixed or Variable ratio of samples entering the step to derived samples being generated.
Default step output: By default, this step type generates one labeled derived sample for every sample that enters the step.
Downstream functionality: Choosing this step type disables the Pooling screen and enables the Add Labels screen.
Example steps of this type: Add Multiple Reagents, Adenylate Ends and Ligate Adapters, PCR Amplification.
This step type allows for multiple samples to be pooled into a single sample/container for sequencing efficiency. The number of pools is determined while running the step. Samples typically have a label, which is used to differentiate each sample at the demultiplexing stage.
Pooling steps generate pools that are created from a Fixed or Variable number of aliquots.
Default step output: By default, for every sample that enters the step, one aliquot is used to generate pools.
Downstream functionality: Choosing this step type disables the Add Labels screen and enables the Pooling screen.
The number of pools is determined on the Pooling screen.
By default, users are prevented from pooling samples without labels or with identical labels. You can modify this on the Pooling Settings configuration screen.
Example steps of this type: Pool Samples
Analysis steps allow data to be manipulated by scripts. For example, they may be used to trigger secondary analysis or to import data post-analysis.
Analysis steps behave similarly to Standard QC steps and generate sample measurements. They can have a Fixed or Variable ratio of samples entering the step to measurements being generated.
Default step output: By default, this step type generates one measurement for every sample that enters the step.
Downstream functionality: Choosing this step type disables the Pooling, Placement, and Add Labels screens.
Example steps of this type: Sample History Report, Process Summary Report
This is essentially an Analysis step that deals specifically with labeled samples. It separates pools of samples based on the label assigned to those samples.
Demultiplexing steps have a Fixed ratio of samples entering the step to measurements being generated.
Default step output: By default, this step type generates one measurement for every sample that enters the step.
Downstream functionality: Because samples are placed automatically by a script configured on the step, choosing this step type disables the Placement screen. Choosing this step type also disables the Pooling and Add Labels screens.
Example steps of this type: BCL Conversion and Demultiplexing.
Clarity LIMS includes preconfigured steps and master steps designed to support established lab processes. You can create additional steps and master steps to represent the procedures that are specific to your lab. There are two approaches:
Create steps based on the preconfigured master steps. The steps you create inherit the properties of the configured master steps, and you can then set additional properties on the steps themselves.
Create master steps, and then use them as the foundations on which to create your steps.
You can add the steps to protocols and workflows so that lab scientists can work with them in Lab View.
To access the Lab Work tab and configure steps and master steps, the Configuration:update permission is required. By default, only the Administrator role has this permission. For details, see User Roles and Configured Role-Based Permissions.
The Lab Work screen provides an at-a-glance view of all steps and master steps configured in the LIMS, along with the protocols and workflows in which they are included.
On the main menu, select Configuration.
On the LIMS configuration screen, select the Lab Work tab.
The Workflow, Protocol, Step, and Master Step navigation panel displays. This lists the workflows, protocols, steps, and master steps configured in the LIMS.
In the Master Steps list, select a master step to highlight it:
The Steps list updates, highlighting the steps derived from the selected master step.
If multiple steps are derived from the same master step, the Master Steps list includes duplicate rows, each mapping to a different step, and each representing the same master step. All of these rows are highlighted.
The Protocols list updates, highlighting all the protocols that contain the highlighted steps.
The Workflows list updates, highlighting all workflows that include the highlighted protocols.
In the Steps list, select a step to highlight it:
The Master Steps list updates, highlighting the master step on which the selected step is based. If multiple steps are derived from the same master step, the Master Steps list includes duplicate rows, each mapping to a different step, and each representing the same master step. All of these rows are highlighted.
The Protocols list updates, highlighting the protocol that contains the selected step.
The Workflows list updates, highlighting all of the workflows that include the highlighted protocols.
Below the main navigation panel, the step and master step configuration forms display.
Select these tabs to switch between the forms and see which settings are configured on the step and which are configured on the underlying master step.
Table 1 shows which settings must be configured on the master step and which may be configured on the master step or on the step.
Settings configured on the master step are locked at the step level. On the step configuration form, these settings display with a Locked icon.
Table 1: Master Step and Step Settings
On the Lab Work configuration screen, in the upper-right corner of the Master Steps list, select Add.
Below the main navigation panel, the master step configuration form displays.
To begin, type a name for your new master step.
Configure the settings for this master step - see #configure-a-master-step, below.
Select Save to save your master step configuration.
When adding a master step, keep the following in mind:
Each step is created from a master step. You can create multiple steps from the same master step.
Any settings you configure on the master step are inherited by all steps derived from that master step.
To understand how properties set on the master step propagate down to the step level, see #rules-for-propagation-of-master-step-properties.
The following sections describe the settings available when configuring a master step. Note the following:
Any settings saved as part of the master step configuration cannot be configured at step level. On the step configuration form, these settings display with a Locked icon.
Some settings may be configured at the step level, as indicated in Table 1.
The master step configuration form does not show the default setting values (this includes toggle switches).
Step types are configured on master steps.
Steps are categorized by step type, where each type is based on the requirements and goals of the step, and the output generated by the step (derived samples, measurements, or aliquots).
The step type is set on the master step, and all steps inherit the step type of the master step on which they are based. After you have chosen a step type and have saved it as part of a master step configuration, you cannot change it.
If you are not sure which step type to choose, review #about-step-types-and-outputs.
To understand the relationship between master steps and steps, see Steps and Master Steps.
The step type you choose determines which step milestones are available for configuration.
Configured on master step.
A step may generate a derived sample output, a measurement output, an aliquot output, or no output.
The type of step you choose determines which output generation options are available. Usually, you may choose to keep the default setting or modify the output generation configuration.
For details on the output generation options available for each step type, see #about-step-types-and-outputs.
Configured on master step.
By default, the name of the outputs generated by a step follows the naming pattern of the inputs to the step.
You can use tokens to configure the naming convention so that it resolves to other unique attributes of the output. These tokens function as placeholders that are replaced with actual values at run time. For example, for the Standard step type, the default naming convention resolves to the value of the {InputItemName} token.
The following table lists the default naming conventions for each of the step types.
Table 2: Default Naming Conventions for Step Types
The Tokens list provides a list of tokens you can use to configure the naming convention. For descriptions and examples, see Derived Sample Naming Convention Tokens.
To add a token:
Copy the token you want to use from the Tokens list and paste it into the Naming Convention field. If using multiple tokens, add a space between each entry.
Below the Naming Convention field, you can see a preview of how one or more tokens resolve. Some runtime-specific items, such as dates and times, do not preview exactly as they resolve at run time.
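As a simple illustration, with the default Standard step naming convention and an input named Sample-01 (a hypothetical name), the step output resolves as follows:
Naming convention: {InputItemName}
Input name: Sample-01
Resolved output name: Sample-01
Combining additional tokens from the Tokens list produces more distinctive output names; see Derived Sample Naming Convention Tokens for descriptions and examples of each token.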
Automations are enabled on master step. Automation triggers may be set on master step or step.
A master step can be configured to update sample fields, assign QC flags, generate files, and submit files and command-line parameters to third-party programs, using automations and the Rapid Scripting API.
When you have configured an automation, you can enable it on one or more master steps and set its trigger location and style.
You can enable automations on master steps in two configuration areas of the LIMS:
On the Automations tab, when adding/configuring an automation.
On the master step configuration form.
After it is enabled on a master step, the automation becomes available for use on all steps derived from that master step.
You can set the trigger location and trigger style for an automation on the master step, or on the steps derived from that master step:
On the master step—In this case, all steps derived from the master step inherit the automation and the trigger settings.
On the steps derived from the master step—In this case, all steps inherit the automation from the master step, but you can configure different trigger settings for each step, if necessary.
To enable an automation on a master step, you must have first configured the automation on the Automation tab. For details, see #add-and-configure-automations.
To enable an automation on a master step:
In the Automation section, click the Automation configuration screen link. The Automation configuration screen opens, with the Step Automation tab active.
In the Automation Use section, select inside the Enable on the Master Steps field and select the master step on which to enable the automation. (If you make a mistake, select the X button to remove a master step from the field.)
Select Save.
Return to the master step configuration form. The automations are listed alphanumerically by name.
To set an automation trigger on a master step (or step):
In the Trigger Location drop-down list, select the stage of the step at which to enable the automation.
The list displays all available stages of the step from which the automation can be triggered.
Only valid options for the step are displayed. For example, the Pooling option only displays on Pooling steps, and the Step Setup option only displays for steps on which the Step Setup screen is enabled.
To ensure sequence of execution, only one automation can be associated with each trigger location.
In the Trigger Style drop-down list, select how to initiate the automation. For example, automatically on entry to or exit from the screen or manually when a button is selected on the screen.
The trigger location and style are saved with the automation configuration.
Repeat steps 1 and 2 to configure triggers for each automation added.
Save the automation configuration.
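For context, the automation command line itself is defined on the Automations configuration screen, not on the master step, and typically invokes a script while passing runtime tokens. The following is a hypothetical sketch only; the script path is an assumption, and the token names shown should be checked against the supported automation tokens in the Automations documentation.
bash -c "python3 /opt/gls/clarity/customextensions/update_fields.py -u {username} -p {password} -s {stepURI}"
When the configured trigger location and style are reached at run time, the LIMS substitutes the tokens with actual values and runs the command.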
Configured on master step or step.
You can specify the instrument/equipment types that may be used in a step. You can do this on the master step or at the step level. At run time, on the Record Details screen, the lab scientist selects from a list of instruments/equipment of that type.
To enable an instrument/equipment type on a master step or step, you must have first added the instrument type to the system. See #add-and-configure-instruments.
Note also that instrument type/master step configuration is bidirectional - when adding an instrument type, you can select master steps to associate with that instrument type.
To enable an instrument type on a master step (or step):
In the Instrument Types section, select Add.
At the right of the screen, a list of instrument/equipment types displays. Select one or more instrument/equipment types and select the checkmark button.
The instrument/equipment types are added to the master step/step configuration.
If necessary, you can remove an instrument type by clicking the X button.
Step configuration form only: You can reorder instrument types by dragging and dropping them. The order is reflected on the step Record Details screen, in the Instrument selection drop-down list.
Save the master step/step.
Configured on master step or step.
You can specify the reagent kits that may be used in a step. You can do this on the master step or at the step level.
Configuring reagent kits on the step/master step enables reagent lot tracking on the Record Details screen at run time.
To enable a reagent kit on a master step or step, you must have first added the reagent kit to the system. See #add-and-configure-reagent-kits-and-lots.
Note also that reagent kit/master step configuration is bidirectional—when adding a reagent kit, you can select master steps to associate with that kit.
To enable a reagent kit on a master step (or step):
In the Reagent Kits section, select Add.
The reagent kits are added to the master step/step configuration.
If necessary, you can remove a reagent kit by clicking the X button.
Save the master step/step.
Configured on step.
You can specify the control types that may be used in a step. This is done at the step level.
Selected controls are then available to add to the Ice Bucket when running the step.
To enable a control type on a step, you must have first added the control type to the system. For details, see #add-and-configure-controls.
Note also that control type/step configuration is bidirectional—when adding a control type, you can select the steps to be associated with it.
To enable a control type on a step:
In the Control Types section, select Add.
At the right of the screen, a list of control types displays. Select one or more control types and select the checkmark. The control types are added to the step configuration.
Remove a control type by clicking the X button.
Save the step.
Configured on master step or step.
When running samples through steps in the LIMS, each screen displayed represents a specific stage, or 'milestone' of the step.
Some screens display on all steps, while others only display on certain step types.
For more on milestones, and instructions on configuring milestone settings, see #step-milestones.
When adding steps to the LIMS, first select the protocol in which to include the new step, and then select a master step on which to base it. The new step inherits all settings configured on the master step.
To add a new step:
On the Lab Work configuration screen, in the Protocols list, select the protocol in which to add the new step.
In the upper-right corner of the Steps list, select Add.
Below the main navigation panel, the step configuration form displays.
Type a name for the new step.
In the adjacent Master Steps list, select the master step upon which to base the new step.
If creating a step within a QC protocol, the Master Steps list only displays master steps that are Standard QC and Aggregate QC step types.
Select Save (this button is not enabled until a master step is selected).
In the Protocols list, select the protocol again.
The step added displays at the top of the Steps list.
The '1' indicates that this is the first step in the protocol. (QC protocol steps are not numbered as they are typically not sequential.)
In the Master Steps list, the master step upon which the step is based is also highlighted.
Repeat steps 1–5 to add more steps to the protocol.
To delete a step, select it and select Delete.
To reorder steps within the protocol, drag and drop them.
Select Save.
To drag and drop on a mobile or touch-screen device, touch and hold the item you wish to drag. After a moment, the item appears to lift off the page and its color changes to white. You can then drag the item and drop it into its new position.
On the step configuration form:
Any settings that were configured on the master step are locked. On the step configuration form, these settings display with a Locked icon.
You can configure settings that were not configured on the master step. These settings only apply to the step.
Settings that are not configured on the master step use their default values at the step level, unless you configure those settings on the step.
If not locked on the master step, the following settings can be configured at the step level.
Automation triggers
Instrument types
Reagent kits
Control types
Step Milestones
In the Master Steps list, select the master step you would like to modify.
Make your changes and select Save.
When modifying master steps and steps, keep the following in mind:
You can change the master step on which a step is based, providing the new master step is of the same step type. The list of master steps is filtered to show valid options.
If you remove configured settings from a master step, those settings on the derived steps revert to their default values, except if this would leave the step in an unworkable state. For example, you cannot remove the last container from a step. Exceptions to this revert to default rule are noted where applicable.
If you rename a step, the Recent Activities list in Lab View continues to display the name of the step as it was when the step was run. This is because the step name in this case is derived from the activity record.
In the Master Steps list, select the master step to delete.
On the master step configuration form, select Delete.
When deleting master steps and steps, keep the following in mind:
You cannot delete a step if it is included in an active or archived workflow.
You cannot delete a master step if it is used to generate a step that is in an active or archived workflow.
You cannot delete a master step if it has already been used to create one or more steps. First delete the step, and then delete the master step.
This section explains the relationship between milestones, master steps, and steps; shows how to access milestone configuration settings; and provides an overview of each milestone.
Milestones are the various stages of a step that are presented to lab users as they run samples through steps in Clarity LIMS.
Some screens (such as the Queue, Ice Bucket, and Record Details screens) display on all steps, while others only display on certain step types.
For example, the Pooling milestone only displays on steps of the Pooling type and the Add Labels screen only displays on steps of the Add Labels type. On all other step types, those milestones are disabled on both the master step and step configuration forms.
The following table shows the milestones that are available for display for each step type. For more information on step types, see #about-step-types-and-outputs.
Milestones Displayed for Step Types
Step Type | Queue | Ice Bucket | Step Setup* | Pooling | Placement | Add Labels | Record Details |
---|---|---|---|---|---|---|---|
*While the Step Setup screen is available for display on all step types, it is optional and does not display by default. To enable the Step Setup screen, you must first add file placeholders on the master step. For details, see #configure-step-setup-milestone.
You can configure milestone settings on the master step and step configuration forms.
When switching between the step and master step configuration forms while viewing or editing a milestone, you are returned to the parent step/master step form. You need to select the milestone name again to open its settings form.
Similarly, if you wish to return to the step or master step settings form, select the parent master step or step tab.
To configure a milestone:
Select it to open its settings form.
When configuring milestones on master steps and steps, consider the following details:
If you configure a list of items at the master step level—for example, expanded view fields, instrument types, reagent kits—the order in which they are listed on the master step is overwritten by the order set at the step level. Set the order of any list at the step level. This includes the order of the Sample table column headers.
If you remove configured settings from a master step, those settings on the related steps revert to their default values. The exception is if this would leave the step in an unworkable state. For example, you cannot remove the last container from a step. Exceptions to this 'revert to default' rule are noted where applicable.
Settings configured at the step level only apply to that particular step.
To understand how properties set on the master step propagate down to the step level, see #rules-for-propagation-of-master-step-properties.
When running samples through a step, the first screen that displays is the Queue screen. This screen provides a Sample table from which you select samples to place into the Ice Bucket, reserving them for use.
The following components of the Queue Sample table are configurable on the Queue Settings form:
The Sample table column headers
The Sample table expanded view fields
Default grouping and well sort order of Sample table
On the Queue Settings form, configure the column headers that display in the Sample table, and the order in which they display.
Note the following:
No default column headers are configured at the master step level.
When configuring column headers on a new step, several default column headers will display. You can remove these, but the table must have at least one field remaining (this may be set on either the master step or the step).
Expanded view fields are hidden by default in the Sample table. These fields contain additional details about the samples in the queue. At run time, choose to display these details by clicking the Show/Hide Details button.
On the Queue Settings form, in the Expanded View Fields section, you can select additional fields to add to the body of the Sample table.
Note the following:
No default expanded view fields are configured at the master step or the step level.
If expanded view fields are configured at the master step level, they display as locked at the step level and cannot be removed. You can modify the order in which the locked fields display.
Expanded view fields are available for display in multiple milestone screens. The configuration options set in each milestone are specific to that milestone. You may choose to configure the expanded view fields differently in other milestones.
On the Queue screen, samples are grouped by Container and sorted by well Row by default.
On the Queue Settings form, in the Defaults section, you can modify these settings if necessary.
Samples move from the Queue into the Ice Bucket, where they are reserved for use for 30 minutes. The Ice Bucket screen displays for all step types. It comprises a Sample table that displays information about the samples entering the step.
The following components of the Ice Bucket screen are configurable on the Ice Bucket Settings form:
The Sample table column headers
The Sample table expanded view fields
Default grouping and well sort order of Sample table
On the Ice Bucket Settings form, configure the column headers that display in the Sample table, and the order in which they display.
Note the following:
No default column headers are configured at the master step level.
When configuring column headers on a new step, several default column headers will display. You can remove these, but the table must have at least one field remaining (this may be set on either the master step or the step).
Expanded view fields are hidden by default in the Sample table. These fields contain additional details about the samples in the Ice Bucket. At run time, choose to display these details by clicking the Show/Hide Details button.
On the Ice Bucket Settings form, in the Expanded View Fields section, you can select additional fields to add to the body of the Sample table.
Note the following:
No default expanded view fields are configured at the master step or the step level.
If expanded view fields are configured at the master step level, they display as locked at the step level and cannot be removed. However, you can modify the order in which the locked fields display.
Expanded view fields are available for display in multiple milestone screens. However, the configuration options set in each milestone are specific to that milestone; you may choose to configure the expanded view fields differently in other milestones.
By default in the Ice Bucket screen, samples are grouped by Container and sorted by well Row.
On the Ice Bucket Settings form, in the Defaults section, you can modify these settings if necessary.
The Step Setup screen is an optional screen. By default, it does not display at run time.
This screen gives the lab scientist running the step, and Clarity LIMS itself, access to files before samples are placed. You can then configure step automations that parse these files and use the information to place samples into destination containers, based on the result file specifications.
If you enable the display of the Step Setup screen (you can enable it on any step type), it displays immediately after the Ice Bucket screen.
The Step Setup Settings screen allows you to do the following:
Add file placeholders that will be populated at run time. Configure these on the master step.
After you have added file placeholders, you can then enable the display of the Step Setup Settings screen at run time. You can configure this on the master step or the step. However, as with all master step settings, if you enable the screen on the master step, it displays on all steps derived from that master step.
Configure the attachment method for each file added—manual or automatic. You may configure this on the master step or the step.
To enable the Step Setup screen, you must first configure one or more file placeholders on the master step.
On the Lab Work tab, in the Master Steps list, select the master step on which you would like to configure the file placeholders.
Select the Step Setup milestone.
In the File Placeholders section, select Add.
Type a name for the file placeholder.
[Optional] You can copy and paste tokens from the Tokens list into the name field. For details, see Derived Sample Naming Convention Tokens.
Enter instructional text.
Select the Attachment toggle switch to set the file attachment method.
If you set the attachment method to Auto, configure a step automation to generate and attach the file. For details, see #add-and-configure-automations.
To remove a file placeholder, select the X button.
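As a hypothetical example (assuming the DATE token resolves in placeholder names as it does in output naming, described later in this document), a placeholder named Run Log {DATE:MMM d, yyyy} could resolve to Run Log Jul 30, 2024 when the step is run on that date.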
On the Lab Work tab, in the Master Steps or Steps list, select the master step or step on which you would like to configure Step Setup file placeholders.
Select the Step Setup milestone.
At the top of the Step Setup Settings screen, select the toggle switch to enable the Step Setup screen. The screen now displays at run time.
When the Step Setup screen is enabled, it becomes available for selection as an automation trigger location. If you configure an automation trigger location on the Step Setup screen, the Step Setup screen cannot be disabled.
When you create a step and choose the Pooling step type, the Pooling milestone is enabled. When running the step, the Pooling screen allows the lab scientist to create pools of samples.
The following components of the Pooling screen are configurable on the Pooling Settings form.
Enable and disable label uniqueness to control whether samples with the same labels, or no labels, may be pooled together. This must be configured on the master step.
Configure defaults for sample grouping and well sort order. You can configure these settings at the master step or step level.
Select the Label Uniqueness toggle switch to turn label uniqueness on and off.
When Label Uniqueness is On (default setting), samples with the same labels cannot be pooled together.
When Label Uniqueness is Off, samples with the same label or no labels may be pooled together.
Save your changes.
On the Pooling screen, by default samples are:
Grouped by Container
Sorted by well Row (A1, A2, A3, and so on)
Placed by Column (A1, B1, C1, and so on)
You can modify these settings if necessary.
The Placement screen is used for QC steps and for steps that generate derived samples. When the screen displays at run time, it allows manual placement of samples into the destination container.
Note the following:
In tube-only workflows, the Placement screen is disabled by default and samples are automatically placed. This is true for all step types, except Add Labels steps, in which a tube rack Placement screen displays to allow for manual placement of samples.
In the following step types, no sample placement occurs. The Placement screen is disabled and does not display at run time.
Analysis steps
Aggregate QC steps
Standard steps where derived sample generation is set to None.
The Placement Settings form allows for the following configuration.
Turn off the Placement screen and have samples placed automatically into corresponding wells of the destination container (source and destination containers must be the same).
Disable the Placement screen so that it does not display at run time and cannot be viewed. You can only do this on QC steps where no sample placement is required—ie, where samples remain in the same container throughout the step. The step cannot have destination containers configured.
Configure the destination containers that are permitted on the step.
Configure the sample placement defaults—grouping, well sort order, placement pattern, and whether to skip alternate rows/columns in the container.
When the Placement Screen toggle switch is enabled, the Placement screen displays at run time. The lab scientist manually places samples into the destination container.
To turn off the Placement screen:
Select the toggle switch to turn off the Placement Screen and enable autoplacement of samples.
At run time, the Placement screen is bypassed. If necessary, the user can return to the screen (by selecting its tab) to view placement details.
When the Placement Screen is disabled, the milestone label changes to Auto-Placement. However, if the source and destination containers are not of the same type, Clarity LIMS determines that autoplacement cannot occur and reenables manual placement so that samples can be placed.
Destination containers are the containers into which samples are placed at run time. These containers display to the user in a drop-down list on the Ice Bucket screen. The selected container is then used to set up the subsequent Placement screen.
On the Placement screen, by default samples are:
Grouped by Container
Sorted by well Row (A1, A2, A3, and so on)
Placed by Column (A1, B1, C1, and so on)
Placed into all container wells - no rows or columns are skipped.
You can modify these settings if necessary.
When you create a step and choose the Add Labels step type, the Add Labels milestone is enabled. When running the step, the Add Labels screen allows the lab scientist to add a reagent label (also known as index or molecular barcode) to each sample.
Note the following:
To add a label group to a step, you must have first configured the label group on the Consumables > Labels configuration screen. For details, see #add-and-configure-labels-and-label-groups.
When you create an Add Labels step, the first label group configured in the system is added to the step automatically.
There must be at least one label group defined on either the master step or the step.
You cannot remove the last label group from the step/master step.
You cannot remove label groups added on the master step from the step.
Label groups are listed in alphanumeric order. The order is not modifiable.
The well sorting setting configured on the Placement milestone (see #configure-placement-milestone) is also applied to the Add Labels screen.
Label groups are the only configurable components on the Add Labels Settings form.
The Record Details screen is where data are tracked on the step at run time. It includes information about the step, files generated by or uploaded to the step, e-Signature sign-off (if enabled), and information about the samples in the step.
The following components of the Record Details screen are configurable on the Record Details Settings form:
The step-level information (step data) tracked on the step. You can also change the heading of the step data section, and set a default value for the Group of Defaults configured on the master step.
Step file placeholders for files that are attached to the step at run time, and their attachment method.
The sample-level information tracked and displayed in the Sample table.
Electronic signatures - this panel only displays if you have enabled the clarity.eSignature.enabled property.
The Step Data section of the Record Details screen allows you to track and display step-level data at run time, specifically the master step fields associated with the step.
On the Record Details Settings form, you can configure the following:
The heading that displays at the top of the section.
The default value for the Group of Defaults that displays in the upper-right corner of the screen.
The step data fields that display, and the order and layout in which they display. Note the following:
The default heading is 'Step Data Table'.
You are not required to set a default value for the Group of Defaults.
You are not required to add master step fields or multiline text fields.
When step fields and/or multiline text fields are added, they are arranged vertically by default.
As you configure the step data, the Preview area on the right updates to show you how the configuration displays at run time on the Record Details screen.
NOTE: Multiline text fields are much wider than step fields and always display below them on the Record Details screen. For this reason, they are configured separately.
*Groups of defaults and master step fields are defined on the Custom Fields > Master Step Fields configuration screen. For details, see #add-and-configure-custom-fields.
Configuring file placeholders allows you to attach sample measurement files to a step at run time. For example, you may want to attach an instrument input file or sample sheet, a QC measurement file, a log file, a run report, or a lab tracking form. Files may be manually uploaded or automatically generated and attached using a script. The default attachment method is manual attachment.
Note the following:
Configure step file placeholders on the master step. You cannot configure or modify these at the step level. A lock icon on the Step Settings form indicates this.
Create a placeholder for each file to be attached.
Configure the attachment method (Manual or Auto) at the master step or the step level. If the attachment method is set on the master step, it cannot be changed at the step level (a lock icon displays on the Step Settings form).
The default attachment method is Auto.
In the Sample Table section, if the File Column Display is set to Hide, the Attachment toggle switch is set to Auto and is disabled. To manually attach files in the Sample table, the column must be visible.
The attachment method applies to the shared sample measurement files generated. If you need to set the attachment method for individual files generated for each sample, you can use the API to do this. For details, see API Reference.
At the bottom of the Record Details form, the Sample Table section lets you view and track data on your samples at run time.
On the Record Details Settings screen, you can configure the following components of the Record Details screen Sample table:
The table heading. (Default table heading is 'Sample Table'.)
The display of the QC flags field. (When this field is enabled, samples can be marked with a QC pass or fail flag.)
The default display of the Sample table listing. The default view is Collapsed, for faster loading of the sample list.
The table columns that display in the Sample table, and the order in which they display.
The File Options Column and the File Attachment Method toggle switch only display on the Record Details Settings screen on steps that generate measurements. These settings allow you to choose if you want to display a column for sample files and choose how these files are attached to the step (manually or automatically).
Note the following:
You can enable QC flags on any step type that allows you to mark a sample with a pass or fail flag.
By default, QC flags are enabled on Standard QC steps. This setting is locked and cannot be changed.
By default, QC flags are disabled on Analysis steps. This setting is locked and cannot be changed.
Enable QC flags on a No Outputs step to use the step for QC aggregation.
Sample groupings are collapsed by default to optimize screen loading time, but you can configure them to be expanded by default.
If the step generates measurements, Sample File Options display. These allow you to choose if you want to view a column for Sample Files and choose how these files are attached (manually or automatically).
When configuring the Sample table:
No default column headers are configured at the master step level.
When configuring column headers on a new step, several default column headers display. You can remove these; however, the table must have at least one field remaining (this may be set on either the master step or the step).
The Sample table displays in multiple milestone screens. However, the configuration options set in each milestone are specific to that milestone; you may choose to configure other milestone Sample tables differently. The unique aspect of the Record Details Sample table is that the derived sample and submitted sample fields can be written to (according to their respective step type).
Clarity LIMS provides the ability to configure a step such that it requires sign-off by electronic signature (eSignature) before it can be completed.
Steps that have eSignature enabled display an eSignature enforcement button on the Record Details screen, and require valid eSignature credentials (username and password) to be entered.
Next Steps cannot be viewed until these credentials have been entered by a user with eSignature signing permission.
Until the step has been completed, any changes made to the step will again require an eSignature sign off.
All eSignature events, successful or not, are recorded with the step and in the audit trail.
The eSignatures Review configuration panel displays on the Record Details Settings screen only if the clarity.eSignature.enabled property is enabled.
If the panel is enabled, you can configure electronic signatures (e-signatures) on a step or master step. This means that samples in the step cannot move forward (Next Steps button is disabled) until an e-signature has been entered with the appropriate role-based permission.
To modify the information captured on the Record Details screen of a completed step, the CanEditCompletedSteps permission is required. This functionality allows for modifications to correct errors and/or bad data.
The CanEditCompletedSteps permission is not assigned to any user roles by default. A Clarity LIMS administrator must explicitly assign the permission to each user who requires it. See the User Roles and Configured Role-Based Permissions sections for details.
NOTE: Signature is not enforced again when editing completed steps. For recommended best practice, the role with CanEditCompletedSteps permission should be the same role that electronically signs off on steps. See #rules-and-constraints for details.
Users who have been assigned the CanEditCompletedSteps permission see an Edit button on the Assign Next Steps screen of completed steps. Selecting this button takes them to the Record Details screen, where they may modify certain step details.
The following table lists and describes the step details that can be modified:
To ensure the integrity of data, the following rules and constraints are in place when editing a completed step:
Only steps that were completed in Clarity LIMS v5.1 and later are editable. If the system is upgraded to Clarity LIMS v5.1 or later, steps that were completed in a previous version cannot be edited.
Steps that were executed using the API cannot be edited.
If the configuration of a completed step changes after the step was run, the step cannot be edited. This constraint includes changes made to any of the following:
The protocol or workflow in which the step is included.
The automations enabled on the step.
The automation triggers configured on the step.
An automation cannot be rerun on a completed step. The details of the automation command line cannot be viewed.
Manager review/escalation comments on completed steps cannot be edited.
The eSignature is not enforced again when editing completed steps. The eSignature from the original step execution is retained, and there is no prompt for a new signature when editing the step.
On the Assign Next Steps screen of the completed step, select Edit (upper-right corner).
A prompt displays, asking you to confirm that you want to proceed and edit the step details.
Select Yes to confirm.
The Record Details screen displays.
Modify the step details as required.
Review your changes and select Save.
By default, the name of the outputs generated by a step in the LIMS follows the naming convention of the inputs to the step.
When configuring a master step, you can use tokens to configure the naming convention so that it resolves to other unique attributes of the output. These tokens function as placeholders that are replaced with actual values at runtime. For example, for the Standard step type, the default naming convention resolves to the value of the {InputItemName} token (shown below).
The Tokens list provides a list of tokens you can use. You can copy and paste these directly into the Naming Convention text box.
If using multiple tokens, add a space between each entry.
Below the Naming Convention field, you will see a preview of how the token(s) will resolve.
Note that some runtime-specific items, such as dates and times, will not preview exactly as they will resolve at runtime.
NOTE:
Output names are limited to 100 characters. If a name exceeds this limit, the LIMS automatically removes characters from the middle of the name.
To pad a resolved value, add a colon (:) and a whole number to indicate the desired number of digits. For example, if {OutputItemNumber} resolves to 23, {OutputItemNumber:4} will resolve to 0023.
You can use simple tokens that will resolve to system-specified results, such as container location and LIMS ID of an output. These tokens are replaced with the appropriate value of the specified item at runtime. Tokens are case sensitive.
{InputItemName}
The name of the input used to generate the output.
{InputItemNameNoSpaces}
The name of the input used to generate the output, but with spaces removed.
{InputWellLocation}
The location or name of the well where the input resides.
To get a substring of the location or name of the well, add a colon (:) and one or two whole numbers to indicate the start index (zero-based and inclusive) and the end index (exclusive): {InputWellLocation:<startIndex>,<endIndex>}
Example
If {InputWellLocation} has the value of A:3, the following examples show the derived values with the new format:
Lane {InputWellLocation} -> Lane A:3
Lane {InputWellLocation:0,1} -> Lane A
Lane {InputWellLocation:1,3} -> Lane :3
Lane {InputWellLocation:1} -> Lane :3
{InputContainerIdentifier}
The container identifier in which the input resides.
{InputItemNumber}
The number of the input used to generate the output, such as 7 of 20. You can pad the resolved value to a certain number of digits, and the LIMS will prefix the number with zeros. See note above.
{InputItemTotal}
The total number of inputs used to generate the outputs. You can pad the resolved value to a certain number of digits, and the LIMS will prefix the number with zeros. See note above.
{OutputItemLIMSID}
The LIMS ID of the output.
{OutputItemNumber}
The current output's absolute position within the order of all outputs, such as 9 of 40. You can pad the resolved value to a certain number of digits, and the LIMS will prefix the number with zeros. See note above.
{OutputItemTotal}
The total number of outputs generated. You can pad the resolved value to a certain number of digits, and the LIMS will prefix the number with zeros. See note above.
{OutputItemSubsetNumber}
The current output's relative position within its relative set, such as 1 of 2. You can pad the resolved value to a certain number of digits, and the LIMS will prefix the number with zeros. See note above.
{OutputItemSubsetTotal}
The fixed count of relative outputs per input. You can pad the resolved value to a certain number of digits, and the LIMS will prefix the number with zeros. See note above.
{AppliedReagentLabels}
The type of reagent label applied to the input.
{SubmittedSampleName}
The name of the sample’s related submitted sample - the original parent sample that was submitted to the LIMS.
{ProjectName}
The name of the project that contains the inputs to the step.
{ProcessLIMSID}
The LIMS ID of the step that created the outputs.
{ProcessTechnicianFullName}
The name of the lab scientist who runs the step.
{ProcessTechnicianFirstName}
The first name of the lab scientist who runs the step.
{ProcessTechnicianLastName}
The last name of the lab scientist who runs the step.
{ProcessTechnicianInitials}
The initials of the lab scientist who runs the step.
{DATE:MMM d, yyyy}
The date the step was run, according to the computer's clock.
{LIST:a,b,c}
With this variable, you can specify a comma-delimited list of words that will be used when generating output names. Clarity LIMS will cycle through the words from left to right, applying one word to each output name. When the last word has been used and there are further outputs that require names, Clarity LIMS will restart at the beginning of the list.
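As a hypothetical example, with the naming convention Pool_{LIST:A,B,C} and five outputs, the outputs are named Pool_A, Pool_B, Pool_C, Pool_A, and Pool_B.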
Complex tokens provide further flexibility with the use of parameters.
You can combine any alpha-numeric text with simple and complex tokens for highly specialized and unique output names.
When using complex tokens, you must specify parameters that will be used when the token is resolved.
You can only use one LIST and one DATE token per output string, but you can use any combination of parameters within those tokens.
With the DATE token, if you would like to include a word between parameters, enclose the word in single quotes ('x').
Times and dates resolve to the time and date the process was run, according to the computer's clock.
Tokens and parameters are case sensitive.
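As a hypothetical example that combines plain text with simple and complex tokens, the naming convention {SubmittedSampleName}_{DATE:d 'of' MMM} Rep{LIST:1,2} could resolve to Alpha_30 of Jul Rep1 and Alpha_30 of Jul Rep2 for two outputs derived from a submitted sample named Alpha, assuming the step is run on July 30.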
This section describes how to add and configure the instruments and equipment used in your lab, and associate these items with master steps.
Add the instruments and equipment used in your lab to Clarity LIMS, and associate these items with specific steps. When running steps in the LIMS, lab scientists can record the instruments and equipment used.
All users logged into the LIMS can access the Instruments configuration screen. However, what they are allowed to do in this screen is determined by their user permissions.
By default, only the Administrator role has the Configuration:update permission. For more on user roles and permissions, see the User Roles and Configured Role-Based Permissions sections.
When adding instruments to Clarity LIMS, there are two main steps involved:
Add a new instrument type (Configuration:update permission required).
Select an instrument type and then add a new instrument of that type. You cannot add an instrument without first selecting an instrument type.
When initially setting up the system, add all the instrument types used in the lab. For example, HiSeq 3000, 2100 Bioanalyzer, NanoDrop 2000. Any logged in user can then add specific instruments to each type.
On the main menu, select Configuration.
On the configuration screen, select the Consumables tab, then select Instruments.
On the Instruments configuration screen, select New Instrument Type.
In the Instrument Type Details area, complete the following required information:
Enter the name of the type of instrument or equipment you are adding.
In the Vendor drop-down list, select an existing vendor from the list, or select Create new and type the new vendor name into the field. After you create a vendor, it is added to the list and can then be selected when creating other instrument types.
Select inside the master steps field and, in the drop-down list that displays, select one or more master steps on which to enable this instrument type.
To remove a step from this field, select the X to the left of the step name.
The instrument type is made available for use on all steps that are created from the selected master steps. When running those steps in the LIMS, the appropriate instrument can be selected from the Record Details screen. This configuration is bidirectional - when configuring a master step, you can select instrument types to associate with that master step.
Select Save.
The new instrument type displays in the Instrument Types area. The 'zero instruments' label indicates that no instruments of this type have yet been added.
On the Instruments configuration screen, in the Instrument Types area, select the appropriate instrument type.
Select New Instrument.
In the Instrument Details area, enter the details for this instrument.
Instrument Name: Enter the name of the instrument. (This is the only required field.)
Serial number: Enter the serial number of the instrument, or other instrument-specific information.
Expires: Select the expiry date (or calibration date) of the instrument or equipment.
Valid dates are the current date or any date in the future.
After an instrument has been saved, a label displays next to this field. The label shows the number of days, and then hours, remaining before the instrument expires, or warns that the instrument has expired.
Software Name: Enter the name of the instrument software.
Software Version: Enter the instrument software version number.
Select Save.
In the Instrument Types area, the new instrument is nested under its instrument type.
To create another instrument of the same type, select Add Another New Instrument.
When creating instruments, note the following:
A LIMS ID is automatically assigned to the instrument.
The instrument record Created date is automatically populated.
The instrument record Modified date is automatically populated; this field keeps track of any changes made to the instrument details.
The instrument Status toggles between Active and Archived. By default, when adding a new instrument, its status is Active.
Active instruments are in use, or available for use, in your lab workflows; they can be selected by lab users as they record work for a protocol step.
Users may edit the details of Active instruments.
Archived instruments are not currently in use in lab workflows (for example, they may be expired or under repair), and are not available for selection by lab users working in the LIMS.
When the expiry date for an instrument has passed, the LIMS automatically archives the instrument.
The details of Archived instruments are read-only. They may be viewed, but not edited.
Archived instruments are listed together in a single Archived Instruments group (no subgrouping by type), at the bottom of the Instrument Types area. If an archived instrument is reactivated, it once again displays under its respective instrument type.
When editing instrument types and instruments, keep the following in mind:
Only users with the Configuration:update permission can edit instrument types.
Changes made to an instrument type, or to any instruments of that type, are reflected on all future steps on which the instrument type is enabled.
Steps that have already been run are not affected by changes made to instrument types or instruments.
When deleting instrument types and instruments, keep the following details in mind:
To delete instrument types, the Configuration:update permission is required.
When deleting an instrument type, all instruments of that type are also deleted and are no longer available for selection on steps.
You cannot delete an instrument type if any instruments of that type are in use.
You cannot delete an instrument if that instrument has been used in a step.
In the Instrument Types area, expand the Archived Instruments section and select the instrument to reactivate.
If the instrument has not expired, select Activate.
If the instrument has expired, reset the expiry date to a date in the future, and then select Activate.
Select Save.
In the Instrument Types area, the reactivated instrument displays under its instrument type. The instrument may now be selected when running steps.
This section describes how to add and configure label groups (reagent categories) and labels (reagent types or molecular barcodes), and enable them for use on specific master steps.
Add a label group for each reagent category used in your lab, and then add labels to the groups. Each label represents a reagent type (or molecular barcode) within the group/category.
When configuring the properties of steps generated from an Add Labels master step, select the label groups to be used in the step.
To access the Labels configuration screen, the Configuration:update permission is required. Users who do not have this permission do not see the Labels option displayed under the Consumables tab.
By default, only the Administrator role has the Configuration:update permission. For more on user roles and permissions, see the User Roles and Configured Role-Based Permissions sections.
When adding label groups and labels to the LIMS, there are several main steps involved:
Add a new label group.
Then, to add labels to the group:
Download a template label list (Microsoft® Excel® file) from the Labels configuration screen.
Add reagent type details to the downloaded template.
Upload the completed label list.
On the main menu, select Configuration.
On the configuration screen, select the Consumables tab and then select Labels.
On the Labels configuration screen, select New Label Group.
In the Label Group Details area, enter the name of the label group you are adding.
Numeric names are not permitted.
Duplicate names are not permitted.
You may use the name of a previously deleted label group.
Select Save.
The new label group is listed in the Label Groups list. Because there are no labels in the group, no count displays.
The Upload Label List and Download Label List buttons display in the Label Group Details area.
On the Labels configuration screen, in the Label Groups list, select the label group to which you want to add labels.
In the Label Group Details area, select Download Label List to download the template.
Open the template in Excel. It has two example label entries containing the following information:
Group Name (column A): Prepopulated with the name of the label group you selected in the LIMS.
Label ID (column B): No information is provided in this column as it is populated by the LIMS.
Label Name (column C): Provides examples of label name (reagent type) formats.
Sequence (column D): Provides examples of the sequence property of the Index special type of the reagent type. Dual-indexes may be used, separated by a hyphen.
To complete your label list, add new rows between the opening <LABEL ENTRIES> tag and the closing </LABEL ENTRIES> tag, and enter reagent label information into these rows (a hypothetical example follows this procedure):
Group Name: (Required on upload) Enter the name of the label group (reagent category) into which you are adding labels.
Label ID: Leave this column empty. It is populated by the LIMS when you upload your completed label list.
Label Name: (Required on upload) Enter the names of the reagent labels (reagent types) you would like to add to the LIMS, using one of the example formats.
Sequence: (Optional on upload) Enter the index sequence of the Index special type of the reagent type, for example, "ATCACG." You may enter dual-indexes, separated by a hyphen.
Save your label list file.
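As a hypothetical example, two completed rows for a label group named MyIndexKit might look like the following (the Label ID column is left empty, as described above; label names and sequences are illustrative only):
Group Name | Label ID | Label Name | Sequence
---|---|---|---
MyIndexKit | | Index 001 (ATCACG) | ATCACG
MyIndexKit | | Index 002 (ATTACTCG-TATAGCCT) | ATTACTCG-TATAGCCT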
On the Labels configuration screen, select Upload Label List and upload your completed label list file.
In the Label Groups list, the label count shows the number of labels in the group.
When editing/deleting label groups, keep the following in mind:
The only item you can change directly in the LIMS is the label group name.
Deleting a label group does not affect historical run data. This information is preserved in the LIMS.
When editing and deleting labels (reagent types), keep the following in mind:
Changes you make to a label are reflected on all future steps on which the label is applied.
Steps that have already been run are not affected by changes you make to labels. The labels are mapped to samples in the run and historical run data are preserved.
When uploading a label list, the following conditions result in an error:
One or more of the four headers (Group Name, Label ID, Label Name, Sequence) is missing or misspelled.
Attempting to rename a label to the same name as an existing label within any label group.
Attempting to rename a label to the same name as an existing label—even if you are also renaming the other label at the same time.
Adding a label with the same name as another label within any label group.
Attempting to edit a label without providing the Label ID.
Providing labels for the wrong group. That is, the Group Name column does not match the name of the label group into which you are uploading labels.
Providing a sequence for a reagent that does not have the 'Index' special type.
In Clarity LIMS, custom fields are used to record information about a step, sample, or other LIMS component.
There are two types of custom fields: global fields and master step fields.
The default configuration includes both global and master step fields. You can add additional fields to meet the needs of your lab, and display those fields to the user at run time.
Global fields—Apply to the whole LIMS system. You can use these fields to record measurements and information about measurements, submitted samples, derived samples, accounts, containers, projects, and clients.
Master step fields apply to the master step on which they are configured, and are inherited by all steps derived from that master step.
To access the Custom Fields configuration screen, the Configuration:update permission is required. Users who do not have this permission do not see the Custom Fields tab.
By default, only the Administrator role has the Configuration:update permission. For more on user roles and permissions, see the User Roles and Configured Role-Based Permissions sections.
Configure custom fields to record information about a step, sample, or other Clarity LIMS component.
For example, you can:
Use global fields to capture sample measurements and track information about projects.
Use master step fields to record instrument settings and other details about a specific step.
Configure automation scripts that populate custom fields or perform calculations at run time.
Create groups of defaults—collections of prepopulated master step fields that eliminate the need for manual input of values at run time and make sure that the correct information is always recorded.
When adding custom fields, keep the following in mind:
You cannot save a custom field until you have entered a name and selected a field type.
You cannot create a custom field on a global field object or master step with the same name as an existing field on that object/master step. For example, if you have created a global field called 'Description' on the Account object, you cannot create another global field called 'Description' on the Account object. However, you can create a 'Description' field on the Project object.
If the field name you specify is the same as the name of a deleted field, the new field is created and the original deleted field is renamed in the database. Deleted fields do not display in the LIMS interface, but are saved in the database.
On the main menu, select Configuration.
On the configuration screen, select the Custom Fields tab.
On the Custom Fields configuration screen, select the Global Fields or Master Step Fields tab.
In the header of the global field object or master step for which you want to add a new field, select Add.
In the Field Details area, complete the required fields:
a. Type a name for the field.
b. Select the appropriate field type. See sections below for details.
Set the required field options:
Required: If this field must be filled in, set this option to Yes. Otherwise, set to No.
Read only: If you do not want the user to edit the field value at run time, set to Yes. To allow editing of the field at run time, set to No.
The Field Options and Additional Options reflect the field type selected:
Default (for nondrop-down field types only): If you would like to set a default value, enter the value here.
Dropdown Items (for drop-down field types only):
Select Add and enter the first list item.
To set a default item, add this value first and set the Set as Default toggle switch to Yes. You can only set the first item as the default, and you cannot reorder items after you have added them.
Repeat to add more items to the list.
To remove a list item, select the X button.
If you do not specify any drop-down items, or if you specify only one item and set it as the default value, upon save, the field converts to its equivalent nondrop-down type and custom entries are enabled.
Complete other options, as required. See sections below for details.
Select Save. The new custom field is added to the bottom of the fields list. It is now available to be displayed on master step and/or step milestone screens.
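As a hypothetical example, a Text Dropdown master step field named Polymerase could be configured with the items Taq, Q5, and Phusion, adding Taq first and setting Set as Default to Yes. At run time, the Step Data section of the Record Details screen would then display a Polymerase drop-down list with Taq preselected.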
For any field selected, the Field Details area displays the following information:
The field name.
The global field object or the master step with which the field is associated.
The field type.
The field options, that is, whether the field is:
Required—If set to Yes, the field must be filled in.
Read only—If set to Yes, the field cannot be edited at run time.
The default value for the field, if set.
For drop-down field types:
The Default option is replaced with a Dropdown Items list.
The first list item may be set as the default value for the field.
Additional options may also display, as described below. These differ depending on the field type. For example, the Range From and To fields only display for Numeric field types.
The following table describes the field types available for custom fields, and the additional options that apply to each type.
Custom Field Types and Additional Options
The Toggle Switch field type renders as a toggle switch on the Record Details screen.
Configuration options:
Default value configured as Yes or No: When the screen displays, Yes or No is selected by default. User can select Yes or No.
Default value configured as None Set: When the screen displays, neither Yes nor No is selected. User selects a value.
Required: The field may be configured as a required field, even if the default value is None Set. When the user enters the screen, neither Yes nor No is selected, but a value must be selected.
The following table explains how to use the additional options associated with the Numeric, Numeric Dropdown, Text Dropdown, and Hyperlink Dropdown field types.
Field Types Additional Options Usage
This section describes how to edit and delete custom fields.
On the Custom Fields configuration screen, select the Global Fields or Master Step Fields tab.
Expand the global field object group or master step containing the field to edit.
Select the field.
Make your changes and select Save.
When editing custom fields, keep the following in mind:
You cannot modify the field type, unless you are changing a drop-down field type to its equivalent nondrop-down type or vice versa. For example, you can change a Numeric Dropdown field to a Numeric field, or a Text field to a Text Dropdown field.
If you convert a drop-down field type to its equivalent nondrop-down type, Clarity LIMS removes all nondefault list values and enables custom entries upon saving. If a default drop-down option was set, it becomes the default for the nondrop-down field.
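Continuing the hypothetical Polymerase example above, converting that Text Dropdown field to a Text field would remove the nondefault items Q5 and Phusion, keep Taq as the default value, and enable custom (free-text) entries.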
On the Custom Fields configuration screen, select the Global Fields or Master Step Fields tab.
Expand the global field group or master step containing the field you would like to delete.
Select the field and select Delete. Confirm the deletion.
When deleting custom fields, keep the following in mind:
You cannot delete a master step field if it has been assigned a value, or is in use in a step—that is, if a step derived from the master step with which the field is associated has been started.
If you delete a custom field, it no longer displays in the LIMS interface. However, its information is saved in the database for historical purposes.
You cannot restore a deleted field for use in the LIMS, but you can create a field with the same name. The original deleted field is renamed in the database.
You can reorder global fields by clicking and dragging them into position.
To drag and drop on a mobile or touch-screen device, touch and hold the item you wish to drag. After a moment, the item appears to lift off the page and its color changes to white. You can then drag the item and drop it into its new position.
The order is reflected in various places in the LIMS interface, for example:
Submitted sample global field ordering is reflected on the Sample Management screen, in the Sample Details section.
Project global field ordering is reflected on the Project Details screen (on the Custom Fields tab) and on the Sample Management screen (in the Project Details section).
Groups of defaults are collections of prepopulated master step fields. Using these eliminates the need for lab scientists to manually enter field values each time they run the step, and makes sure that the correct information is recorded every time a step is run.
When you have added groups of defaults to a master step:
They become available for selection when you create a step based on the master step.
When running a step in Clarity LIMS, if the step has one or more groups of defaults configured, these groups display in a drop-down list in the upper-right corner of the Record Details screen. Select the desired group of defaults and the LIMS populates the step fields with the corresponding values.
If you have configured a default group of defaults, those values automatically populate the step fields.
On the Custom Fields configuration screen, select the Master Step Fields tab.
Expand the master step on which to configure a group of defaults.
Below the configured master step fields, in the Group of Defaults section, select Add.
In the Group of Defaults area on the right, the fields associated with the master step display.
Name the group of defaults.
Populate each field with the value to set for the group of defaults.
Select Save.
[Optional] When configuring the Record Details milestone for a step, if the related master step has one or more groups of defaults configured, you can select a default group to display.
[Optional] Reorder groups of defaults by clicking and dragging them into position.
To drag and drop on a mobile or touch-screen device, touch and hold the item you wish to drag. After a moment, the item appears to lift off the page and its color changes to white. You can then drag the item and drop it into its new position.
The order is reflected in the drop-down list that displays at the top of the Record Details screen.
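As a hypothetical example, a master step with the fields Incubation Temperature (C) and Incubation Time (min) could have a group of defaults named Standard Incubation that sets the temperature to 37 and the time to 30. Selecting Standard Incubation in the drop-down list on the Record Details screen populates both step fields with those values.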
The User Management configuration screen allows for viewing and managing users, clients, and accounts.
Users are the individuals who have access to the Clarity LIMS interface. Because each step in Clarity LIMS is associated with a user, you can make use of user profiles to track the work moving through your lab. While users are associated with the steps they perform as part of a project, they are not directly associated with that project—unless they are assigned as the project client.
Clients are directly associated with projects in Clarity LIMS. When you create a project, you must associate it with a client. Clients differ from users in that they are not able to log in and access the Clarity LIMS web interface. They are typically external collaborators or customers who submit samples to the lab.
Accounts are directly associated with the projects, users, and clients that are created in Clarity LIMS.
NOTE: Viewing user/client/account details, and adding, modifying, and deleting users/clients/accounts are role-based permissions. For more information, see the Configured Role-Based Permissions section.
This section describes how to add and manage users and clients in Clarity LIMS.
When creating users, keep the following in mind:
The username must be unique among active users in the system. This is validated when you save the user details.
If the username is already associated with an existing user, an error message displays and you are not able to save the new user profile.
All users must provide their email address and reset their password upon upgrading their software to v5.4 (or later).
From User Management, select the Users tab.
Select inside the Role field to display a drop-down list of roles:
Select the role to assign to this user.
To remove a role from this field, select the X to the left of the role name.
[Optional] Enter a title, phone number, and fax number for the user.
Select Save.
An invitation email is automatically sent to the user. This email includes the login screen URL and information on how to set the login password. You may resend the login instructions email at any time.
The user displays in the Users list.
[Optional] By default, the status of a new user is set to Active, which means that they can log in to Clarity LIMS. To temporarily prevent a user from logging in, change this setting by selecting Archived.
From User Management, select the Users tab.
Select the user to modify.
In the User Details area, modify the details as required. If you change the username, a password reset email is sent automatically to the user.
Select Login and Password to access the following options:
Send login instructions—Choose this option to re-send the user the login screen URL and information on how to set their login password.
Select Save.
From User Management, select the Users tab.
Select the user to delete.
In the User Details area, select Delete.
When deleting users, keep the following in mind:
You cannot delete a user if that user has logged in to Clarity LIMS.
You cannot delete a user if that user is associated with a project (eg, the user is the project client).
When adding new clients, each client must be a unique entry in the LIMS.
From User Management, select the Clients tab.
Select New Client.
In the Client Details area, complete the following required information:
Enter the first name and last name of the client.
Select inside the Account field and select the client account from the drop-down list.
Enter the client email address.
[Optional] Enter client title, phone number, and fax number.
Select Save.
The client displays in the Clients list, under their account name.
A client cannot be deleted if that client is associated with a project.
From User Management, select the Clients tab.
Select the client to delete.
In the Client Details area, select Delete.
Accounts are the organizations with which a facility conducts business. In the Clarity LIMS Projects and Samples screen, select the existing account from the Account drop-down list to associate projects and samples with it.
To create a new account, type directly into the Account field.
For Clarity LIMS v6.2 and later, you can also create a new account through the Accounts section of the User Management tab that is under Configuration.
From Configuration, select the User Management tab.
Select the Accounts tab.
In the Account Details area, select New Account.
Type a name for the account and complete any other applicable fields (eg, Billing Address).
Select Save.
From Configuration, select the User Management tab.
Select the Accounts tab.
In the Accounts list, select the account that you want to modify.
In the Account Details area, update the fields that need to be modified.
Select Save.
From Configuration, select the User Management tab.
Select the Accounts tab.
In the Accounts list, select the account that you want to delete.
Select Delete.
You cannot delete an account that is associated with a user or project.
This section describes how to add and configure the containers used in your lab, and enable them for use on specific master steps.
Clarity LIMS is a container-based system requiring that samples reside in a container at every step of a workflow. Add the types of containers used in your lab to Clarity LIMS and enable them for use on specific steps.
When running a step in the LIMS, the lab scientist scans in the container barcode and proceeds to the Ice Bucket screen. In the Ice Bucket, the output container types that can be used in the step are listed.
To access the Containers configuration screen, the Configuration:update permission is required. Users who do not have this permission do not see the Containers option displayed under the Consumables tab.
By default, only the Administrator role has the Configuration:update permission. For more on user roles and permissions, see and .
When adding a new container to the LIMS, you are adding a container type (eg, a tube, a 96-well plate, a flow cell). When the container barcode is scanned, an instance of that container type is added.
On the main menu, select Configuration.
On the configuration screen, select the Consumables tab, then select Containers.
On the Containers configuration screen, select New Container.
In the Container Details area, enter the name of the container type you are adding. This is the only required field. When you have entered a name, the Save button becomes available.
[Optional] Specify the details of the rows and columns in the container:
In the Number fields, enter the number of rows and columns in the container.
Use the Naming toggle to specify whether the rows and columns have Alphabetic or Numeric labels.
For numeric rows and columns, use the Start at field to specify the number at which the row/column labels start.
[Optional] If you enter 1 in both row and column Number fields, an additional Yes/No toggle setting displays, asking "Do you want to skip the placement screen?".
Yes—The LIMS does not display the placement screen when the step is run. It automatically places the samples into the container.
No—The placement screen displays samples that need to be manually placed into the single well.
[Optional] List any unavailable wells (ie, wells in which samples must not be placed). Specify these in a comma-separated format, for example, A:1, A:2, A:3, A:4.
Note the following:
If you switch between Numeric and Alphabetic rows/columns, the list of unavailable wells updates to reflect the change.
If you change the Start at number for numeric rows/columns, the list of unavailable wells updates to reflect the change.
If you specify an invalid unavailable well, or change the dimensions of the container such that one or more of the specified unavailable wells becomes invalid, the List unavailable wells field turns red.
The Save button is only available when all specified unavailable wells are valid.
Select Save.
To prevent lab users from placing samples in specific wells of a container, list the unavailable wells in a comma-separated list. Each well must be listed individually. You cannot enter a range.
Wells that are marked as unavailable are shown with a dashed line border in the sample placement screen. If a sample is placed into an unavailable well, a Destination is unavailable error message displays.
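Because ranges cannot be entered, building a long unavailable-wells list by hand can be tedious. The following helper is hypothetical (not part of Clarity LIMS); it only generates the individually listed, comma-separated format that the List unavailable wells field expects.

```python
# Hypothetical helper, not part of Clarity LIMS: expands row labels and column
# numbers into the individually listed, comma-separated format expected by the
# List unavailable wells field (eg "A:1, A:2, A:3").
def unavailable_wells(rows, columns):
    return ", ".join(f"{row}:{column}" for row in rows for column in columns)

# Example: mark all wells in rows A and B of a 12-column plate as unavailable.
print(unavailable_wells(["A", "B"], range(1, 13)))
```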
On the Containers configuration screen, in the Containers list, select a container type.
The Container Details area displays the details for the selected container type.
Edit the details as required.
Select Save.
When editing container type details, keep the following in mind.
To view and edit container types, the Configuration:update permission is required.
Changes made to a container type are reflected on all future steps on which that container type is enabled.
Steps that have already been run are not affected by changes made to container type details.
When a container type has been used, its row, column, and unavailable wells settings are not editable.
On the Containers configuration screen, in the Containers list, select a container.
The container details display on the right.
Select Delete.
Confirm the deletion. The container is no longer available for selection on steps.
When deleting container types, keep the following in mind:
To delete container types, the Configuration:update permission is required.
Container types cannot be deleted if an instance of that container type is in use, or has been used, in a step.
This section describes how to add and configure the control samples used in your lab, and enable them for use on specific master steps.
Controls behave like special samples that can be enabled at specific points in your workflows. However, unlike samples, controls do not need to belong to a project and do not have to be assigned to a workflow.
Add the control samples used in the lab to Clarity LIMS, and enable them for use on steps. When running a step on which control samples are enabled, the lab scientist can add those control samples to the Ice Bucket.
All users logged into the LIMS can access the Controls configuration screen. However, their user permissions determine what they are allowed to do in this screen.
By default, only the Administrator role has the Configuration:update permission. For more on user roles and permissions, see and .
There is no limit to the number of controls you can create or the number of steps on which you can enable a control. When adding a new control, you are not required to enable it on a step. You can do this action at any time.
Control sample/step configuration is bidirectional. Enable a control sample on a step in the following situations:
When adding control samples on the Controls configuration screen (described in this section).
On the main menu, select Configuration.
On the configuration screen, select the Consumables tab, then select Controls.
On the Controls configuration screen, select New Control.
Type a name for the control sample. This name displays in queues of steps on which controls are enabled. This field is the only required field.
[Optional] Enter additional details for the control sample:
Supplier—Enter the commercial vendor name. If this control sample was made in the lab, enter in-house.
Cat. #—Enter the catalog number.
Website—Enter the website of the commercial vendor. If it is an in-house control, enter the URL of the internal web page that contains details of the in-house control. When viewing details for the control sample, lab scientists can select the link to open the web page in a new browser window.
In the Control Use section, note the following defaults:
The status of the new control is set to Active.
The new control is not flagged as a single step only control.
Select Save.
The new control displays in the Control Samples list.
Enabling controls on steps makes them available for use in the lab.
To enable a control sample on a step, complete the following steps:
On the Controls configuration screen in the Control Samples list, select the control to enable on steps.
In the Control Use area, select the protocol that includes the step on which you want to enable the control. Type the first few letters of the protocol to filter the list.
In the adjacent list, select the step on which you want to enable the control.
Enable the control on additional steps, if necessary.
Select Save.
The Control Samples list now indicates that the control has been enabled on a step. Hovering over the 'On 1 step' label displays a popup that shows the protocol and step involved.
The status of a control may be Active or Archived.
Active controls are controls that are in use or available for use in the lab workflows.
Archived controls are controls that are not currently available for use in the lab workflows.
Lab users do not see archived controls when initiating steps.
Configuration details for archived controls are saved, so it is easy to reactivate them.
In the Control Samples list, archived controls are listed in their own group. Select the arrow to expand the list and view control details.
Single step only controls do not progress in workflows. When completing a step, lab users do not need to select a Next Step for these controls.
Use this option to represent single-use, disposable samples such as QC standards, molecular weight ladders, and blanks.
In the Control Samples list, select the control to archive.
On the Status of Control slider, select Archived.
Select Save.
In the Control Samples list, expand the archived control group and select the control to be activated.
On the Status of Control slider, select Active.
Select Save.
When deleting controls, keep the following details in mind:
You can only delete control samples that have not been used in a step.
If a control sample has been recorded in a step, or is currently being used in a step, you cannot delete it. The Delete button is not enabled.
On the Controls configuration screen in the Control Samples list, select the control to delete.
In the Control Sample Details area on the right, select Delete.
Settings saved as part of the master step configuration cannot be configured at step level. On the step configuration form, these settings display with a Locked icon.
Status | What It Means in the Lab | Configuration Implications |
---|---|---|
Setting | Configuration options | What It Means in the Lab |
---|---|---|
Property | Configured on Form | What Happens at Step Level |
---|---|---|
Setting | Options for This Step Type | Default | Description |
---|---|---|---|
Setting | Options for This Step Type | Default | Description |
---|---|---|---|
Setting | Options for This Step Type | Default | Description |
---|---|---|---|
Setting | Options for This Step Type | Default | Description |
---|---|---|---|
Setting | Options for This Step Type | Default | Description |
---|---|---|---|
Setting | Options for This Step Type | Default | Description |
---|---|---|---|
Setting | Options for This Step Type | Default | Description |
---|---|---|---|
Setting | Configured on | Notes |
---|---|---|
Step type | Default Naming Convention Token | Naming Convention Preview |
---|---|---|
Milestone settings configured on the master step remain locked on all steps derived from the master step. In this scenario, the milestone displays on the step configuration form with a Locked icon, indicating that these settings are not configurable at the step level.
NOTE: As with all other step settings, if you enable e-signatures on a master step, the setting displays with a Locked icon and is enabled on all steps derived from that master step.
The LIMS automatically archives the instrument when the expiry date is reached (see ).
By default, the instrument status is set to Active. See .
Return to the Labels configuration screen, select Upload Label List and upload your completed labels list. If there are errors in the list, the upload does not complete. Refer to .
To make changes to the labels within the group, you must upload a modified label list. See .
Upload the modified label list. See .
Upload the modified label list. See .
Field Type | Field Description | Additional Options |
---|---|---|
Additional Options | Usage |
---|---|
You cannot reorder master step fields on the Custom Fields configuration screen. This configuration is instead available on the Record Details milestone. For details, refer to the section of the topic.
Reset password—Choose this option to send the user a link that allows them to reset their login password (see ).
Deleting a user removes them from Clarity LIMS. You may instead prefer to archive the user or temporarily remove their access to the system. For details, see .
When configuring a step (see ).
Property | Configured on Form | What Happens at Step Level |
---|---|---|
Reagent kits | Master Step/Step Settings | Removed. No defaults set. |
Instrument types | Master Step/Step Settings | Removed. No defaults set. |
Automation trigger | Master Step/Step Settings | Reverts to default - Button (manually triggered) |
Sample grouping | Queue, Ice Bucket, Placement milestones | Reverts to default - Group by Containers |
Well sort order | Queue, Ice Bucket, Record Details milestones | Reverts to default - Row |
Sample fields display | Queue, Ice Bucket, Placement, Record Details milestones | No action - the last fields that were configured to display remain there. |
Destination containers | Placement milestone | Reverts to default - uses the Container Type specified in the 'OutputContainerType' Process Type Attribute if set, and Tube otherwise. (If Tube has been deleted from the system, then the first single-well Container Type in the system is used.) Removing the last destination container also removes the ability to set placement on the Master Step (you only have the option to turn on the placement screen if there is at least one multi-well container). |
Destination containers on a QC Step | Placement milestone | Reverts to default - No placement |
Placement pattern | Placement milestone | Reverts to default - Row |
Skip alternating rows, Skip alternating columns | Placement milestone | Reverts to default - No |
Label groups | Add Labels milestone | Reverts to default - First group configured in LIMS (first by creation date, not by name) |
Step data heading | Record Details milestone | Reverts to default - Step Data |
Default group of defaults | Record Details milestone | Removed. No defaults set. |
Step fields display | Record Details milestone | Removed. No defaults set. |
Step field order | Record Details milestone | Reverts to default - Vertical |
File attachment method | Record Details milestone | Reverts to default - Manual |
eSignature | Record Details milestone | Reverts to default - Off |
Sample details heading | Record Details milestone | Reverts to default - Sample Table |
Sample display default | Record Details milestone | Reverts to default - Collapse |
Enable QC flags | Record Details milestone on QC Steps | Reverts to default - No |
File Column Display | Record Details milestone on QC Steps | Reverts to default - Show |
File Attachment Method | Record Details milestone on QC Steps | Reverts to default - Manual |
Derived sample generation
Fixed – For every sample that enters the step, a fixed number of derived samples are generated.
Fixed value set to 1, configurable
The number of derived samples generated is fixed. The number cannot be changed when running the step.
Variable – For every sample that enters the step, a variable number of derived samples are generated at run time.
The number of derived samples generated can be set. This option displays on the Ice Bucket screen.
Measurement generation
Fixed – For every sample that enters the step, a fixed number of measurements are generated.
Fixed value set to 1, configurable
The number of measurements generated is fixed. The number cannot be changed when running the step.
Variable – For every sample that enters the step, a variable number of measurements are generated at run time.
The number of measurements generated can be set. This option displays on the Ice Bucket screen.
Measurement generation
None - For every sample that enters the step, 0 measurements are generated.
None (not configurable)
No measurements are generated, and this cannot be changed at run time.
Derived sample generation
Fixed– For every sample that enters the step, a fixed number of labeled derived samples is generated.
Fixed value set to 1, configurable
The number of derived samples generated is fixed. The number cannot be changed when running the step.
Variable– For every sample that enters the step, a variable number of labeled derived samples is generated at run time.
The number of derived samples generated can be set. This option displays on the Ice Bucket screen.
Aliquot generation
Fixed – For every sample that enters the step, a fixed number of aliquots is used to generate pools.
Fixed value set to 1, not configurable
The number of aliquots used to generate pools is fixed and displays on the Pooling screen. This value cannot be changed when running the step.
Variable – For every sample that enters the step, a variable number of aliquots is used to generate pools at run time.
The number of aliquots used to generate pools can be set. This option displays on the Ice Bucket screen.
Measurement generation
Fixed– For every sample that enters the step, a fixed number of measurements are generated.
Fixed value set to 1, configurable
The number of measurements generated by the step is fixed and cannot be changed when running the step.
Variable– For every sample that enters the step, a variable number of measurements are generated at run time.
The number of measurements generated can be set. This option displays on the Ice Bucket screen.
Measurement generation
Fixed– For every sample that enters the step, a fixed number of measurements are generated.
Fixed value set to 1, not configurable
The number of measurements generated by the step is fixed. This is not configurable and cannot be changed when running the step.
Setting | Configured on | Notes |
---|---|---|
Step type | Master step | |
Output generation | Master step | |
Output naming convention | Master step | |
Automation | Master step | Automations are enabled on the master step. Automation triggers may be set on the master step or the step. |
Instrument types | Master step or step | |
Control types | Master step or step | |
Reagent kits | Master step or step | |
Step milestones | Master step or step | Some milestone settings must be configured on the master step. |
Label groups | Master step or step | |
Label uniqueness | Master step | |
Step file placeholders | Master step | |
Step type | Default Naming Convention Token | Naming Convention Preview |
---|---|---|
Standard | {InputItemName} | Input Sample |
Standard QC | {InputItemName} | Input Sample |
Aggregate QC | None - Aggregate QC steps do not produce outputs. | Not applicable |
Add Labels | {InputItemName}-{AppliedReagentLabels} | Input Sample-N701-N501 (TAAGGCGA-TAGATCGC) |
Pooling | {PoolName} | Pool #1 |
Analysis | {InputItemName} | Input Sample |
Demultiplexing | {InputItemName} (FASTQ reads) {AppliedReagentLabels} | Input Sample (FASTQ reads) N701-N501 (TAAGGCGA-TAGATCGC) |
Resolution Option | Input Result | Output Result |
---|---|---|
Next Step | Remains in workflow, but not queued for any steps | Queued for next step in the protocol |
Mark Complete (last step in protocol; not last protocol in workflow) | Remains in workflow, but not queued for any steps | Queued for first step in next protocol |
Mark Complete (last step in last protocol in workflow) | No longer in the workflow | No longer in the workflow |
Repeat This Step | Requeued for same step | No longer in the workflow |
Remove from Workflow | No longer in the workflow | No longer in the workflow |
Request Review | In progress for same step | In progress for same step, but not in any queues |
Rework from Earlier Step | Queued for specified earlier step | No longer in the workflow |
Complete and Repeat (not last step in protocol) | Requeued for same step | Queued for next step in the protocol |
Complete and Repeat (last step in protocol; not last protocol in workflow) | Requeued for same step | Queued for first step in next protocol |
Complete and Repeat (last step in last protocol in workflow) | Requeued for same step | No longer in workflow |
Resolution Option | Input Result | Output Result |
---|---|---|
Next Step | Remains in workflow, but not queued for any steps | Queued for next step in the protocol |
Mark Complete (last step in protocol; not last protocol in workflow) | Remains in workflow, but not queued for any steps | Queued for first step in next protocol |
Mark Complete (last step in last protocol in workflow) | No longer in the workflow | No longer in the workflow |
Repeat This Step | Requeued for same step | No longer in the workflow |
Remove from Workflow | No longer in the workflow | No longer in the workflow |
Request Review | In progress for same step | In progress for same step, but not in any queues |
Rework from Earlier Step (option not available for pooled output) | NA | NA |
Complete and Repeat (not last step in protocol) | Requeued for same step | Queued for next step in the protocol |
Complete and Repeat (last step in protocol; not last protocol in workflow) | Requeued for same step | Queued for first step in next protocol |
Complete and Repeat (last step in last protocol in workflow) | Requeued for same step | No longer in workflow |
Resolution Option | Input Result | Output Result |
---|---|---|
Next Step | Remains in workflow, but not queued for any steps | Queued for next step in the protocol |
Mark Complete (last step in protocol; not last protocol in workflow) | Remains in workflow, but not queued for any steps | Queued for first step in next protocol |
Mark Complete (last step in last protocol in workflow) | No longer in the workflow | No longer in the workflow |
Repeat This Step | Remains in workflow but not in any of its queues | Replicates re-queued for replication step |
Remove from Workflow | No longer in the workflow | No longer in the workflow |
Request Review | In progress for same step | In progress for same step, but not in any queues |
Rework from Earlier Step (this same step) | Requeued for same step | No longer in workflow |
Rework from Earlier Step (another previous step) | Requeued for specified step | No longer in workflow |
Complete and Repeat (not last step in protocol) | Requeued for same step | Queued for next step in the protocol |
Complete and Repeat (last step in protocol; not last protocol in workflow) | Requeued for same step | Queued for first step in next protocol |
Complete and Repeat (last step in last protocol in workflow) | Requeued for same step | No longer in workflow |
Store for Later (only for replicates) | Remains in workflow | Not in any queue |
Step details | Description |
---|---|
Reagent lots | The reagent lots that were originally used when executing the step can be removed, and new ones added, but at least one lot must be selected. Reagent lots of any status (Pending, Active, Archived) can be used. Archived and expired lots can be selected. |
Instruments | A new instrument can be selected. At least one instrument must be used. Instruments of any status (Active, Archived, Expired) can be used. Archived instruments can be selected. |
Step custom fields | Step custom fields can be modified with the same constraints as when the step was originally executed (eg, read-only fields remain read-only, required fields remain required). |
Sample table custom fields | Sample table custom fields can be modified with the same constraints as when the step was originally executed (eg, read-only fields remain read-only, required fields remain required). The QC flag can be changed. |
Files | Existing files can be removed and new files can be uploaded. Files that are script generated cannot be modified. |
Active
These workflows are currently in use - or available for use - in the system.
Samples can be assigned to these workflows so that lab scientists can work on them in Lab View.
An Active workflow cannot be modified or deleted.
Protocols in an Active workflow cannot be reordered.
Protocols in an Active workflow can be modified or deleted, and their steps can be modified.
An Active workflow can be made unavailable for use by changing its status to Archived. Samples that are currently queued or in progress for the workflow can complete it, but new samples cannot be added.
When a workflow in the Active state is saved, it can only transition to the Archived state.
Pending
These workflows have not yet been activated.
These workflows are not available for use in the lab.
These workflows do not display in Lab View.
Samples cannot be added to these workflows from the Projects and Samples screen.
Pending workflows can be modified, for example, by renaming them, or by adding, modifying, or removing protocols.
Pending workflows can be activated.
Archived
These workflows are currently not in use in the system.
These workflows do not display in Lab View.
Samples cannot be added to these workflows from the Projects and Samples screen.
Samples that are currently queued or in progress for an Archived workflow can complete it.
After a workflow is saved in Archived state, it can only transition to the Active state.
An Archived workflow cannot be modified or deleted.
In an Archived workflow, protocols cannot be reordered.
QC Protocol?
QC protocols comprise a series of QC steps.
All steps share a queue of samples.
Samples do not move sequentially from step to step. Instead, they appear available/unavailable for a particular step based on configured filtering criteria.
Non-QC protocols typically comprise a series of non-QC steps. However, you can include a QC step as part of a Non-QC protocol.
Each step has its own queue of samples.
Samples move sequentially through the steps, until they have completed all steps in the protocol.
Protocol Type
Sample Prep
Library Prep
Sequencing
Data Analysis
Sample Analysis
Other
See Non-QC protocol information in previous row.
Show in Lab View?
No
These protocols are hidden for both lab scientists and administrators in Lab View and are therefore not available for use in the lab.
These protocols are only visible to administrative users in the configuration area.
Yes
These protocols are displayed in Lab View and can be used by lab scientists to perform their work in the lab.
Capacity
The sample capacity of the protocol. This depends on the number of lab scientists in your facility, and the number of samples they can work with at any given time.
The Capacity setting controls the highlighting on the Overview and Projects dashboards, allowing you to see at a glance which protocols are approaching or exceeding sample capacity.
Token | Resolves to | Usage | Example |
---|---|---|---|
a | AM/PM marker | The system returns the marker in the same format, regardless of how many times the token is repeated. | If runtime is in the afternoon: a resolves to PM |
H | Hour in day (24-hour clock) | The number of times you repeat the token determines the minimum number of digits returned, with the system padding the value with zeros if necessary. | If runtime is 11 PM: H resolves to 23; HHH resolves to 023 |
h | Hour in AM/PM (12-hour clock) | The number of times you repeat the token determines the minimum number of digits returned, with the system padding the value with zeros if necessary. | If runtime is 11 PM: h resolves to 11; hhh resolves to 011 |
m | Minute in hour | The number of times you repeat the token determines the minimum number of digits returned, with the system padding the value with zeros if necessary. | If runtime is 11:10: m resolves to 10; mmm resolves to 010 |
s | Second in minute | The number of times you repeat the token determines the minimum number of digits returned, with the system padding the value with zeros if necessary. | If runtime is 11:10:23: s resolves to 23; sss resolves to 023 |
S | Millisecond | The number of times you repeat the token determines the minimum number of digits returned, with the system padding the value with zeros if necessary. | If runtime is 1:10:23:01: S resolves to 1; SS resolves to 01 |
z | Time zone - general | One token returns the abbreviated time zone. Four tokens return the full name. | If runtime is in the Pacific time zone, during daylight savings: z resolves to PDT; zzzz resolves to Pacific Daylight Time |
Z | Time zone - RFC 822 | The system returns the time zone in the same format, regardless of how many times the token is repeated. | If runtime is in the Pacific time zone, during daylight savings: Z resolves to -0700 |
Field Type | Field Description | Additional Options |
---|---|---|
Text | Field in which to type a line of text. Field length is only limited by the database field used to store it. PostgreSQL limit - 1 Gb, Oracle limit - 4000 characters. | Not applicable |
Numeric | Field in which to type a number. | |
Hyperlink | Field containing a link to a website URL. Select the link to open the URL in a web browser. | None |
Text Dropdown | Field in which to select from a list of predefined text options. | |
Numeric Dropdown | Field in which to select from a list of numbers. | |
Hyperlink Dropdown | Field in which to select from a list of website URLs. Select a link to open the URL in a web browser. | |
Multiline Text | Field in which to type multiple lines of text. | Not applicable |
Toggle Switch | A field to toggle between Yes and No values. | |
Date | A calendar tool to select a date. | Not applicable |
Additional Options | Usage |
---|---|
Range From, To | Use to define the range within which numeric values must fall. At run time, the user is prevented from entering a number outside of the defined range. |
Decimal Places Displayed | Use to specify the number of decimal places to display in a numeric field. This value is used for display purposes only. The field stores the value as input by the user or script. Note: If the user edits the value of a Numeric field (or gives the field focus by selecting inside it), the value that displays (including the number of decimal places) is written to the database, overwriting the existing value. For this reason, we recommend that you increase the number of decimal places to display to ensure sufficient precision. |
Dropdown Items | Use to create a list of options to select at run time. |
Custom Entries | Use to control whether or not the user may enter a value at run time. If set to No, a value from the predefined drop-down list must be selected. If set to Yes, a value can also be typed directly into the field. |
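As an illustration of how the Range and Decimal Places Displayed options described above interact, the following sketch mimics the documented behavior. The function names are hypothetical; the LIMS itself stores the full-precision value and only rounds it for display.

```python
# Illustrative sketch only (hypothetical helpers): mimics the documented behavior
# of the Range and Decimal Places Displayed options for a Numeric custom field.
def value_in_range(value, range_from, range_to):
    # At run time, the user is prevented from entering a value outside this range.
    return range_from <= value <= range_to

def displayed_value(stored_value, decimal_places):
    # Display formatting only; the stored value keeps the precision it was entered with.
    return f"{stored_value:.{decimal_places}f}"

print(value_in_range(1.2345, 0, 10))   # True - within the configured range
print(displayed_value(1.2345, 2))      # "1.23" is displayed; 1.2345 remains stored
```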
This section describes two tasks that Clarity LIMS administrators are often required to perform:
Temporarily prevent a user from logging in by archiving the user.
Email a link to a user that allows them to reset their Clarity LIMS password.
While Clarity LIMS does not enforce password changes, for best practice and security, we recommend that user passwords are changed frequently.
On the main menu, select Configuration.
Select User Management.
Select the Users tab to see a list of all current active and archived users in the system, categorized by role.
Select the user to archive.
The details for the selected user display in the User Details area on the right. The Status slider displays the current status of the user.
Select Archived to temporarily archive the user.
Select Save.
By default, every new user created in Clarity LIMS is an active user and can sign in to Clarity LIMS with their username and password.
On the main menu, select Configuration.
Select User Management.
Select the Users tab to see a list of all current active and archived users in the system.
Select the user whose password is to be reset.
The details for the selected user display in the User Details area on the right.
Select Login and Password and select Reset password.
This sends the user a link that allows them to reset their password.
The Send login instructions option sends the user the following information:
The URL for the login screen.
Instructions on how to set their login password.
This email is sent automatically when a new user is created, but you may occasionally need to resend it.
When Clarity LIMS is running scripts via the External Program Plugin mechanism, it is not uncommon for these scripts to rely upon a file that contains information germane to the script. A common example would be using the sample input file generator script that is part of the Lab Instrument Toolkit. This script merges runtime information within a Clarity step into a file whose format is directed by a 'template' file.
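Conceptually, such a script reads a template containing placeholders and substitutes step runtime values into it. The sketch below is only an approximation of that idea, not the Lab Instrument Toolkit script itself; the template path, placeholder names, and values are hypothetical.

```python
# Conceptual sketch only (not the Lab Instrument Toolkit script): merge step
# runtime values into an output file whose layout is driven by a template.
from string import Template

def generate_file(template_path, output_path, values):
    with open(template_path) as template_file:
        template = Template(template_file.read())
    with open(output_path, "w") as output_file:
        output_file.write(template.safe_substitute(values))

# Example: a template line such as "${sample_name},${well},${concentration}"
# is filled in with values gathered from the step (hypothetical path and values).
generate_file(
    "/opt/gls/clarity/customextensions/templates/example_template.csv",
    "driver_file.csv",
    {"sample_name": "Sample-1", "well": "A:1", "concentration": "25.0"},
)
```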
Under the old method, template files must be saved to a folder accessible to the automation worker node, typically a directory under /customextensions/ on the Clarity LIMS server.
If a script needs a template file, the file is specified by including its full path in the syntax that invokes the script.
As of Clarity LIMS v5.1, template files can (optionally) be attached directly to an automation via the GUI.
We recommend that you use a combination of both methods, as follows.
Use the embedded template while developing the template. During this process, having the template file easily available for editing is helpful. After the template is finalized, move it to the server and adjust the automation command line to use the server path/filename instead of the file token.
This method allows for easy, iterative testing and precise traceability for production work. This method also facilitates reliable migrations involving the config-slicer tool and coordinated movement of associated /customextensions/ files.
Config-slicer does not currently migrate automations that need template files without additional manual manipulation after the configuration migration. Regardless of method, you must manipulate the system manually to complete the migration of the template files.
The Clarity LIMS automated QC system is configurable. This allows administrators to determine which QC master steps and steps are required for aggregation, the criteria to apply to each master step/step, and the field values to be copied up to aggregation.
The following features are provided:
Preconfigured protocols for DNA Initial QC and RNA Initial QC that include master step templates for Bioanalyzer, NanoDrop, Qubit, PicoGreen, CaliperGX, TapeStation, and Aggregate QC.
Custom field-based storage of key QC results for both automated and spreadsheet-based manual storage.
Custom field-based criteria (Source Data Field, Operator, and Threshold Value) for QC steps that allow for automatic assignment of QC flags triggered on the storage of a QC result file.
QC protocol filter configuration that allows you to determine to which QC step samples are queued and which must be completed before aggregation can occur.
Automated aggregation of QC results and assignment of QC flags, from individual step right up to the sample undergoing QC, with the option of using custom field-based configuration to update Concentration (and other fields) from a particular QC result.
Easy access to individual sample QC measurements and flags, allowing you to review results, see the criteria evaluated, and the resulting QC flags assigned.
Excel-based log files, attached to the aggregation step, clearly show the criteria evaluated, the resulting QC flag assigned, and which individual QC test results were used to update the custom field values on the sample undergoing QC.
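To show how the custom field-based criteria listed above (Source Data Field, Operator, and Threshold Value) resolve to a pass/fail flag, here is a minimal illustrative sketch. It is not the LIMS implementation, and the operator symbols and values are examples only.

```python
# Illustrative sketch only: how a Source Data Field value, an Operator, and a
# Threshold Value combine into a pass/fail QC flag. Operator symbols are examples.
OPERATORS = {
    ">=": lambda value, threshold: value >= threshold,
    "<=": lambda value, threshold: value <= threshold,
    ">": lambda value, threshold: value > threshold,
    "<": lambda value, threshold: value < threshold,
}

def qc_flag(measured_value, operator, threshold):
    return "PASSED" if OPERATORS[operator](measured_value, threshold) else "FAILED"

# Example: a Concentration result of 25 ng/uL evaluated against a ">= 20" criterion.
print(qc_flag(25.0, ">=", 20.0))   # PASSED
```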
This section describes how to update some of the details associated with your profile, including your password, email address, and profile photo.
After signing into Clarity LIMS, you can update some of the details associated with your profile, including your password, email address, and profile photo.
If the user is an LDAP account, then you cannot update the profile in Clarity LIMS.
In Clarity LIMS, at the right of the menu bar, select your username and then select Profile.
The Profile page opens, displaying the details associated with your user profile.
On this page, you can:
Change your password.
Change your email address.
Upload an image to associate with your profile.
On the Sign In screen, click the Forgot your password link.
In the Reset Your Password screen, enter your username or email address and click Submit.
Use an automation with the copyUDFs script to copy custom fields from a step input to a step output. This example uses the Library Normalization master step, and shows how to copy the Concentration field from the step input samples to the output samples.
On the Lab Work configuration screen, select the Automation tab, then select the Step Automation tab.
Add a new automation.
Name the automation and enter the channel name.
In the Command Line field, copy the following command, replacing the { } placeholders with your own information:
bash -c "/opt/gls/clarity/bin/java -jar /opt/gls/clarity/extensions/ngs-common/v5/EPP/ngs-extensions.jar script:copyUDFs -u {username} -p {password} -i {processURI:v3:http} -f Concentration"
In the Automation Use section, enable the automation on the desired master step (this example uses the Library Normalization master step).
Save the automation.
Return to the Lab Work configuration screen and select the Lab Work tab. In the Master Steps list, select the master step on which you enabled the automation.
In the Automation section, the new automation is listed. Configure as follows:
Trigger Location: Record Details
Trigger Style: Automatic upon entry
NOTE: Automation triggers can be configured at the master step or the step level. If configured on the master step, the trigger settings are locked on all steps derived from that master step.
You can add the Concentration field as an expanded view field or as a column header (for details, refer to #configure-record-details-milestone).
Save your changes.
At run time:
When the Record Details screen is entered, the automation is automatically triggered.
The copyUDFs script runs and copies the Concentration field values from the step input samples to the output samples.
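For orientation, the sketch below approximates what such a field copy looks like against the Clarity LIMS REST API artifacts endpoint. It is not the copyUDFs implementation; the host, credentials, and LIMS IDs are placeholders.

```python
# Conceptual sketch only (not the copyUDFs implementation): copy a UDF value from
# an input artifact to an output artifact through the REST API. Host, credentials,
# and LIMS IDs are placeholders.
import requests
from xml.etree import ElementTree as ET

BASE = "https://clarity.example.com/api/v2"   # placeholder host
AUTH = ("apiuser", "apipassword")             # placeholder credentials
UDF = "{http://genologics.com/ri/userdefined}field"

def read_udf(limsid, name):
    xml = ET.fromstring(requests.get(f"{BASE}/artifacts/{limsid}", auth=AUTH).content)
    return next((f.text for f in xml.iter(UDF) if f.get("name") == name), None)

def write_udf(limsid, name, value):
    url = f"{BASE}/artifacts/{limsid}"
    xml = ET.fromstring(requests.get(url, auth=AUTH).content)
    field = next((f for f in xml.iter(UDF) if f.get("name") == name), None)
    if field is not None and value is not None:
        field.text = value
        requests.put(url, data=ET.tostring(xml), auth=AUTH)

# Copy Concentration from an input artifact to an output artifact (placeholder IDs).
write_udf("92-2000", "Concentration", read_udf("2-1000", "Concentration"))
```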
Clarity LIMS users are assigned roles. These roles control permissions and the ability to:
Access certain Clarity LIMS features.
Perform certain actions.
Sign in to the Clarity LIMS interfaces.
In a typical LIMS lab environment, there are four primary user roles:
The following sections describe the default permissions of the four primary user roles. Some user role permissions are configurable (see Configured Role-Based Permissions).
By default, both the System Administrator and Facility Administrator user roles have access to:
All configuration areas of the Clarity LIMS web interface, allowing them to:
Add and configure workflows, protocols, and steps.
Add consumables—reagents, controls, instruments, reagent labels, containers.
Add and configure custom fields.
Add and configure automations.
Supervisory and lab management functions in the Clarity LIMS web interface, allowing them to:
Review escalations.
Remove samples from workflows.
Move samples into the next step in a workflow.
Access the Overview and Projects dashboards.
User management, allowing them to:
Create, modify, and delete user accounts.
Modify user roles and permissions.
Approve access requests from external collaborators.
The Researcher role is typically assigned to the laboratory scientist. By default, individuals who are assigned this user role are able to:
Log in to Clarity LIMS.
Access Lab View.
Manage and work with samples contained in all projects in the system.
Edit their own user profiles—ie, they can change their own passwords and other profile information.
Access three Consumables configuration areas: Reagents, Controls, and Instruments, and do the following.
View reagent kits and add new reagent lots to those kits (researchers cannot create reagent kits).
View controls.
View instrument types and add new instruments to those instrument types (researchers cannot create instrument types).
Reactivate expired (archived) instruments by resetting the expiration date.
The Collaborator role is assigned to external collaborators who interact with Clarity LIMS using the LabLink Interface.
The Collaborator role is supported in v5.3 and later. It is not supported in v5.0.x to v5.2.x.
An external person can request a user ID through LabLink. By default, when the request is approved by an administrator, the collaborator is able to:
Sign in to LabLink.
Create, view, and delete projects. (Collaborators are automatically given full permissions to projects they create.)
Submit samples to projects, and delete samples from projects.
By default, collaborators do not have access to the main Clarity LIMS web interface.
With Clarity LIMS v6.2 or later, there are two ways to search for items:
Basic Search —Searches the entire system or within a specific category, like samples, projects, containers, and protocol steps.
Advanced Search —Locates information stored in the system using specific criteria that cannot be defined in Basic Search.
Genealogy view is an interactive and hierarchical view of the history of an experiment processed through Clarity LIMS. A genealogy starts with a submitted sample and progresses with nodes nested beneath it. Information is displayed in a hierarchy and shows parent - child relationships between these nodes. For example, a derived sample may be an input to a step that produces derived sample outputs. This sample output is then input to the next step in the workflow.
Genealogy can be used to:
View submitted samples and derived samples, on each step the samples pass through.
Troubleshoot each step by viewing its outcome and the possible downstream outcomes, including the child steps nested within their parent steps.
View QC flags relating to sample measurements.
View custom fields for items in the genealogy.
View and download files.
On the Projects & Samples screen, or using the search toolbar, navigate to a sample.
Projects & Samples screen: Select a project. All samples in the selected project are listed in the Samples & Workflow Assignment section at the bottom of the screen.
Search toolbar: Select Search, then select Sample from the drop-down list. Type a search term into the adjacent field and press the Enter key. You can type the complete search term, or part of it followed by an asterisk. (For details, see #search-for-samples.)
Select the sample to open the search results page. If using the search toolbar, the search results page lists submitted samples, derived samples, and measurements that match the search criteria.
Select the genealogy icon next to the submitted sample name. Selecting a derived sample genealogy icon opens a partial genealogy. This provides information from the selected derived sample onwards. A partial genealogy does not show how the selected derived sample was produced, or provide any information about the original submitted sample.
Genealogy starts with a submitted sample, and progresses with additional nodes (steps, derived samples, custom fields, and/or measurements) nested beneath it.
Steps—may be parent or child nodes. Most often, they are parent nodes with child nodes that are derived samples or measurements. The nodes for steps do not contain an icon.
Parent nodes—produce nested child nodes. Select the plus icon to expand the nested child nodes.
Child nodes—listed beneath parent nodes. Select the minus icon to collapse the child nodes beneath the parent node.
Submitted samples—listed at the top of a complete genealogy (not listed with a partial genealogy). Cannot be a child node.
Derived samples—denoted by an Erlenmeyer flask icon, these nodes contain partial genealogies. Can be parent or child nodes.
Custom fields—denoted by an 'i' information icon, these nodes are created by parent nodes with custom field entries. Can be parent or child nodes.
Measurements—denoted by a 'paper file' icon, these nodes represent measurements (typically produced by a QC step). Measurement nodes may also serve as file placeholders. Can be parent or child nodes, but do not produce child nodes.
Genealogy view can be customized to show or hide specific nodes.
Select the plus icon next to a parent node to show the nested hierarchy of child nodes. The plus icon changes to a minus icon.
Inversely, select the minus icon next to a parent node to hide the nested child nodes. Child nodes with identical indentation are siblings.
By default, the genealogy lists parent custom field nodes, but hides the child nodes.
To expand the child custom field nodes, select Show All Custom Fields from the drop-down menu in the top-right corner.
Select Hide All Custom Fields to return to the previous view.
Some steps produce numerous measurements and file placeholder nodes. By default, the genealogy lists all file placeholder nodes.
Select Hide All Files from the drop-down menu to hide all placeholders and child nodes.
Select Show Available Files to limit the view to file placeholders with attached files.
Select Show All Files to return to the default view.
Genealogy lists additional information in the following areas:
The genealogy information box contains additional information useful to the user.
Select the blue hyperlink text associated with a step name to launch the step screen and view detailed step information in a new browser tab.
Select the blue hyperlink text associated with a measurement node name, or a file placeholder node name, to download an attached file.
Select the blue hyperlink text listed in the LIMS ID column to open a new browser tab, and view the XML representation of the node in the API (requires API access).
Measurement nodes produced by steps that pass required thresholds are denoted by QC flag icons (listed under the QC flag column). To ensure they are visible when child nodes are hidden, QC flags are always shown on the parent measurement node rather than the child node.
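As a minimal sketch of what the LIMS ID links described above expose, the following retrieves the XML representation of a node through the REST API. The host, credentials, and LIMS ID are placeholders, and API access is required.

```python
# Minimal sketch: fetch the XML representation of a genealogy node by its LIMS ID.
# Host, credentials, and the LIMS ID are placeholders; derived samples and
# measurements are exposed as artifacts in the REST API.
import requests

BASE = "https://clarity.example.com/api/v2"   # placeholder host
AUTH = ("apiuser", "apipassword")             # placeholder credentials

response = requests.get(f"{BASE}/artifacts/2-1000", auth=AUTH)   # placeholder LIMS ID
print(response.text)   # raw XML representation of the node
```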
The Advanced Search function is included in Clarity LIMS v6.1 or later. Advanced Search allows you to search for specific criteria and create relationships that are used to locate information stored in the system. You can use Advanced Search to build detailed search strings (including grouped and nested strings). These search strings provide the search engine with precise instructions on what to look for in the system.
You can use the search toolbar to select Advanced Search from the drop-down list. This selection takes you to the Advanced Search page. You can also access the Advanced Search page directly at /clarity/advanced-search.
Users with Read-Only permissions cannot perform a search using Advanced Search. If a user with Read-Only permissions must use Advanced Search, enable access as follows.
Locate the file on the server (/opt/gls/clarity/tomcat/current/lib/activity-management-ui-config.groovy). Create a backup of the file before proceeding.
Open the file and add the following code:
// To grant Advanced-search access to Readonly user
readonly.allowUrlMap = [
[post: ['/clarity/api/advanced-search']]
]
Restart Clarity LIMS.
From the query builder panel, select a category from the Search drop-down list.
Select Add to add one or more search conditions to the selected category.
If a search condition has other conditions following it, an AND/OR operator is displayed. A search condition contains the following components:
Constraining Entity
Field
Constraint
Constraint Value
To remove a condition, select Delete (X) to the right of the condition.
Use grouping controls to group search conditions by selecting the Show Grouping Controls checkbox. When these controls are enabled, select the checkboxes next to the conditions, and then select Group to group them together. The conditions and groups must be adjacent to each other.
The Query Preview field uses the numbers of each condition to display your search string. To separate the conditions, select the checkbox within the group, then select Ungroup.
The Undo and Redo operations record the state of a search string when you perform one of the following operations:
Add
Delete
Group
Ungroup
Clear all groups
When you select Undo, the string returns to the state before the last operation. When you select Redo, the string reverts to the state after the Undo operation.
After you perform a search, a table shows the results. By default, this table displays the first 500 results. Use the Configuration Property tool to change the default display.
To export the search results, select Export (down arrow) to the right of the table. Filters applied to the table are not preserved in the exported search results.
The Save Query operation is enabled after a search is completed using a search string. You can use the Save Query panel to save search strings as follows.
Select Save Query.
Enter a descriptive name for the search string.
Select Save.
The new search query shows in the Saved Query panel and can be used in future searches.
To view a saved query, select the query from the Saved Query panel. The Query Builder panel shows the conditions and details of the selected search query.
To modify the search query, select the applicable conditions and edit them. When you update a saved query, an EDITED label displays to the left of the query name.
To delete a saved query, select Delete (trash bin) to the left of the query.
To share a saved query, select Share (right arrow) next to the applicable query in the Saved Query panel.
A text file is downloaded that contains the selected query details.
Import a saved search query as follows.
Select Import Query.
Browse for the text file and select Open.
The query builder panel shows the details of the imported search query.
After performing a search with the imported query, select Save Query.
Enter a descriptive name for the search query.
Select Save.
The imported search query shows in the Saved Query panel and can be reused in future searches.
Project notes are made through the Project Overview tab. Sample notes are made through the project Sample tab.
When this feature is active, LabLink sends an email notification if a new project note or sample note is created.
The email notification is sent to the owner of the project/sample or to the distribution list configured in properties (lablink.admin.email), depending on who saved the note.
Notes saved by the project/sample owner produce email notifications sent to the distribution list.
Notes saved by anyone other than the project/sample owner produce email notifications sent to the project/sample owner.
Project note email notifications contain a link to the project details Project Overview page. Sample note email notifications contain a link to the project details Sample page.
NOTE: This feature is not active by default. To receive email notifications for notes, send a request to the Support team.
The Resource Materials tab provides LabLink users with resource materials from the lab. The lab can upload different types of resources from the Configuration tab. There are two types of resource materials:
Sample Submission Templates—Templates that can be used to submit samples. Each template contains headers and custom fields that are required for each sample.
Supplementary Materials—Documents that can be used as part of the sample submission process.
The Contact Us tab provides LabLink users with information on how to contact the lab. The lab can update this information through the Configuration tab.
The Projects tab is the landing page that displays after signing in to LabLink. The view of the Projects tab differs between administrators and collaborators:
LabLink administrators—the Projects tab displays all projects submitted to the lab.
Collaborators—the Projects tab displays only the projects submitted to the lab by the current collaborator.
To create a project and submit samples, go to the Projects tab and select Create.
The Create A New Project guided sample submission form displays. This form includes the following steps:
Enter general information for the project, as follows:
Enter the project name.
[Optional] Enter project notes for the lab.
Select Continue.
By default, the Project Name and Project Notes fields are always shown. Other project fields can display, depending on the configuration of custom fields by the lab.
In this step, upload a sample submission document containing sample information for the lab.
Download a sample submission document template from the Resource Materials tab (the lab administrator uploads the templates). Populate the template with the sample information.
Select Browse Document and locate the sample submission document. Select Open to upload the document.
After the document has been added, LabLink validates the headers and fields. If the headers and fields are not valid, error messages display.
After successfully uploading a document, a list of fields populated with sample information displays for review.
If the sample information is incorrect, replace the existing document by selecting Replace.
If the sample information is correct, select Continue.
In this step, upload additional documents to share with the lab regarding this project. Skip this step if additional documents are not needed.
[Optional] Select Browse Documents and locate the document to be shared with this project submission. Select Open to upload the document.
Upload additional documents or select Continue.
In this step, review a summary of the project and uploaded documents before submitting the project and samples to the lab.
If the lab has configured a disclaimer for sample submission, the disclaimer is available for review.
After reviewing, select all required checkboxes.
The Submit Project button becomes available.
Select Submit Project.
After the project has been successfully created and submitted to the lab, a confirmation message displays.
[Optional] Select View Confirmation to print an overview of the submitted samples.
The Project Submission Confirmation page opens in a new tab. The page includes an overview of project information, lines for a signature and date of signature, and a list of all samples submitted.
This page can be printed and shipped to the lab with the submitted samples.
Once submitted, all projects are associated with one of the three following status types:
Pending—The lab has not started processing the samples (or samples have not been received).
Open—The lab has received the samples and the samples are being processed.
Closed—The lab has completed processing the samples (or the lab has decided to close the project).
These three status types correlate to the status shown in Clarity LIMS. By changing the status of the project in Clarity LIMS, the status automatically changes in LabLink.
After samples are submitted to the lab, the project is listed with a Pending status until the lab changes the status in Clarity LIMS.
To search for projects or samples in LabLink, enter a term in the search box and select one of the following criteria:
Projects—The search is performed on all projects.
Sample—The search is performed on all samples.
To view a single project, select a project in the project list.
LabLink is a sample submission portal that is part of Clarity LIMS. LabLink allows end users of the lab (eg, principal investigators, clinicians, external labs) to submit samples to the lab for processing. By allowing end users to submit samples through LabLink, the lab benefits in the following ways:
Save time and avoid manual errors during sample accessioning by automatically receiving submitted sample information in Clarity LIMS.
Easily configure LabLink to include sample submission templates and supplementary materials (eg, shipping instructions). End users of the lab use these materials during the sample submission process.
Provide progress updates and publish files to end users of the lab.
To sign in to LabLink, open the LabLink URL associated with the Clarity LIMS instance for the lab.
On the Sign In screen, the following actions are available:
Sign in to LabLink with a user ID and password.
Request a user ID by selecting Request for a User ID.
Request to reset a password by selecting Forgot Password?.
Reset a password after multiple failed sign-in attempts.
The Users tab is only available for LabLink admins. This tab displays a list of pending requests and all users. If there are any pending requests, a red dot appears on the Users tab.
In the All Users section, only approved users display. The account status for each user shows as Active or Deactivated.
Pending requests require a LabLink admin to approve or deny the request. To approve or deny the request, complete the following steps:
Select the name of the pending request.
A Review User Request window displays with the option to approve or deny.
Approve or deny the pending request.
To approve, update any of the prepopulated fields and provide a user ID. When all fields are correct, select Approve and an email is sent to the requester stating that the request has been approved.
To deny, select Deny and provide the reason for the denial. After entering in the reason, select Deny and an email is sent to the requester with the reason for denial.
To deactivate a user, edit the user information as follows:
Select the name of a user.
The User Information screen displays.
Select Edit.
Deselect the Active checkbox (under Account Status) to deactivate the user.
Save changes.
Create a new user from the Users tab:
Select Create.
The Create A New User screen displays.
Define the user information, which includes the following fields:
First Name
Last Name
Title [optional]
Phone Number
Email Address
User ID
Role
Submitting Lab Name (Account).
Select Create.
LabLink offers the following two roles:
An admin role grants access to all submitted projects and samples, resource materials, users, configuration, and the Contact Us tab. The LabLink admin role is equivalent to a system administration role.
A collaborator role grants access to the submitted projects and samples from that user, available resource materials, and the Contact Us tab. The collaborator role in LabLink does not grant access to Clarity LIMS.
All users with a collaborator role in Clarity LIMS appear in the LabLink All Users list.
Standard
Standard QC
Aggregate QC
Add Labels
Pooling
Analysis
Demultiplexing
This section describes how to add and configure the three types of automations in Clarity LIMS: step automations, project automations, and derived sample automations.
To access the Automation configuration screen, the Configuration:update permission is required. Users who do not have the Configuration:update permission do not see the Automation tab.
By default, only the Administrator role has the Configuration:update permission. For more on user roles and permissions, see User Roles and Configured Role-Based Permissions.
You can create three types of automations in Clarity LIMS: step automations, project automations, and derived sample automations.
Step automations—Actions that are triggered when running samples through a step. Configure them to be triggered automatically (at the start/end of the step, or when a particular screen is entered or exited), or manually (when selecting a button on the Record Details screen). The automations are enabled on the master step, but the trigger points are configured at the master step or step level. See #add-a-step-automation
Project automations—Actions that users can run on submitted samples, directly from the Projects & Samples screen. For example, you might configure an automation that gives the ability to assign the samples to a workflow. See #add-a-project-automation
Derived sample automations—Actions that users can run on derived samples, directly from the Projects dashboard. For example, you could configure an automation that gives the ability to queue selected samples for a new workflow. In this case, the automation would trigger a custom script created for this purpose. See #add-a-derived-sample-automation
Step automations are reusable. After you have created an automation, you can enable it on multiple master steps.
If you intend the automation to be triggered manually, the name you choose for the automation is used to name the button that initiates it from the LIMS interface.
Two step automations can have the same name as long as they are unique in some other way. For example:
channel name is unique, or
command line is unique, or
run-program-per-event values are unique (available in the API only)
Attached files and associated master steps are ignored in these comparisons.
Two project automations cannot have the same name, regardless of the uniqueness of channel name and command line.
You cannot enable multiple automations with the same name on a master step, even if the automations are configured differently.
On the main menu, select Configuration.
On the configuration screen, select the Automation tab.
On the Step Automation tab, select New Automation.
In the Automation Details area, complete the required fields:
Type a name for the new automation.
In the Channel Name field, enter the channel to be used for this automation (for more information, refer to Automation Channels in the Clarity LIMS API documentation section).
In the Command Line field, enter the command line to be run when the automation is triggered. Copy/paste tokens from the Tokens list, as required. For details, refer to Step Automation Tokens in the Clarity LIMS API documentation section.
[Optional] Enable automation on steps:
In the Automation Use section, select inside the Enable on these Master Steps field and select the master step on which to enable the automation. (Note that this configuration is bidirectional—when configuring a master step, you can select automations to associate with that master step.)
If necessary, you can:
Repeat this process to enable the automation on multiple steps.
Select the X button to remove a step from the field.
Select Save.
The new step automation is now available to be configured on the selected master steps.
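For example, a step automation command line typically wraps a script invocation in a bash call and passes step context to the script through tokens. The following is a sketch only; the jar path and script are placeholders, and the tokens shown ({stepURI:v2:http}, {username}, {password}, {compoundOutputFileLuid0}) are the same tokens used in the Lab Logic Toolkit example later in this document.

bash -l -c "/opt/gls/clarity/bin/java -jar /opt/gls/clarity/extensions/customextensions/my-automation.jar \
-i {stepURI:v2:http} -u {username} -p {password} -log {compoundOutputFileLuid0}"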
The Automations configuration screen includes a Template Files section that allows for the upload of a template file to an automation. Reference the file in the automation command line and use it to generate a file that is attached to the step—typically a sample sheet file that can be used to start the instrument run.
A token for the template file is automatically added to the Tokens list. When included in the command line, the token is replaced with the absolute path of the template file at run time.
Downloadable sample sheet template files are available for several Illumina instrument integrations. For details on modifying the example template for the needs of your lab, refer to the Lab Instrument Tool Kit section of the Clarity LIMS Integrations and Tool Kits documentation.
In the Template Files section, select Upload File.
In the Upload File dialog, select Choose File, and then browse to and select the appropriate template file.
Select Upload. The file is attached to the automation and listed in the Template Files section. When upload is complete, a new dynamic token is added to the Tokens list.
In the Command Line field:
Include a script that generates the output file.
Provide the template file token as a script parameter. You can copy and paste the token directly from the Tokens list. At run time, the token is replaced with the absolute path of the file.
Select Save.
In the Step Automation list, an icon indicates that a file is attached.
If necessary, you can:
Repeat this process to attach additional files to the automation.
Select the X button to remove the file from the automation.
You can also attach template files to automations via the API, using the files endpoint. For details, refer to the Clarity LIMS API documentation.
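For example, assuming the upload generated a token named {templateFileLuid} (the actual token name appears in the Tokens list after upload) and a hypothetical sample sheet generation script, the command line might resemble the following sketch. At run time, the token resolves to the absolute path of the uploaded template file.

bash -l -c "python3 /opt/gls/clarity/customextensions/generate_sample_sheet.py \
--template {templateFileLuid} --step {stepURI:v2:http} -u {username} -p {password}"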
On the Derived Sample Automation tab, select New Automation.
In the Automation Details area, complete the required fields:
Type a name for the new automation.
In the Channel Name field, enter the channel to be used for this automation (for more information, refer to Automation Channels in the Clarity LIMS API documentation section).
In the Command Line field, enter the command line to be run when the automation is triggered. Copy/paste tokens from the Tokens list, as required.
For details, refer to Derived Sample Automation Tokens in the Clarity LIMS API documentation section.
Select Save.
The new derived sample automation is added to the Derived Sample Automations list, and is now available to be run on derived samples from the Projects dashboard.
The following examples show how derived sample automations can be used in the lab.
On the Project Automation tab, select New Automation.
In the Automation Details area, complete the required fields:
Type a name for the new automation.
In the Channel Name field, enter the channel to be used for this automation (for more information, refer to Automation Channels in the Clarity LIMS API documentation section).
In the Command Line field, enter the command line to be run when the automation is triggered. Copy/paste tokens from the Tokens list, as required. For details, refer to Project Automation Tokens in the Clarity LIMS API documentation section.
Select Save.
The new project automation is added to the Project Automations list, and is now available to be run on submitted samples from the Projects & Samples screen.
On the Automation Configuration screen, select one of the following tabs:
Step Automation
Project Automation
Derived Sample Automation
In the list of automations on the left, select the automation to edit.
Make your changes and select Save.
When editing step automations, keep the following in mind:
Changes you make to a step automation are reflected on all future steps on which that automation is enabled.
Steps that have already been run are not affected by changes you make to a step automation.
On the Automation Configuration screen, select one of the following tabs:
Step Automation
Project Automation
Derived Sample Automation
In the list of automations on the left, select the automation to delete.
Select Delete.
Information about deleted automations is saved in the Clarity LIMS database for historical purposes. However, there is no way to restore a deleted automation for use in Clarity LIMS.
Manage the permissions of the System Administrator, Facility Administrator, Researcher, and Collaborator user roles to restrict or allow the following actions:
Sign in to Clarity LIMS.
Sign in to the API.
View and interact with certain features of the interface.
Perform certain actions in the interface.
View the interface while being restricted from performing any actions (read-only access). [Clarity LIMS v6.1 and above]
Role-based permissions are controlled through the permissions-tool.jar tool, at /opt/gls/clarity/tools/permissions/.
For assistance with running the command-line permissions tool, contact the Illumina Support team.
Functionality includes the following commands:
#listroles—List all roles in the system.
#describerole—List names and descriptions of all permissions in the system.
#createrole—Create a role.
#showsummary—List permissions assigned to each role in the system.
#listpermissions—List permissions assigned to a specific role.
#assignpermission—Assign a permission to a role.
#removepermission—Remove a permission from a role.
NOTE: The permissions-tool.jar function names and property names are case-sensitive. If the incorrect case is used, the command or property is not recognized.
There can be a delay (up to 20 minutes) before changes to some API-related permissions take effect.
List all user roles in the system:
Show permissions for a specific role:
Create a role:
Show assigned permissions for all roles:
List names and descriptions of all permissions:
Assign a permission to a role (the example assigns permission to create controls):
[Clarity LIMS v6.1 and above] Assign a permission to a role (the example assigns read-only permission to a role):
Refer to #supported-permissions.
Remove a permission from a role (the example removes permission to create controls):
Refer to #supported-permissions.
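As an illustration only (the exact invocation syntax may differ, so check the tool's built-in help), a call passes the function name followed by the connection options listed later in this document:

cd /opt/gls/clarity/tools/permissions/
# <functionName> might be listroles or showsummary as described above; use the exact case reported by the tool.
java -jar permissions-tool.jar <functionName> -u <username> -p <password> -a http://<servername>/api/v2/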
The sections below list LIMS permissions and actions, and the user roles to which each permission/action is assigned by default.
By default, System Administrators and Facility Administrators have all permissions listed.
The default role with AdministerLabLink permission is Administrator. This permission is added to the existing System Administrator & Facility Administrator roles.
The Collaborator role is based on the existing Collaborator role in LabLink v1.0.
Note: The existing Researcher role does not have the new permission and behaves similarly to the LabLink Collaborator role.
Default roles with this permission: Administrator, Researcher
The Sample:update permission is automatically granted to roles that have the Sample:create permission at the time of migration to Clarity LIMS v5.x. If you have removed create permissions from any default role, the role does not acquire the update permission.
Default roles with these permissions: Administrator
Users with ClarityLogin permission can access the Consumables > Controls tab and view control sample details (read only).
Default roles with these permissions: Administrator
Users with ClarityLogin permission can access the Consumables > Reagents tab. They can also view, edit, and delete reagent lots, and add lots to existing kits. No additional ReagentKit permissions are required.
Default roles with these permissions: Administrator
APILogin permission is required for role management. All users with ClarityLogin permissions can view and edit their own user details (except for assigning/removing roles).
Default roles with this permission: Not applicable. You can assign this permission to any role.
At least one System Administrator must be available to reconfigure user roles. Therefore, we recommend that you do not assign the Read-Only permission to the default Administrator and API users.
Default roles with these permissions: Administrator
In the LIMS user interface, the term 'contact' has been replaced with 'client.' However, the API still uses the permission Contact.
All users with ClarityLogin permission can view and edit their own user details (except for assigning/removing roles).
Default roles with these permissions: Administrator
In the LIMS user interface, the term 'contact' has been replaced with 'client.' However, the API still uses the permission Contact.
Users with ClarityLogin permission can view and edit their own client and user details.
Clients can edit their own details (except for assigning/removing roles) without having update permission.
Default roles with these permissions: Administrator
In the LIMS user interface, the term 'process' has been replaced with 'master step.' However, the API still uses the permission Process.
Default roles with this permission: Administrator
Default roles with this permission: Administrator
Default roles with this permission: Administrator, Researcher, Collaborator
Default roles with this permission: Administrator, Researcher, Collaborator
Default roles with this permission: Administrator
Default roles with this permission: Administrator
Default roles with this permission: Administrator
Default roles with this permission: Administrator
Default roles with this permission: Administrator
Default roles with this permission: None
Modifications are limited to what is available on the Record Details screen for the step.
Details such as sample placement or routing cannot be modified.
Only steps completed after upgrading to LIMS v5.1 can be edited. Steps completed in v5.0 or earlier cannot be edited.
Steps that were executed using the Process API cannot be edited.
For details, see Modify Completed Step Details.
Follow these steps to configure the out-of-the-box QC solution to meet the needs of your lab:
Remove unnecessary QC steps. (See #removingunnecessaryqcprotocolsteps-removeaqcprotocolstep )
Configure master step fields for QC aggregation. (See #configure-master-step-fields-for-qc-aggregation)
Configure QC evaluation criteria. (See #configure-qc-evaluation-criteria)
Specify QC master step field values to copy up to aggregation. (See #copy-master-step-fields)
Clarity LIMS includes preconfigured RNA Initial QC, DNA Initial QC, and Library Validation QC protocols, each containing a sequence of steps. You can modify these protocols, and remove steps that your lab does not use.
On the main menu, select Configuration.
On the configuration screen, select the Lab Work tab.
The Workflow, Protocol, Step, and Master Step navigation panel displays.
In the Protocol list, select the protocol you want to modify.
To delete a step, select it and click the Delete button.
You cannot remove a step if it contains samples that are currently in progress, or if there are samples queued for it.
In this scenario, you must move the samples before proceeding with the deletion. For details, see #manually-moving-samples-to-the-next-step.
In the default BaseSpace Clarity LIMS configuration, the Aggregate QC step aggregates the results of all QC steps in the protocol—if they are available.
If a step has been removed from a QC protocol, it is ignored and aggregation still occurs for the remaining steps. No error is generated.
The following flowchart shows the logic behind the default configuration of QC aggregation.
Often, this default configuration is acceptable and there is no need to make any changes. However, you can configure alternate QC master step field values that overwrite these defaults. For example, you may want to:
Make a particular step required for aggregation.
Increase the priority of the results of a particular step – a step whose results are considered more accurate, for example.
Make one/both of the changes listed above, and then lock down values so that users cannot modify them.
The Aggregate QC master step includes a master step field for each QC step to be considered for aggregation.
The value for the QC master step field may be one of two values:
Use if available: If the QC step was run for a sample, its value is used in the aggregation calculation. If the QC step was not run, it is ignored.
Required: The step is required for aggregation. If it has not been run for a sample, the QC value cannot be calculated and aggregation does not proceed.
The default setting for all QC steps is Use if available.
The Required and Use if available master step field values can have an additional (Priority n) suffix.
The value for 'n' can be any integer from 1 through 99, where 1 is the highest priority and 99 is the lowest. Master step fields without a priority value defined are assigned a priority of 99.
The default priority value for all QC steps is (Priority 5).
If sample measurements exist for multiple QC steps, all of which have been assigned the same level of priority (as is the case in the default system) the QC flags are evaluated such that any failure results in an overall fail flag for QC aggregation.
Sometimes, however, a particular QC technology may be considered more accurate and you may want its results to take precedence.
For example, a lab may run Bioanalyzer and NanoDrop QC steps on all samples. If one of these steps results in a QC fail, a PicoGreen assay is run on the failed sample and its result is then considered more accurate. In this case, set Bioanalyzer and NanoDrop at priority 5 and PicoGreen at priority 3.
All QC aggregations are logged in an Excel workbook log file and attached to the Aggregate QC step.
Each subsequent invocation of the aggregation script results in a new sheet being created in the workbook, in the first sheet position. This new sheet automatically becomes the active sheet.
The best practice method for configuring alternate master step field values for QC aggregation is to create a group of defaults. The fields defined in the group overwrite the default configuration values.
For details, see #add-and-configure-custom-fields.
The aggregate QC script treats any custom field that does not begin with 'Copy' as a priority master step field. Therefore, do not add fields that are not priority fields. Any new priority fields must contain the default values Required and Use if available.
If you do not want lab scientists to be able to manually edit a master step field value, lock it down by setting the Custom Entries toggle switch to No. At run time, the lab scientist must select a value from the predefined drop-down list.
In the default Clarity LIMS system, each QC master step has two sets of QC evaluation criteria defined. These are displayed in the Record Details screen when the QC step is run.
Each set of criteria comprises three master step fields: Source Data Field, Operator, and Threshold Value.
When the QC protocol step is run, the lab scientist selects from the prepopulated drop-down lists of Source Data Field and Operator values, and then types a numeric value into the Threshold Value field.
You may want to restrict the values available for selection from the Source Data Field and Operator lists, or set default criteria values that cannot be changed.
The best practice method for configuring the criteria used to evaluate QC protocol steps is to create a group of defaults. The master step fields defined in the group overwrite the default configuration values.
For details, see #configure-groups-of-defaults.
In the default Clarity LIMS system, when QC aggregation is executed, the Record Details screen allows lab scientists to (optionally) select one or two master step field values to be copied up from a QC protocol step.
Generally, this is a concentration value.
If necessary, you can specify the master step field values to be copied.
The best practice method of setting master step fields to be copied up to aggregation is to create a group of defaults. For details, see #configure-groups-of-defaults.
Setting QC master step fields to copy up to aggregation is an optional step. If you do not require this feature, leave the default system as is. Note also that if the Copy task fields are left empty, no values are copied up and no errors are generated. Similarly, if you specify a QC step that has not been run or a field that contains no value, no errors are generated.
Basic Search is used to search the entire system or search within the following categories:
Samples
Projects
Containers
Protocol steps
Type a complete keyword or search term, or part of the term followed by an asterisk. Search terms are case-sensitive.
The results include all samples, containers, projects, and protocol steps that contain the search term.
Select Search.
In the drop-down Search category list, select All. Type a search term into the adjacent field and press the Enter key.
At the left of the Search results window, the Category Results panel summarizes the search results and groups them by category.
In the Search results list, the following actions are available:
Hover over the information icon to view more information.
Select links to drill down further into the data.
When searching for a particular sample, you may want to:
View a list of all steps that have been performed on a specific sample.
See which in-progress steps involve a specific sample.
Determine which steps used a particular derived sample.
Return a sample to the queue for a particular step, so that the step can be repeated or requeued (for details, see Requeue and Rework Samples).
When searching for samples, type the keyword or term to search for into the Search box.
Search on the following information, as it relates to a sample:

Search Keyword | Applies To | Notes |
---|---|---|
Sample name | Derived and submitted samples | |
Derived sample custom fields | Derived samples | |
Submitted sample custom fields | Derived and submitted samples | |
Container information, container custom fields | Derived and submitted samples | Searchable information includes container name, container type, and well information. |
Project information, project custom fields | Derived and submitted samples | Searchable information includes project name, project owner, and account name. |
Workflow name | Derived and submitted samples | Search on a workflow name to find the samples included in that workflow. |
Select Search.
In the drop-down Search category list, select Sample. Type a search term into the adjacent field and press the Enter key.
You can type the complete keyword or part of the keyword followed by an asterisk.
For example, the search can be performed on the complete sample name (eg, heart-123). However, a keyword followed by an asterisk (eg, heart*) could also be used. The results return all samples whose names include the keyword.
From the sample Search results list, view more information about the sample, including all steps in which it is involved and/or queued.
These options differ depending on whether the sample is a submitted sample or a derived sample.
In QC protocols, you cannot go directly to the queue for a specific step, because all steps share a queue.
Repeated steps display in the sample details search results.
For example, in the previous illustration, suppose that the sample was requeued for the Qubit QC step and the step was repeated. The next time a sample is searched, the step is listed twice.
Searching within the Project category allows you to drill down and quickly find the following project details:
The account and client associated with a specific project.
The number of samples included in a project.
The current project status. For example, at what stage in the workflow samples are currently located and what has been completed.
Select Search.
In the drop-down Search category list, select Project. In the adjacent field, type the project name and then press the Enter key.
Type the complete project name or part of it followed by an asterisk.
From the Search results list, view more information about the project.
For example, the following search results show that the Liver Study project was started on January 27, 2017 and contains 98 samples. The account and client associated with the project are also shown.
Expanding the sample details shows a list of all the steps in which the project samples are actively involved.
In QC protocols, users cannot go directly to the queue for a specific step, because all steps share a queue.
Search within the Container category to search for a specific container by name (Clarity LIMS ID) or quickly find all containers of a certain type in the lab. From the search results returned, the following information can be determined:
Which samples are in the container(s).
Which projects are associated with the sample(s) in the container(s).
What was the last step performed on the sample(s) and who performed it.
Select Search.
In the drop-down Search category list, select Container. In the adjacent field, type the container name and then press the Enter key.
Type the complete container name/Clarity LIMS ID or part of the name followed by an asterisk.
From the search results list, you can view more information about the container and the sample(s) it contains.
For example, the following search results show numerous details:
Clarity LIMS ID 27-301 belongs to a 96 well plate.
The plate contains six samples from the Liver Study project.
These samples are currently in the queue for the Cluster Generation step.
Restrict a search to protocol steps to determine how many steps of a particular type are currently in progress or completed.
Select Search.
In the drop-down Search category list, select Protocol Step. In the adjacent text box, type the step name and press the Enter key.
You can type the complete step name or part of it followed by an asterisk.
The search results list provides at-a-glance information, such as the status of each step, the date the step was started, the user who ran the step, and the number of samples contained within the step.
Various options for drilling down further into the data are provided.
Select Search.
In the drop-down Search category list, select Protocol Step. In the adjacent field, type all or part of the user name to search for and press the Enter key.
The Search results returned will list all of the in-progress and completed steps associated with the user.
Automations (formerly referred to as EPP triggers or automation actions) allow lab scientists to invoke scripts as part of their workflow. These scripts must successfully complete for the lab scientist to proceed to the next step of the workflow.
EPP automation/support is compatible with API v2 r21 and above.
The API documentation includes the terms External Program Integration Plug-in (EPP) and EPP node.
As of BaseSpace Clarity LIMS v5.0, these terms are deprecated. The term EPP has been replaced with automation. EPP node is referred to as the Automation Worker or Automation Worker node. These components are used to trigger and run scripts, typically after lab activities are recorded in the LIMS.
Automations have various uses, including the following:
Workflow enforcement—Makes sure that samples only enter valid protocol steps.
Business logic enforcement—Validates that samples are approved by accounting before work is done on them. This automation can also make sure that selected samples are worked on together.
Automatic file generation—Automates the creation of driver files, sample sheets, or other files specific to your protocol and instrumentation.
Notification—Notifies external systems of lab progress. For example, you can notify Accounting of completed projects so that they can then bill for services rendered.
You can enable automations on master steps in two configuration areas of Clarity LIMS:
On the Automations tab, when adding or configuring an automation. For more information, see #add-and-configure-automations.
On the Lab Work tab, on the master step configuration form. For more information, see #add-and-configure-master-steps-and-steps.
After it is enabled on a master step, the automation becomes available for use on all steps derived from that master step.
You can configure the automation trigger on the master step, or on the steps derived from that master step.
The demultiplexing API endpoint is an extension of the existing artifact endpoint. This endpoint demultiplexes artifacts recursively to all individual derived samples that they represent.
For more information, see the API and Database documentation.
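As an illustration only (assuming the endpoint is exposed as a demux sub-resource of the artifact, as described in the API and Database documentation), the demultiplexing information for a pooled artifact can be retrieved with a GET request:

# <username>, <password>, <servername>, and <artifactLimsId> are placeholders.
curl -u "<username>:<password>" "http://<servername>/api/v2/artifacts/<artifactLimsId>/demux"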
In the past, some may have experienced performance and usability issues when demultiplexing large data sets. Clarity LIMS now includes the demultiplexing API endpoint, resulting in performance enhancements that speed up demultiplexing and allow quicker interaction with the data.
While acknowledging that usability is subjective, the Clarity LIMS product and development teams have established usability ratings based on criteria that measure how long lab scientists must wait before they can interact with a feature on the screen. These criteria also allow for the comparison of performance and usability across the various screens of Clarity LIMS.
In the following table:
Successful user interaction means that a feature can begin to be interacted with (ie, it can be selected, scrolled, moved, and so on).
Numbers are provided for guidance only, and differ depending on the RAM and CPU speed of the computer used to view the page.
Usability Rating | Criteria |
---|---|
Good | A successful user interaction (data load) in ~ 2 seconds |
Reasonable | A successful user interaction (data load) in ~ 6 seconds |
Acceptable | A successful user interaction (data load) in ~ 9 seconds |
Degraded | A successful user interaction (data load) in ~ 20 seconds |
Unusable | A subjective limit to usability |

The following table shows how the usability rating changes as the number of samples in the pool undergoing demultiplexing increases.

Usability Rating | # Samples |
---|---|
Good | 2200 - 2400 |
Reasonable | 3400 - 3600 |
Acceptable | 4600 - 5800 |
Degraded | 7200 - 7400 |
The Queue screen lists the samples that are queued for a step, and provides a table from which samples are selected for placement into the Ice Bucket.
By default, samples listed in the Sample table are grouped by container. Groups are collapsed by default and can be expanded as required by selecting the arrows.
Lab scientists can also choose to group samples by project, submitted sample, or previous step.
In the past, some performance and usability issues were encountered when viewing large data sets in the Queue screen. Clarity LIMS now includes performance enhancements that speed up the Sample table loading time, allowing users to more quickly interact with the data.
Clarity LIMS development teams measured performance for various numbers of samples queued for a step, using the Time to Interactive (TTI) metric. This metric defines the time it takes for a page to become fully interactive and for functionality to start working (eg, selecting, scrolling, and so on). The metric numbers vary based on the following factors:
Server specification.
Amount of data stored on the server database.
Client hardware specifications and the browser type used to access Clarity LIMS.
Network conditions between the server and client.
The following table shows the server and client specifications used for the performance test.
The different client types are used to demonstrate the different setups.
Hardware | Specification | Additional Notes |
---|---|---|
Server | 3.1 GHz Intel Xeon Platinum processor (8-core), 32 GB RAM, Oracle Linux v8.8, PostgreSQL 15.2 database | The server database is loaded with the following information: 200,000 submitted sample records, 2,000,000 derived sample records, and 500 projects. |
Client A | 3.1 GHz Intel Xeon Platinum processor (8-core), 32 GB RAM | Access Clarity LIMS within the same network in the lab. |
Client B | 2.3 GHz Intel Core i9 (8-core), 32 GB RAM | Access Clarity LIMS on the cloud. The VPN access and different network region setup results in high network latency and demonstrates the worst case for performance. |
The following tables show the results of two performance tests conducted on a Clarity LIMS system on which performance enhancements had been implemented. In both tests, samples were grouped by container.
The tables show how the usability rating changes as the number of samples in the queue increases.
Test 1 Performance Results

Number of Samples | Response Time (seconds) - Client Type A | Response Time (seconds) - Client Type B |
---|---|---|
1000 | 2.0 | 4.5 |
3000 | 2.5 | 5.0 |
7000 | 3.5 | 6.5 |
10,000 | 5.0 | 7.5 |
15,000 | 7.0 | 10.5 |
20,000 | 9.0 | 12.5 |

Test 2 Performance Results

Number of Samples | Response Time (seconds) - Client Type A | Response Time (seconds) - Client Type B |
---|---|---|
1000 | 2.0 | 4.5 |
3000 | 3.0 | 5.5 |
7000 | 5.0 | 7.5 |
10,000 | 7.0 | 9.5 |
15,000 | 9.5 | 12.5 |
20,000 | 12.5 | 15.0 |
The following table defines the terms used in Clarity LIMS and related documentation.
The Configuration tab is only available to LabLink administrators. This tab consists of the following sections:
If there are any configuration errors, a red dot appears on the Configuration tab and the section that requires attention.
In LabLink, there are two types of resource materials:
Sample submission templates
Supplementary documents
The administrator can manage these resource materials to assist end users of the lab during the sample submission process. LabLink allows the admin to upload up to 20 resource materials.
To add new templates or documents, a LabLink administrator can use the following steps:
Select Create.
Provide a name and description for the document.
Select the resource type.
Upload the resource file.
Selecting the resource type adds the document to the appropriate location on the Resource Material tab.
The Custom Fields section of the Configuration tab displays all custom fields that have been configured in Clarity LIMS for projects and submitted samples.
A LabLink administrator can choose to display custom fields by selecting the checkbox next to the custom field.
The project custom fields display during the sample submission process.
The sample custom fields display on the Samples tab.
The client custom fields display during the create new user and request a new user ID processes.
The Disclaimer section allows the LabLink administrator to manage the disclaimers that display for the following processes:
Requesting a new user ID
Creating a project for sample submission
Select Edit to modify the disclaimer text. The character limit is 4096 for each disclaimer.
The Contact Us section allows the LabLink administrator to manage the contact us information that displays for the end users of the lab. Select Edit to modify the contact us information.
Allows: | Result of denied permission |
---|---|
Allows: | Result of denied permission |
---|---|
Action: | Allows: | Result of denied permission |
---|---|---|
Action: | Allows: | Result of denied permission |
---|---|---|
Action: | Allows: | Result of denied permission |
---|---|---|
Action: | Allows: | Result of denied permission |
---|---|---|
Action: | Allows: | Result of denied permission |
---|---|---|
Action: | Allows: |
---|---|
Action: | Allows: | Result of denied permission |
---|---|---|
Action: | Allows: | Result of denied permission |
---|---|---|
Action: | Allows: | Result of denied permission |
---|---|---|
Action: | Allows: | Result of denied permission |
---|---|---|
Action: | Allows: | Result of denied permission |
---|---|---|
Allows: | Result of denied permission |
---|---|
Allows: | Result of denied permission |
---|---|
Allows: | Result of denied permission |
---|---|
Allows: | Result of denied permission |
---|---|
Allows: | Result - permission granted |
---|---|
Allows: | Result - permission granted |
---|---|
Allows: | Result of denied permission |
---|---|
Allows: | Result - permission granted |
---|---|
Option | Description |
---|---|
-a, --apiUri | REST API base URI (ends with "/api/<version>/"). Must be completed as: http://<servername>/api/v2/ |
-p, --password | LIMS password (required) |
-u, --username | LIMS sign-in username (required) |
Action | Permission Required | System Administrator and Facility Administrator | Collaborator |
---|---|---|---|
Sign in to LabLink | CollaborationsLogin action | Yes | Yes |
Manage Project | Projects create, read, update. | Yes | Yes |
Manage Sample | Samples create, read, update. | Yes | Yes |
Manage User | Users create, read, update. | Yes | No |
Manage Configuration | Configuration update | Yes | No |
View the Configuration page | AdministerLabLink | Yes | No |
View the User Management page | AdministerLabLink | Yes | No |
Sign in to Clarity LIMS
Access Lab View and Projects and Samples screen
Access Consumables > Reagents configuration tab; view, edit, and delete reagent lots; add lots to existing kits.
Access Consumables > Controls configuration tab and view control details
Access Consumables > Instruments configuration tab; add, edit, delete, and activate instruments; view instrument types.
Sign In screen
Sorry, you do not have permission to sign in to Clarity LIMS.
Access LIMS Rest API
Sign In screen
403 Forbidden error via http://host/api/*
create
Create project
Modify project details
Modify project custom fields
Projects and Samples
New Project button hidden
View project details (read-only)
Note: No permission is needed to upload files to a project
Update
Modify project details
Projects and Samples
Save button disabled (if delete is permitted)
Button menu hidden (if delete is not permitted)
View project details (read-only)
Delete
Delete project containing no samples.
Delete project containing samples (also requires Sample:delete permission)
Projects and Samples
Delete button disabled (if update is permitted)
Button menu hidden (if update is not permitted)
create
Submit/add samples
Upload sample list
Download sample list example
Modify samples.
Projects and Samples
Submit Samples title hidden
Download Example Sample List link hidden
Upload Sample List button hidden
Add Samples button hidden
Modify Samples button renamed Download List
Modify Samples button hidden (sample list)
Sample Management
Sample + button hidden
Update
Modify samples.
Projects and Samples
Modify Samples button renamed Download List
Delete
Delete a submitted sample on Projects and Samples screen, provided no work has been performed on the sample.
Delete a submitted sample in API, provided no work has been performed on the sample.
Projects and Samples
Delete button hidden
403 Forbidden error via http://host/api/sample
create
Create control samples.
Controls
New Control button hidden
New Control button hidden
Update
Modify control samples.
Archive control samples (requires both update and delete permissions)
Controls
Save button disabled (if delete is permitted)
Button menu hidden (if delete is not permitted)
View control sample details (read-only)
Delete
Delete control samples.
Archive control samples (requires both update and delete permissions)
Controls
Delete button disabled (if update is permitted)
Button menu hidden (if delete is not permitted)
Archived toggle disabled
create
Create reagent kits
Reagents
New Reagent Kit button hidden
View reagent kit details (read-only)
Update
Modify reagent kits
Archive reagent kits (requires both update and delete permissions)
Reagents
Save button disabled (if delete is permitted)
Button menu hidden (if delete is not permitted)
View kit details (read-only - except for Status)
Delete
Delete reagent kits
Archive reagent kits (requires both update and delete permissions)
Reagents
Delete button disabled (if update is permitted)
Button menu hidden (if delete is not permitted)
Archived toggle disabled
read
View client (researcher/contact) details, including details such as username and roles in API
View users and clients (contacts) on Users and Clients screen
403 Forbidden error via http://host/api/roles
create
Create user roles.
403 Forbidden error via http://host/api/roles
Update
Modify existing user roles.
Add/remove user role permissions
403 Forbidden error via http://host/api/roles
Delete
Delete user roles.
403 Forbidden error via http://host/api/roles
read
View project and sample details on the Projects & Samples screen
View lab activities, in-progress steps, and steps that are ready to be worked on in Lab View
read
View users and clients on Users and Clients screen
View client details, including details such as username and roles in API
403 Forbidden error via http://host/api/researchers
create
Create users and clients on Users and Clients screen (User:update permission is required to assign permissions to the user)
Send login instructions and password reset emails on Users and Clients screen (either this action or User:update is required)
Create clients in API.
Create user credentials and assign roles in API.
Users and Clients
New User button hidden
View user details (read-only)
403 Forbidden error via http://host/api/researchers
Update
Update users and clients on Users and Clients screen
Send sign in instructions and password reset emails on Users and Clients screen (either this action or User:create is required)
Modify client details in API.
Assign role to user in API.
Remove role from user in API.
Save button disabled (if delete is permitted)
Button menu hidden (if delete is not permitted)
View user/client details (read-only)
403 Forbidden error via http://host/api/researchers
Delete
Delete users and clients on Users and Clients screen.
Delete a client and associated user in API.
Delete button disabled (if update is permitted)
Button menu hidden (if delete is not permitted)
403 Forbidden error via http://host/api/researchers
read
View clients on Users and Clients screen
View client details in API
403 Forbidden error via http://host/api/researchers
create
Create clients on Users and Clients screen.
Create clients in API.
Contact:update permission is required to assign permissions to clients.
New User button hidden
View user details (read-only)
403 Forbidden error via http://host/api/researchers
Update
Update client details on Users and Clients screen.
Update client details in API.
Assign role to/remove role from client.
403 Forbidden error via http://host/api/researchers
This permission does not affect the display of clients in Project and Samples and Sample Accessioning screens.
Delete
Delete clients in API
Delete clients on Users and Clients screen.
Clients with associated user details cannot be deleted
Delete button disabled (if update is permitted)
Button menu hidden (if delete is not permitted)
403 Forbidden error via http://host/api/researchers
read
View master steps
403 Forbidden error via http://host/api/roles
create
Create master steps.
403 Forbidden error via http://host/api/roles
Update
Modify master steps.
403 Forbidden error via http://host/api/roles
read
Access the Overview Dashboard
No Dashboards button
update
Manage all configuration in the LIMS interface (ClarityLogin permission is also required)
Manage configuration in API (APILogin permission is also required)
403 Forbidden error via any URI that begins with http://host/api/configuration.
Requeue a sample in sample search.
Requeue a sample in container search.
Sample and Container Search
Requeue button hidden.
Assign sample to workflow from Projects and Samples screen.
Sample Management
Sample cannot be dragged into workflow widgets.
Workflow selection widget hidden
Workflow lozenge Remove button hidden
Delete workflow button hidden.
Remove sample from queue.
Remove sample from workflow.
Sample Management
Remove from this queue option hidden (if Move to next step is permitted)
Options button hidden (if Move to next step is not permitted)
Move sample to next step in workflow
Sample Management
Move to the next step option hidden (if Remove from this queue is permitted)
Options button hidden (if Remove from this queue is not permitted)
Rework a sample from a previous step.
Sample Management
In Select the next step of the sample drop-down list, Rework from an earlier step option displays.
On Protocol Step Results screen, a button displays to allow the sample to be reworked from an earlier step.
Review escalated samples.
Sample Escalation
Enter Review Comment box enabled.
Sign an eSignature on step completion.
Record Details
Error message in e-Signature popup
Edit button when viewing a completed step.
Select button to edit completed step details on Record Details screen.
Assign Next Steps.
Edit button displays.
Record Details
After clicking Edit button, Record Details fields are editable, as applicable/permitted.
The lab carrying out the processing/assaying of the samples can publish the following, and make them available to collaborators:
Files that are attached to steps and derived samples.
Examples include the following use cases:
Publishing an image of an electropherogram for a sample that failed Bioanalyzer QC.
Publishing small files such as AB1 files that result from Capillary Electrophoresis sequencing.
NOTE: It is not recommended to publish large NGS or microarray derived data (such as FASTQ or VCF files) through LabLink. Although technically possible, BaseSpace Sequence Hub works much better for these large files.
Custom fields that are defined and populated on samples in Clarity LIMS. For example, a field that tracks the progress of the samples.
The only way to publish a file is by using the Clarity LIMS API. To publish documents to LabLink, run a custom script in Clarity LIMS at the end of a step.
For example, at the end of a Bioanalyzer QC step, if individual electropherogram images are available, they may be published and made available immediately in LabLink. See the API & Database documentation for an example script that publishes files to LabLink.
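A minimal sketch of such a script is shown below, assuming the file resource exposes an is-published flag (verify the element name and a complete example against the API & Database documentation). The sketch retrieves the file record, sets the flag, and PUTs the record back:

# Hedged sketch: <username>, <password>, <servername>, and <fileLuid> are placeholders.
# 1. Retrieve the file resource for the attached file.
curl -s -u "<username>:<password>" "http://<servername>/api/v2/files/<fileLuid>" -o file.xml
# 2. Set the publish flag (the is-published element name is an assumption here).
sed -i 's#<is-published>false</is-published>#<is-published>true</is-published>#' file.xml
# 3. PUT the modified record back so the file becomes visible in LabLink.
curl -s -u "<username>:<password>" -X PUT -H "Content-Type: application/xml" \
  --data @file.xml "http://<servername>/api/v2/files/<fileLuid>"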
Files published to LabLink are available under the Results and Documents tab of a LabLink project, in the Sample Results and Documents section.
The Sample Results and Documents section of the page has three elements for each row:
Document Name—The name of the file that has been stored in Clarity LIMS and published.
Sample Name—The name of the sample associated with the published file.
Download Link—Select this link to download a copy of the file from LabLink.
A file published at the step level becomes associated with every sample in the step. For example, if all samples in a project participate in a step, publishing a file associated with that step causes the file to also be associated with all samples in that step.
Publishing a step-level file can therefore have undesired consequences. Imagine a single step that was run on samples from two separate LabLink projects. If that step includes a step-level file that is published, it is available to both LabLink projects.
Depending on the contents of the file, publishing might lead to one collaborator seeing sample data that relates to another collaborator.
NOTE: Only publish step-level files if they are associated with samples in a single LabLink project.
The lab can publish custom fields that are defined and populated on samples in Clarity LIMS. Collaborators in LabLink can then see these custom fields.
This functionality depends on how Clarity LIMS is configured. For example, assume that a sample-level custom field called Progress should be visible to collaborators in LabLink.
Define the Progress custom field in Clarity LIMS, including field type, field options, and additional options.
Make the Progress custom field visible in LabLink.
Navigate to the Custom Fields screen from the Configuration menu.
Select the Progress checkbox for the field to be visible in LabLink.
When samples are first submitted to LabLink, the Progress field is empty.
When the lab receives the samples, they may set the Progress field to have the Awaiting QC value for all samples in the project.
When the samples have completed the QC steps, the lab may update the Progress field with different values depending on the success or failure of QC.
As the samples move through the workflow, lab technicians update the Progress field as required. These updates provide the collaborator with insight into what is happening in the lab, on a per-sample basis. You can configure a script that automatically updates the Progress field, eliminating the need for manual updates by lab technicians. For example, configure the following automation command line that uses Clarity LIMS Lab Logic Toolkit to update the Progress field.
bash -l -c "/opt/gls/clarity/bin/java -jar /opt/gls/clarity/extensions/ngs-common/v5/EPP/ngs-
extensions.jar -i {stepURI:v2:http} -u {username} -p {password} script:evaluateDynamicExpression - t
true -h false -exp '\
submittedSample.::Progress:: = ::Awaiting QC::; \
' -log {compoundOutputFileLuid0}"
If the value of the Progress field is long, it may be truncated in LabLink. However, the full value can be obtained by hovering the mouse over the truncated value.
NOTE: Widening the browser window may show more of the field, but truncation may still occur.
Account | Not fully supported in Clarity LIMS v5.x and later. Support is planned for a future release. A workaround is to create an account quickly from the Projects and Samples screen. An organization with which the facility conducts business. Includes account address, client names, and other information. Associate clients with their applicable account (see Client). NOTE: In Clarity LIMS v2.1, the term labs was replaced with accounts. However, the API resource is still called lab. |
Aggregation | See QC Aggregation. |
Analyte | See Derived Sample. As of Clarity LIMS v5.0, this term is deprecated. However, the API resource is still called analyte. |
Artifact | A generic term for an item at the beginning of a step generated by an earlier step. A derived sample or measurement is a type of artifact. NOTE: In the Clarity LIMS user interface, the term artifact has been replaced with derived sample or measurement. However, the API resource is still called artifact. |
Automation | Used to trigger scripts from the Clarity LIMS user interface. An automation can be configured for steps and for derived samples. |
Automation worker | A software component that runs automation. May be installed on the same server as Clarity LIMS or on several other machines that all draw from the queues of one Clarity LIMS instance. |
Batch processing | Operations performed on more than one object at a time. For example, adding multiple samples to the system, rather than adding them individually. |
Clarity LIMS | The main web client used by lab scientists, lab managers, and system administrators to manage and record work as samples are processed in the lab. |
Client | An individual within the laboratory, or an external individual who works with the laboratory, who is directly associated with a project in Clarity LIMS. NOTE: In the Clarity LIMS user interface, the term contact has been replaced with client. However, the API permission is still called contact. |
Cloud hosted deployment | Clarity LIMS deployed with Illumina automation scripts to the Illumina Amazon AWS environment. |
Collaborator | Clarity LIMS user role assigned to external customers or other individuals who submit samples to Clarity LIMS. |
Contact | See Client. |
Custom field (formerly UDF) | A field that Clarity LIMS administrators can add to the interface to collect information for a sample (or group of samples), master step, step, client, account, or container. For example, the administrator may wish to add a field to record whether a sample is toxic or safe to handle. |
Derived sample | A sample that was generated (output) by a step. All derived samples trace back to submitted samples. By default, a step generates one derived sample. Configure some step types to output multiple derived samples (also referred to as derivatives) |
Derivative | One of multiple samples generated by a step. See Derived Sample. |
External Program Integration Plug-in (EPP) | See Automation. As of Clarity LIMS v5.0, this term is deprecated. However, some of the API documentation may still refer to EPP. |
EPP node | See Automation worker. As of Clarity LIMS v5.0, this term is deprecated. However, some of the API documentation may still refer to EPP node. |
Export | To copy a file and place it on the hard disk drive or in another piece of software. |
File placeholder (formerly shared ResultFile) | A placeholder configured on a step that is replaced by a file at run time. The file may be automatically generated or manually uploaded. |
Group of defaults | A collection of prepopulated master step fields that define values for step data at run time. This eliminates the need to manually enter and make sure that information is recorded correctly every time a step is run. Groups of defaults are configured on the Custom Fields > Master Step Fields tab. |
Import | To bring a file into Clarity LIMS and attach it to a placeholder or step. |
Index | See Reagent Label and Label Group. |
Input | An item that is consumed, processed, or analyzed by a step. |
Label (also referred to as Reagent Label) | Also referred to as reagent type, index, or molecular barcode. Add a label group for each reagent category used in the lab, and then add labels to the groups. Each label represents a reagent type within the group/category. See also Label Group. |
Label group | A reagent category. Add a label group for each reagent category used in the lab then add labels to the groups. Each label represents a reagent type within the group or category. See also Label. |
LIMS ID | A unique identifier assigned to all assets (samples, projects, containers, steps, and so on) in Clarity LIMS. |
Master step (formerly process type) | A technique or procedure performed on a sample. To be considered a master step in Clarity LIMS, the technique/procedure must be created and configured as such, and must have an input (not all steps have an output). Master steps are created and configured on the Lab Work configuration screen. These master steps act as templates from which individual steps are created and configured. |
Measurement (formerly ResultFile) | Data that is generated during a step for each sample input to the step. Measurements can either be data written to measurement fields and/or files attached to the step inputs. |
On premise deployment | Clarity LIMS deployed to your on-premise server or another non-Illumina cloud hosted environment, such as your own Amazon AWS environment. |
Parse | The process of analyzing an input file and displaying it in Clarity LIMS. |
Plug-in | A software component or module that adds functionality to a software program. |
Preset | See Group of Defaults. As of Clarity LIMS v5.0 this term is deprecated. |
Process | See step. As of Clarity LIMS v5.0, this term is deprecated. However, the API resource is still called process. |
Process type | See Master step. As of Clarity LIMS v5.0, this term is deprecated. However, the API resource is still called process. |
Project | Clarity LIMS uses projects as the basis for all work performed in the system. A project stores the information about the user who created it, the account and client with which they are associated, samples submitted to the project, significant dates, associated files, and so on. When adding samples to Clarity LIMS, they must add them to a project. |
Protocol | In Clarity LIMS, a protocol is a set of steps that must be performed in a specific sequence, as part of a lab workflow. |
QC aggregation | QC aggregation refers to a configured step that assembles sample QC measurements, evaluates them based on priority, determines overall QC results, and then assigns QC flags. |
Queue | Queues allow the grouping of a collection of samples that are all waiting to be processed at a specific stage (protocol) in the lab workflow. |
Reagent label | See Label. |
Replicate(s) | See Derivative. As of Clarity LIMS v5.0, this term is deprecated. |
Resultfile | See Measurements and File Placeholders. As of Clarity LIMS v5.0, this term is deprecated. However, some of the API documentation may still refer to ResultFile outputs. |
Researcher | A role that can be assigned to a user in Clarity LIMS. Typically, the researcher role is assigned to laboratory scientists who use Clarity LIMS to manage and record their work as samples are processed in the lab. |
Step | A lab procedure that has been configured and included as part of a protocol. All steps have a master step as their foundation. |
Step type |
Submitted sample | The original sample that is added to a project in Clarity LIMS. |
Timestamp | The dates and times associated with a file. |
User | An individual within the laboratory or an external individual who works with the laboratory and has access to the Clarity LIMS system. Because each step performed in Clarity LIMS is associated with a user, you can use this feature to track work through the lab. |
User-defined field (UDF) | See Custom Fields. As of Clarity LIMS v5.0, this term is deprecated. However, the API resource is still called UDF and some API documentation may still refer to UDFs. |
Workflow | A set of protocols arranged in a sequence that corresponds to how work is performed in the lab. |
In Clarity LIMS, steps and master steps are categorized based on the requirements and goals of the step, its inputs, and its outputs. For details, see .