The Clarity LIMS Rapid Scripting™ API provides scientific programmers with self-descriptive, yet flexible, data access. It uses a RESTful model for data access because this model is well suited to these requirements. This article provides a high-level introduction to REST concepts and technologies.
Representational State Transfer (REST) is a style of software architecture for distributed information retrieval systems; the web is the most familiar example.
REST governs proper behavior. It is not a methodology or a design principle, but rather a set of rules to which a system should conform.
REST allows a uniform interface between clients and servers that is simple and decoupled, enabling each system to evolve independently.
REST is referred to as stateless because each new API request contains all the information required to complete it, without relying on previous requests. Conforming to these REST principles is referred to as being RESTful.
REST was developed in parallel with HTTP and makes use of this protocol. It is an elegant way to programmatically access resources over HTTP. It is very flexible because you can use it with any language or tool that supports HTTP.
The web is probably the largest known RESTful system. Its behavior is very simple:
When you click a link in a web browser, your system requests information by sending a GET request to the specified URL. This URL is a resource.
The server that hosts the URL responds, typically with one of two things:
If the page exists, the server sends the browser an HTTP 200 response code and the contents of the page.
If the page does not exist, the server sends an HTTP 404 response code and an error message indicating that the page cannot be found.
Many software development groups use RESTful APIs. Google, Yahoo, and many public web sites use the RESTful model for information access.
The REST API allows you to retrieve and update information using HTTP operations. This ability provides some flexibility in how to communicate with the system.
While REST requests and responses can be in a variety of formats, we chose XML. Each resource and XML element is detailed in the API Portal.
To use the REST API, sign in using HTTP BASIC authentication. The method used to authenticate will depend on how you use the API:
When using a browser to retrieve information from the API, sign in to the browser with a user name and password. When signing in using a browser, the session remains open until the browser is closed.
When using an HTTP request tool to retrieve, add, update, or remove data using the API, the tool asks for a user name and password each time you submit a request to the system.
When using a script to communicate with the API, the script must first authenticate with the API. The session remains open for as long as the script is actively communicating with the system.
The account you use to sign in to the API must have System Administrator or Facility Administrator privileges.
The API allows self-discovery of an object. When you request information about an object, the system typically returns URIs to its children and, sometimes, its parent. Use one URI to find the next URI in a hierarchy.
When viewing XML in a browser, tools can automatically create links from the URIs returned by the system. Examples of such tools are the Firefox Text Link or Linkificator add-ons. This way, you can select URIs to browse through the API.
Requests are made to the API by sending XML in HTTP calls:
GET is used to read an item.
POST is used to create an item.
DELETE is used to delete an item.
PUT is used to update an item.
In its simplest form, use a browser to enter and read the content of a URI, which allows browsing through the system. When using this method, a GET request is issued to the API for a specified object (referred to as a resource). The request returns XML containing the metadata about that resource. See the following section for details.
If you want to add, update, and delete small amounts of data using the API, use an HTTP request tool, such as the Firefox RESTClient add-on.
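As a minimal illustration of such a request in script form (a sketch, assuming the Python requests library; the server name and credentials are placeholders):

import requests

# Placeholder server and credentials -- substitute your own.
BASE = "https://example.claritylims.com/api/v2"
AUTH = ("apiuser", "password")

# Issue an authenticated GET for a list resource; the response body is XML.
response = requests.get(f"{BASE}/samples", auth=AUTH)
response.raise_for_status()  # raises an exception on 4xx/5xx status codes
print(response.text)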
When working with REST, there are references to resources and namespaces.
For references:
A RESTful API groups related information into resources, each of which is referenced with a global identifier (URI).
In the API, for example, every sample in the LIMS has a resource for its information. When scripting, the resource is created or updated with POST and PUT HTTP calls. There are two types of resources: single and list types.
A list resource is used to access a collection of single resources (such as a listing of all samples).
The single resource type is used to access details on just one resource (a sample, for example).
It's important to understand how the information in the LIMS has been grouped and structured into resources. To learn more, see Structure of REST Resources.
For namespaces:
An API that uses XML relies on namespaces. In XML, namespaces define the vocabulary of elements and attributes in an XML document. Each REST resource references the XML structure defined by a particular namespace.
When scripting, we use namespaces to look up specific details related to the XML data elements, attributes, and formats that represent a resource. Namespaces also order the subelements of the XML document.
In the current revisions of the API, the PUT and POST methods read the subelements of the XML independent of order, but the namespace still defines the order of the XML provided in GET calls.
For sophisticated write operations and automation of work, you must use a script to communicate with the API. The Cookbook contains examples that demonstrate how to use scripts to perform your work.
When multiple users are working on multiple plates in high-throughput labs, programmers may find that the large number of HTTP method calls to the REST API can slow down their scripts.
To improve performance, Illumina has created the following batch resources:
artifacts.batch.retrieve
artifacts.batch.update
containers.batch.create
containers.batch.retrieve
containers.batch.update
files.batch.retrieve
files.batch.update
samples.batch.create
samples.batch.retrieve
samples.batch.update
Use the batch resources to access a group of artifacts or a group of containers using a single batch method call. Using these resources to iterate a list of items significantly improves script execution times.
Batch resources are best thought of as unordered collections of the items being accessed. A POST to batch/create, batch/update, or batch/retrieve, therefore, is a request to create, update, or retrieve those items. There is no guaranteed order to batch responses.
Batch resources are nonbreaking additions to the existing REST API. Updated scripts can still use their existing nonbatch methods.
For example, the resources may have URIs (Uniform Resource Identifiers) such as:
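/api/v2/artifacts/batch/retrieve
/api/v2/containers/batch/create
/api/v2/samples/batch/update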
Batch operations do not require sophisticated HTTP client or server methods. The only HTTP method for batch resources is POST.
To update a group of artifacts, use a POST operation to the /artifacts/batch/update resource. The XML input payload consists of a series of artifact elements.
As large data transfers can affect performance, it is important to return concise XML in response to a batch resource request. Therefore, except for retrieve resources, the XML output payload consists of a list of the created or updated URI links rather than full representations.
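As an illustrative sketch of a batch call in Python (assuming the requests library; the ri:links payload shape and namespace shown here are the commonly used form for batch retrieve and should be verified against the API Portal for your version):

import requests

BASE = "https://example.claritylims.com/api/v2"   # placeholder server
AUTH = ("apiuser", "password")                    # placeholder credentials

# Name the artifacts to fetch in a single round trip (hypothetical LIMS IDs).
uris = [f"{BASE}/artifacts/2-1000", f"{BASE}/artifacts/2-1001"]
links = "".join(f'<link uri="{u}" rel="artifacts"/>' for u in uris)
payload = f'<ri:links xmlns:ri="http://genologics.com/ri">{links}</ri:links>'

# POST to the batch retrieve resource; the response holds the details of
# every requested artifact in one XML document.
r = requests.post(f"{BASE}/artifacts/batch/retrieve", data=payload,
                  headers={"Content-Type": "application/xml"}, auth=AUTH)
r.raise_for_status()
print(r.text)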
The batch resources use common HTTP return codes:
An HTTP 200 (OK) code is returned when batch resources have been successfully created or updated.
An HTTP 400 error code is returned if the input payload details included incorrect, mixed, or duplicate URI links. For example, if the details of an artifacts.batch.update (list) request included a container resource.
The internal Clarity LIMS API (eg, https://example.claritylims.com/clarity/api) is the API used to deliver the Clarity LIMS web interface. This interface is not typically meant for public consumption. However, some customers use it for troubleshooting and to mitigate system issues.
As of Clarity LIMS v5.1, access to the internal Clarity LIMS API changed to enhance security and prevent Cross Site Request Forgery (CSRF) attacks. Two new HTTP headers must now be present when issuing PUT, POST, DELETE, and PATCH requests:
Origin—This header must be set to the scheme and authority of the server being accessed (eg, https://example.claritylims.com).
X-Requested-With—This header must be set to XMLHttpRequest.
The attached cURL, Python, and Java examples demonstrate how to authenticate and issue internal API requests. These examples assume a Clarity LIMS server at https://example.claritylims.com.
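As a minimal Python sketch of the same idea (assuming the requests library; the endpoint path, server name, and credentials are placeholders):

import requests

SERVER = "https://example.claritylims.com"   # placeholder server

session = requests.Session()
session.auth = ("apiuser", "password")       # placeholder credentials

# Both headers are required on PUT, POST, DELETE, and PATCH requests
# to the internal API as of Clarity LIMS v5.1.
session.headers.update({
    "Origin": SERVER,                        # scheme and authority only
    "X-Requested-With": "XMLHttpRequest",
})

# Hypothetical write request against the internal API.
response = session.post(f"{SERVER}/clarity/api/example", data="{}")
print(response.status_code)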
The REST API methods attempt to return appropriate HTTP status codes for every request. To use the REST API effectively, a good understanding of HTTP and status codes is required. A complete list of HTTP status codes and definitions is available in standard HTTP references online.
The primary status codes used by the REST API are as follows:
200 OK: Success.
201 Created: A resource was successfully created.
400 Bad Request: Invalid data was supplied for the relevant resource type.
401 Unauthorized: The requested resource cannot be loaded until valid logon credentials have been entered. If this error is received after logon credentials have been entered, this indicates that the credentials are not valid.
403 Forbidden: Access to the requested resource has been denied. (Make sure that the authorized user has administrative privileges.)
404 Not Found: The URI requested is invalid or the resource requested does not exist.
413 Request Entity Too Large: The request is larger than the server is willing or able to process.
500 Internal Server Error: A generic error message, given when there is no suitable specific message.
Error messages are returned as exception elements with a message element containing a user-facing error message.
The exception may also include a suggested-actions element with more detail on how to resolve the error.
User-facing XML error messages are not returned for 401 and 403 errors. In these cases, the HTTP error must be resolved.
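For illustration, a small Python sketch that pulls the user-facing message out of an error response (the element names come from the description above; matching on local names side-steps namespace prefixes, which vary):

import xml.etree.ElementTree as ET

def error_message(response_body: str) -> str:
    """Return the text of the message element inside an exception response."""
    root = ET.fromstring(response_body)
    for elem in root.iter():
        # Compare local names so the namespace prefix does not matter.
        if elem.tag.rsplit("}", 1)[-1] == "message":
            return elem.text or ""
    return ""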
The REST API only returns 500 results per request. To work with larger amounts of data, certain resources support the start-index parameter and include previous-page and next-page elements in their responses.
For example, when a request for a long list is submitted to the API, the response includes previous-page and next-page URIs, which allow moving within the pages of results.
Use the start-index parameter to view results from a specified point in a list. The first record in a list is index 0, and any nonnegative whole number can be used as the value. If the value specified is greater than the number of results available from a resource, the system returns an empty list.
By default, the REST API only returns 500 results per request. To change the default number, contact the Illumina Support Team.
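A sketch of walking paginated results in Python (assuming the requests library; the next-page element and its uri attribute follow the description above, and local-name matching is used to side-step namespaces):

import requests
import xml.etree.ElementTree as ET

AUTH = ("apiuser", "password")                            # placeholder
uri = "https://example.claritylims.com/api/v2/samples"    # placeholder

while uri:
    root = ET.fromstring(requests.get(uri, auth=AUTH).text)
    # ... process the records on this page ...
    # Follow the next-page link if the response includes one.
    next_page = next((e for e in root.iter()
                      if e.tag.rsplit("}", 1)[-1] == "next-page"), None)
    uri = next_page.get("uri") if next_page is not None else None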
REST and automation are the key interfaces for scripting. A language-agnostic application programming interface (API) is important to scientists as it allows for broad and diverse integration. Together, REST and automation provide powerful and easy-to-use scripting. However, you first need to understand the conceptual structure and design of these interfaces.
Within the Clarity LIMS Rapid Scripting API, REST technology is used to provide data specifically structured for life science research.
The API documentation includes the terms External Program Integration Plug-in (EPP) and EPP node.
As of BaseSpace Clarity LIMS v5.0, these terms are deprecated. The term EPP has been replaced with automation. EPP node is referred to as the Automation Worker or Automation Worker node. These components are used to trigger and run scripts, typically after lab activities are recorded in the LIMS.
NOTE: If you are new to the REST Web Service, we recommend reading the REST general concepts and resource structure material first.
The REST Web Service is the fundamental data access interface using XML over HTTP. It is agnostic to programming languages as most languages support HTTP and XML with libraries or built-in methods.
In life science research labs, tracking samples and the data associated with biology, research, and lab work is complex. The REST resources return information that is human-readable and interpretable. Use a web browser to explore the XML returned.
REST represents real laboratory items and activities in self-contained groups of data called resources. It provides access to recorded lab steps and to sample test results, and it provides this access using resources. For example:
The process and steps resources track the steps in the lab in terms of who did what and when.
The sample and artifact resources contain information on the submitted sample and test results on sample derivatives (also referred to as derived samples).
The REST resources and their relationships are explained in Structure of REST Resources.
The full details of each resource are described in the API Portal.
Requests are made to the API by sending XML messages:
POST is used to create an item.
GET is used to read an item.
PUT is used to update an item.
DELETE is used to delete an item.
Note: HEAD requests are not supported.
The full URL to which requests should be sent will vary depending on the specific installation, but will generally follow this format:
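https://<your_server_name>/api/v2/<resource>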
Automation / EPP is used to trigger scripts from within the Clarity LIMS interface.
Script-triggering is often used because the data collected needs to be dispatched for further processing. Automating data processing and returning information, in the appropriate format, to the lab for immediate use increases efficiency and quality.
File handling and file management are fundamental elements in life science scripting. When triggered, scripts can issue a command, transfer files for processing, and collect and transfer files back to the server. To enable triggering of scripts in any programming language, the information and files are provided for batch processing at the operating system command line level.
As of Clarity LIMS v5, the Operations Interface Java client, which was used by administrators to configure processes, consumables, user-defined fields, and users, has been deprecated. All configuration and administration tasks are now executed in the Clarity LIMS web interface.
To use automation, administrators complete the following steps:
In Clarity LIMS, create and configure master steps.
Configure automations that trigger scripts. Enable those automations on the master steps.
Use the configured master steps as building blocks to create and configure steps to be run by lab scientists.
When submitting a GET request to certain REST API resources (also known as list resources), the system returns a list of records. For example, submitting a GET request to the samples resource returns a list of all submitted samples stored in the system. Depending on the resource being used, use various query parameters to filter the records based on certain criteria. For more information about the parameters that are available, refer to the reference documentation for the desired resource.
To filter a list, the resource and parameter must be separated with a question mark (?). The parameter and the value you want to base the query on must be separated with an equal sign (=).
When filtering a list of artifacts, combine parameters within the same query statement. You can also repeat certain parameters, specifying a new value with each occurrence of the parameter.
The first parameter must be preceded with a question mark (?). Add additional parameters by separating each parameter with an ampersand (&).
Examples of repeating a parameter with new values and of combining parameters are shown below.
When combining or repeating parameters, each record returned matches one of the parameter values, or all the parameter values, depending on the usage:
If the query statement contains multiple values for the same parameter, the ampersands are treated as an OR.
If the query statement contains values for multiple parameters, the ampersands are treated as an AND.
For example, if a project LIMS ID and a process type are provided as parameters, the system returns only the files that match both the project LIMS ID and the process type. To see the files that match the project LIMS ID or the process type, issue two separate GET requests and combine the results.
/api/v2/processes—This URI returns all processes run in the system.
/api/v2/processes?type=MALDI—This URI returns all MALDI processes run in the system.
/api/v2/processes?type=Sample Prep&type=MALDI—This URI returns all MALDI or Sample Prep processes run in the system.
/api/v2/containers—This URI returns all containers in the system.
/api/v2/containers?type=Tube—This URI returns all tubes in the system.
/api/v2/containers?type=Tube&name=27-111&name=27-112—This URI returns the tubes in the system that are named 27-111 OR 27-112.
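In a script, query parameters can be supplied as a dictionary; a sketch using the Python requests library (server name and credentials are placeholders), where a list value repeats the parameter and so produces the OR behavior described above:

import requests

AUTH = ("apiuser", "password")                     # placeholder
BASE = "https://example.claritylims.com/api/v2"    # placeholder

# A list value repeats the parameter: name=27-111&name=27-112 (OR).
params = {"type": "Tube", "name": ["27-111", "27-112"]}
r = requests.get(f"{BASE}/containers", params=params, auth=AUTH)
print(r.url)    # .../containers?type=Tube&name=27-111&name=27-112
print(r.text)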
Certain resources include the last-modified query parameter.
When used, the system displays only the results that have been modified since the last-modified date. The last-modified date is represented in ISO 8601 complete date format, including hours, minutes, and seconds: YYYY-MM-DDThh:mm:ssTZD (for example, last-modified=2019-02-15T10:30:00Z).
Lists of records are often large, spanning multiple pages. Many list resources include parameters that are used to work with paginated results.
Certain resources include parameters that can be used to filter the results displayed based on UDF information that is associated with the results:
udf.UDFNAME[.OPERATOR]=UDFVALUE—This parameter filters the results based on a specified value for a specified UDF. Any item that contains the value for the UDF is returned, unless parameters include an optional operator filter ( [.OPERATOR] in the expression provided). The lowercase filter operators of min or max are described later.
udt.name=UDTNAME—This parameter filters the results based on a specified UDT. Any item that has the UDT selected is returned.
udt.UDTNAME.UDFNAME[.OPERATOR]=UDFVALUE—This parameter filters the results based on a specified value for a specified UDF that resides within a specified UDT. Any item that contains the value for the UDF is returned, unless parameters include an optional operator filter ( [.OPERATOR] in the expression provided). The lowercase filter operators of min or max are described later.
To filter results using UDF information, use the udf.UDFNAME[.OPERATOR]=UDFVALUE query structure.
To filter results using the name of a UDT, use the udt.name=UDTNAME query structure.
To filter results using UDF information that is part of a specific UDT, use the udt.UDTNAME.UDFNAME[.OPERATOR]=UDFVALUE query structure.
When filtering lists and using date or numeric UDF or UDT values, use operators to restrict a query. The following operators are supported:
.min—This operator displays results that are greater than or equal to the specified value.
.max—This operator displays results that are less than or equal to the specified value.
Examples:
/api/v2/processes?type=Sample Prep&udt.name=Plasma—This URI returns all Sample Prep processes run in the system that have the Plasma UDT selected.
/api/v2/processes?type=Sample Prep&udt.Plasma.Platelet Count.min=50—This URI returns all Sample Prep processes that have a UDF named Platelet Count with a value of 50 or greater, within a UDT named Plasma.
/api/v2/processes?type=Sample Prep&udf.Sample=Serum&udf.Sample=Tissue—This URI returns all Sample Prep processes that have a UDF named Sample with a value of Serum OR a UDF named Sample with a value of Tissue.
More examples of filtering exist in the Cookbook.
When filtering with UDT or UDF parameters, all special characters in the parameter string must be URL encoded. The pipe ( | ) or the URL-encoded pipe ( %7C ) cannot be used.
When filtering on a UDF that is configured as a Multiline Text UDF, if a value contains a hard return, the value must include the URL-encoded line feed (%0A) at the appropriate location. Depending on how API requests are issued (via a browser or a script), spaces in names or values may require URL encoding, and trailing spaces in a name or value always require encoding. For example, for results to be returned, 'name ' must be submitted as 'name%20'.
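Python's standard library can produce the required encoding; a small sketch (the UDF name and value are hypothetical):

from urllib.parse import quote

udf_name = "Platelet Count"   # hypothetical UDF name containing a space
udf_value = "50"

# safe="" encodes every reserved character, so the space becomes %20.
query = f"udf.{quote(udf_name, safe='')}.min={quote(udf_value, safe='')}"
print(query)  # udf.Platelet%20Count.min=50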
When submitting a request to the REST API, specify the version of the API being used. The version number is a path parameter in each resource URI. The desired API version is substituted into the request URI as follows:
Changes to the API are tracked with a version number (version major) and a revision number (version minor).
The version number indicates forwards and backwards compatibility.
The revision number within the version describes features added to the API that will not negatively affect current functionality.
Only the version number is referenced as part of the request. The revision number simply tracks incremental enhancements to the API.
When a new version of the API is released, update the scripts and code as soon as possible.
To find out what version of the API is available from a given server, submit a GET request to the base API URI. For example, in a web browser, browse to https://<your_server_name>/api.
The system returns the supported version information in the response.
The API was originally intended for internal use or for just a few customers. In those early days, API versioning was different. If working with legacy scripts, this older functionality can be maintained. For example, if scripts were written before v2 and have nondefault system configuration properties for api.prefix and api.rewrite on the server, the …/api/ URI lists the resources and does not provide version information.
To remove a UDF or UDT value, submit a PUT request with the desired UDF or UDT omitted from the XML.
When submitting a PUT, it is critical to update all information. The submitted XML must include all the current UDFs and UDTs for the resource. If the field and type elements for a UDF and UDT are not included, the system removes those fields and types.
To update the UDF information for an item, the PUT request can add new UDF values and update or remove current UDF values. When working with UDTs, replace the current UDT with another UDT, or add or remove fields within the current UDT.
Update all UDFs and UDTs in a PUT
Even if the current user-defined values are not changing, include the current UDF and UDT values in the XML representation for a PUT request.
Data formatting of the UDF and UDT values is important when filtering a resource list with a query parameter. When using UDF or UDT values as a query parameter, all nonalphanumeric characters must be URL encoded.
UDFs and UDTs are presented as fields and types in the XML that a GET request returns for a resource such as a sample.
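A sketch of working with these fields in Python (assuming the requests library; the server, sample, and UDF name are placeholders, and the udf namespace URI shown is the commonly used one, to be verified against your server's output):

import requests
import xml.etree.ElementTree as ET

AUTH = ("apiuser", "password")                                  # placeholder
uri = "https://example.claritylims.com/api/v2/samples/EXA101"   # placeholder

# GET the full representation. UDFs appear as field elements in the udf
# namespace, along the lines of:
#   <udf:field type="Numeric" name="Platelet Count">50</udf:field>
root = ET.fromstring(requests.get(uri, auth=AUTH).text)

UDF_FIELD = "{http://genologics.com/ri/userdefined}field"
for field in root.iter(UDF_FIELD):
    print(field.get("name"), field.get("type"), field.text)
    if field.get("name") == "Priority":    # hypothetical UDF name
        field.text = "High"                # change one value in place

# PUT the complete document back; a UDF omitted here would be removed.
requests.put(uri, data=ET.tostring(root), auth=AUTH,
             headers={"Content-Type": "application/xml"})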
XML resource representations do not render UDF values in the same format as views in the client user interface. The table later in this section compares images taken from the client user interface and the XML from an HTTP GET.
The differences are intentional, to remove ambiguity and aid script writers when handling the data values.
The following table compares the values displayed by the user interface and the API for UDF data types.

Configured Data Type | API XML Response Element Type Name | Client Display | API Element Type Format
---|---|---|---
Single-line Text | String | Leading and trailing spaces rendered | Leading and trailing spaces rendered
Multi-line Text | Text | Leading and trailing spaces rendered | Leading and trailing spaces rendered
Numeric | Numeric | Set by display-precision (ie, 4.5300 is displayed when display-precision equals 4) | Simplest numeric form, removing trailing zeros (ie, 4.53)
Date | Date | mmmm dd, yyyy (ie, Feb 15, 2019) | yyyy-mm-dd (ie, 2019-02-15)

Trailing zeros are removed to support both integer and real numerics. Determine the significant digits by looking up the display-precision element of the /configuration/udfs/{udfid} resource.

When Date UDFs are expanded on the EPP command line, their format differs from the one used in the Clarity LIMS GUI and the REST API.

The following table shows how API terminology maps to terminology used in the Clarity LIMS v5.x interface.

API Terminology | Clarity LIMS Terminology | Notes
---|---|---
Analyte | Derived sample | Not applicable
Artifact | An item that is input to or generated by a step. Derived samples and measurements are both artifacts. | Not applicable
Lab | Account | Accounts are not fully supported in the Clarity LIMS v5.x web interface. However, lab is supported in the API.
Process | Step | step and process both exist in the API. While related, they are not synonyms and have different uses.
Process type | Master step | Not applicable
Researcher | Client or user | Not applicable
Resultfile | Measurement or file / file placeholder | A file could be a log file that is shared across all samples in the step or a file that belongs to a single sample, such as an Electropherogram.
Sample | Submitted sample | The original sample submitted to the system.
UDF | Custom field | User-defined types (UDTs) are not supported in the Clarity LIMS v5.x web interface. However, udt is supported in the API.

See also the Terms and Definitions section.

For more examples of filtering, see the Cookbook.
Understanding lab information management in a scientific context is one of the more powerful skills in genomics research today. The Clarity LIMS Rapid Scripting™ API is designed to use these skills, allowing a knowledgeable scientific programmer to adapt lab informatics with scripts and automation.
NOTE: Based on experience working with bioinformaticians and scientific programmers, assumptions about your background, setup, and skills have been made.
Before using the API Cookbook, set up a nonproduction scripting sandbox server (see Useful Tools).
If any of the topics covered on this page are a concern, contact the Illumina Support team for additional training or custom scripting services.
Within the Cookbook, the term scripting refers to programs running independently of the client and server that direct the input and output of information. Use scripts and the API for file handling and text processing in the context of biological samples, containers, and instruments.
This API Cookbook assumes that you can program in modern computer languages, and are comfortable with scripting and bioinformatics.
The topics are best understood by those users who can program small applications and are experienced with experimental processes in molecular biology.
The topics assume that you have received administrator-level training or know how to configure the system. The topics also assume that a nonproduction server is set up to play with cookbook examples, develop real scripts, and test before deploying in production.
Be comfortable with the following skills:
XML
System file handling
General-purpose scripting languages
Working on the command line
Illumina provides multiple server licenses for API users: a production server license and one or more non-production server licenses for developing and testing.
To allow developers to design, build, test, and upgrade efficiently, it is recommended to install at least two servers. Installing three is even better.
The non-production server licenses serve the following purposes:
To provide a sandbox in which to experiment with the API and the system configuration.
To provide a verification platform for upgrading scripts, software components, and overall system integration before deploying to production.
All the examples in the Cookbook are intended to be used with the nonproduction scripting sandbox server. See Useful Tools.
If you do not have the time or resources to use the API, but are interested in expanding your implementation, contact the Illumina Support team. There are various consulting, training, and scripting services available.
As of BaseSpace Clarity LIMS v5.0, several terms have been deprecated:
External Program Integration Plug-in (EPP) has been replaced with automation
EPP/AI node has been replaced with automation worker / AW node
Parameter has been replaced with token
User defined field (UDF) has been replaced with custom field
When a job is dispatched to the AI node/automation worker, the following steps occur:
A temporary working directory is created on the AI node / automation worker:
In AIInstallDirectory/temp/
With a unique name including the client process LIMS ID.
The command configured and selected as part of the step run in the LIMS is then sent to the AI node / automation worker, with any specified parameters / tokens replaced with actual values.
The command is executed on the AI node / automation worker, spawning step execution using the temporary working directory as the working directory context.
Script processing can use stdout, stderr, and return codes following standard shell programming packages.
When the script exits, the AI node/automation worker automatically retrieves any files with matching LIMS IDs from the temporary working directory. The files are attached to the appropriate output file placeholders.
The automation API infrastructure can be used alone or with the REST API infrastructure.
For example:
Simple scripts can use automation parameters/tokens and data files directly from the current working directory. They can write results back to the current working directory, associating them back to the relevant placeholders in Clarity LIMS.
More advanced scripts can also use the REST API infrastructure to retrieve additional required information and place relevant data back into UDFs/custom fields. Advanced scripts can also attach and associate data files to placeholders, which may be in different locations, while the script is still running.
Clarity LIMS version 4.0 introduced architectural changes that enforce SSL-based security. As a result, the structure of the URIs that reference the Clarity LIMS API was modified, and scripts written before Clarity LIMS v4.0 may require updating.
Scripts that use the API do so by using RESTful methods on specific URIs. The base portion of the URI references the server on which the Clarity LIMS application is running.
Before Clarity LIMS v4.0 the base portion of the URI took the following form:
http[s]://<your_server_name>:<your_port_number>/api
Where:
The protocol could either be HTTP or HTTPS, depending on whether the application was SSL-enabled or not.
<your_server_name> represented the fully qualified domain name (or IP address) relating to the server on which the Clarity LIMS application was running.
<your_port_number> represented the port number (typically 8080) on which the Clarity LIMS application was listening.
In Clarity LIMS v4.0 and later, the base portion of the URI is in the following form:
https://<your_server_name>/api
Where:
The protocol must be HTTPS, because the Clarity LIMS application is now installed with SSL enabled.
The server name must match the certificate that was purchased and installed into Clarity LIMS.
The port number (and the colon) is no longer required. Do not provide it.
The following information should help determine if updates to the scripts are needed.
Scripts generally determine the API URI in one of the following ways:
The URI is passed to the script by the automation or External Program Plugin (EPP) component, as a parameter or command-line argument.
The URI is passed to the script by another script or a command line embedded in a crontab file.
The script contains the URI as a hard-coded string literal.
The script determines the fully qualified domain name of the server and adds the prefix (http://) and suffix (:8080) accordingly.
The script imports, or includes a file that contains, the URI.
Most scripting uses methods one or three. However, other methods may be used in the facility.
If method one is used, it is not necessary to update the scripts because Clarity LIMS passes in the new form of the URI.
If other methods are used, you likely need to update the scripts to convert the URI to the new format. Often, a search and replace tool is able to make these changes.
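For scripts that hard-code the URI (method three), a small search-and-replace script can convert the old form to the new one. A sketch in Python, assuming the old URIs follow the pre-v4.0 pattern exactly:

import re

def modernize_uri(text: str) -> str:
    """Rewrite http[s]://host:port/api URIs as https://host/api."""
    return re.sub(r"https?://([^:/\s]+):\d+/api", r"https://\1/api", text)

print(modernize_uri("http://lims.example.org:8080/api/v2/samples"))
# -> https://lims.example.org/api/v2/samples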
To make sure that the correct locations are searched, keep in mind that scripts are often stored in the following locations:
In the /opt/gls/clarity/customextensions folder (and subfolders) on the server where Clarity LIMS is running. This location is the domain of the default Automation Worker (AW)/Automated Informatics (AI) node, which listens on the channel name of limsserver.
If there are additional AW/AI nodes on the server, in the folders used by these nodes.
If there are additional AW/AI nodes external to the Clarity LIMS server, within the folders used by these nodes.
If scripts are launched by cron, or other mechanisms, they could be stored anywhere and may not even be on the Clarity LIMS server itself.
For points 1–3, query Clarity LIMS (via either the API or the database) to produce a listing of all the scripts it is configured to use. From that listing, determine the node on which each script runs and its location.
For point 4, there is no easy answer. Hopefully, if the script is important, the location has been documented.
As of Clarity LIMS v5.0, the terms External Program Integration Plug-in (EPP), EPP node, and AI node are deprecated.
The term EPP has been replaced with automation, while the Automated Informatics (AI) node is referred to as the Automation Worker (AW) node.
The BaseSpace Clarity LIMS Rapid Scripting™ API adapts lab informatics using the Clarity LIMS platform.
It is important to integrate scripting into the overall processes. Begin by identifying any areas that may require adaptation to fit the lab workflow. It also helps if users are involved in the early stages of the software system analysis process.
Most scripts in an implementation are finalized towards the end of the process, as the full impact and benefits of the new system become clear.
Take some time to become familiar with the user interface, learn how to configure the product, and work with the tools that the lab uses. Also, establish the workflows and the configuration of the system before investing in API scripts and automation.
New customers receive administrator-level training before working with the API.
If you are not comfortable configuring steps, custom fields, containers, etc., in Clarity LIMS, you may find the API material difficult to understand. Contact Illumina for more information on administrator training and training materials.
Before committing time and resources to using the API, it is important to define what you would like to accomplish. Understanding the key outcomes, use cases, users, and constraints of the lab helps with learning the API more quickly and improves efficiency.
If you require assistance, Illumina can provide expert resources to audit and analyze the laboratory users, processes, workflows, instrumentation, data production, and environment. This careful and focused analysis results in a requirements specification that provides extensive value to the facility.
Together, REST and External Program Integration Plug-ins (EPP)/automation provide powerful and simple-to-use scripting. Before working with the REST API, understand the conceptual structure and design of these interfaces.
The links below provide overview information to help you get started, a self-training Cookbook guide with example scripts, and videos that supplement the API training materials.
Clarity LIMS v6.3 - v2 r34
Clarity LIMS v6.2 - v2 r33
As of BaseSpace Clarity LIMS v5, the Operations Interface Java client, which was used by administrators to configure processes, consumables, user-defined fields, and users, has been deprecated. All configuration and administration tasks are now executed in the Clarity LIMS web interface.
In addition, several terms have been deprecated:
External Program Integration Plug-in (EPP) has been replaced with automation
EPP/Automated Informatics (AI) node has been replaced with automation worker / AW node
Parameter has been replaced with token
Use step automations to trigger a command-line call on a process/step or a file attachment event. The steps required differ depending on the LIMS version.
This article provides an overview of the steps required to configure automations and automation triggers. For detailed version-specific instructions, see the following documentation:
Clarity LIMS v6 reference guide > Configuration > Automations
On the main menu bar, click Configuration, and then click the Automation tab.
On the Automation configuration screen, on the Step Automation tab, add a new automation:
Name the automation.
Set the channel name.
Define the command line.
Enable the automation on the desired steps.
On the Master Step Settings or Step Settings screen of the related step, set the following:
Trigger Location—The stage at which the script is to be initiated (beginning of step, end of step, on entry to/exit from a screen, etc.).
Trigger Style—How the script is to be initiated (automatically or manually when the user selects a button in the interface).
For more information, see the API Training Videos.
Clarity LIMS automations typically call scripts or third-party programs written for a shell or command-line interpreter on either a Linux or Windows operating system (OS). Although any system shell is acceptable, Bash is recommended.
Depending on the systems that integrate with the given automations, various restrictions apply to the string parameters/tokens and formatting used in the automation command line.
As of Clarity LIMS v5, several terms have been deprecated:
External Program Integration Plug-in (EPP) has been replaced with automation
EPP/AI node has been replaced with automation worker / AW node
Parameter has been replaced with token
Environment variables can be used to aid in configuration. However, automation commands are generated with a limited shell. For full access to environment variables, the recommended practice is to have the command instantiate a 'full' user shell. For example, for Bash, use a command of the following form:
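bash -l -c "<your command>"

(A representative form; the -l flag requests a login shell, one common way to load profile-defined environment variables.)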
This procedure provides the following advantages:
Ensures updates of the environment variables, removing the need for repeated AI node/automation worker restarts.
Ensures access to all environment variables, including the full path to Groovy.
Allows certainty of the shell being used.
The various operating system (OS) shells each have their own rules and regulations. When creating command-line strings, be aware of the considerations described in the following sections.
Windows shell command-line interpreters require different syntax and formatting than Linux shell variants. For example, the following two commands are functionally identical, but are formatted for AI nodes/automation workers running on different operating systems (representative commands, modeled on the validation examples later in this article).
On Linux:
bash -c "echo Automation Test > {outputFile0}.txt"
On Windows:
cmd /c "echo Automation Test > {outputFile0}.txt"
Most of the examples in this specification use Windows formatting, because Windows is the most common platform found in the lab.
Spaces
Spaces in paths, file names, or parameter/token data can cause commands to be misinterpreted as information passes between systems.
Many OS shells automatically parse command-line contents by space, which may not be what is intended. Enclose commands in double quotes (" ") to avoid misinterpretation of spaces by the OS shell command-line interpreter.
Special characters
OS shell command-line interpreters can attempt to interpret and act upon certain special characters, rather than passing them along as textual information. A character can have a rule applied to it within one OS shell environment, and a different rule under another environment. To use a character in its literal form, escape the characters. The escape character used varies depending on your OS shell. The most common escape character is the backslash character.
The most common OS shell characters that require escaping are:
To make sure that a configured command in the client is properly interpreted, test it on the AI node/automation worker machine command line.
This section provides information to help you work with Clarity LIMS automation tokens in Clarity LIMS v5 and later.
Scripts produce a numeric code on exit. By convention and by default, a successful exit has a code of 0 (zero). Within error handling, select different nonzero exit codes to indicate various error conditions.
NOTE: For Clarity LIMS v5.0, the term External Program Integration Plug-in (EPP) is deprecated and replaced with automation.
Though logging information in the Clarity LIMS user interface is useful, scripts can also write debugging/troubleshooting information into the automatedinformatics.log file using stderr (standard error). When a line is printed to stderr, a [WARN] line is written to this log, which is useful for troubleshooting interactions between the script and the automation programs. For more information on the automatedinformatics.log file, see Troubleshooting Automation. For more information on using stderr on the command line, refer to Unix and Windows documentation on standard I/O streams, especially standard error.
In Clarity LIMS, the last line written to stdout (standard out) is automatically captured and shown in the interface.
If the exit code is zero, the message displays in green. If the exit code is nonzero, the message displays in red.
The Operations Interface Java client is deprecated in Clarity LIMS v5. All configuration and administration tasks are currently executed in the LIMS web interface.
If the script exits with a nonzero code, a sample genealogy flag is automatically added to the process outputs, with a standard error message indicating there is an External Program Error.
If the script is complex, or includes several error conditions, configure an additional result file process output (eg, result.log) designed to capture status and error information. Make the external script write additional information to this file. If an error occurs, the file is still captured in the client, and is available to view for troubleshooting purposes.
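The following minimal Python sketch (illustrative only; the messages are placeholders) shows these conventions together: stderr for log detail, the final stdout line for the user-visible message, and the exit code for success or failure:

import sys

def main() -> int:
    try:
        # ... perform the real work of the script here ...
        print("progress detail for the log", file=sys.stderr)  # becomes a [WARN] log line
        print("Step script completed successfully")  # last stdout line, shown in green
        return 0                                     # zero exit code = success
    except Exception as exc:
        print(f"failure detail: {exc}", file=sys.stderr)
        print("Step script failed - see log for details")  # shown in red
        return 1                                     # nonzero exit code = error

if __name__ == "__main__":
    sys.exit(main())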
The automation and integration of the day-to-day work in the lab requires different Automated Informatics (AI) nodes/automation workers to perform different tasks.
For BaseSpace Clarity LIMS v5.0, several terms are deprecated:
Automation replaced External Program Integration Plug-in (EPP).
Automation Worker (AW) node replaced EPP/AI node.
Channels are manually named, and ideally clearly represent the task performed (eg, Type_2_Analysis). To make sure that dispatched automation work is routed to the correct destination, specify a channel in the following places:
On the AI node / automation worker
Clarity LIMS v5 and later: When configuring step and derived sample automations on the Automation tab
When the automation trigger conditions are met in the LIMS, the automation job first enters a channel-specific 'first in, first out' (FIFO) queue of work for completion.
Jobs queue in this channel until one of the AI nodes/automation workers operating on the channel completes its previous work, and indicates it is free to accept more.
The next job is then dispatched from the channel queue to the node. This strategy allows a single channel queue to receive service by one, or many, AI nodes/automation workers servicing the specified channel.
It is possible to have multiple AI nodes/automation workers performing the same type of work all configured on the same channel, allowing a simple but effective way to increase throughput of a particular analysis bottleneck, or to ensure redundancy during a single node failure.
This section provides tips and tricks to help you work efficiently with the API. For example, learn how to copy and update field values, create and rename samples, work with files and QC flags, and automate BCL conversion.
Clarity LIMS v4 and later
Automation is powerful and simple in design. However, its applications can quickly become complex. We recommend you keep your scripts simple. When troubleshooting, the best practice is to isolate the issue to determine the source. The Automated Informatics (AI)/automation worker log file, automatedinformatics.log, is useful for isolating system components and diagnosing problems.
Isolate the behavior of each component in the system. In particular, determine whether the following components can be ruled out as the cause of the problem:
The script or program—Running the custom logic, REST calls, and file handling.
The AI nodes/automation workers—Calling the command line and invoking the script or program.
The network—Providing reliable and timely TCP/IP packet transfers.
The client—Completing the process/step and notifying the server.
The server—Responding to client notifications and dispatching to AI nodes/automation workers.
The script provides many options for troubleshooting. For example, increase logging to rule out unexpected behavior.
Printing to stderr in the script writes a line to the automatedinformatics.log file. This file is a great source of information.
The records in the log file allow for emulating the command-line call for unit testing the script, and calling it manually on the command-line prompt (see Review the automatedinformatics.log file, below).
The validation procedure described below helps verify the automation program on the AI node/automation worker. If the script and the AI node/automation worker are functioning, review the log file entries for any warning (WARN) or error (ERR) lines near the time-stamp of the process completion event sent from the client.
If the issue is related to the client or server software, contact the Illumina Support team, providing:
The automatedinformatics.log file
The server log
The results of the isolation tests (in the previous section).
Use the following steps to test and verify the setup.
Create a process/step that generates a result file.
Configure an automation on the process/step. Associate it with the channel on which the AI node/automation worker is configured to communicate.
Add the following command line string.
cmd /c "C:\ai\ai.bat {outputFile0}"
On the AI node/automation worker machine, create an ai folder in C:\ so the system has an C:\ai path. Create a new file named ai.bat.
Edit the ai.bat file and add the following line:
echo Data for Output File LIMS ID %1 > %1.txt
Run the process/step created on an existing attached result file.
The step passes the LIMS ID of its output file placeholder to the script.
The script creates a file in the working directory.
When the script exits, this file transfers to the LIMS and is associated with the step. The file contains a single line of text that includes the LIMS ID of the output file for easy verification.
Create a process/step that generates a result file.
Configure an automation on the process/step. Associate it with the channel on which the AI node/automation worker is configured to communicate.
Add the following command line string:
bash -c "echo Automation Test > {outputFile0}.txt"
Run the process/step created on an existing attached result file.
The step passes the LIMS ID of its output file placeholder to the script.
The script creates a file in the working directory with a file name that contains this LIMS ID.
When the script exits, this file transfers to the LIMS and is associated with the step. The file contains the text "Automation Test". When opened, the file opens in the default program associated with *.txt files.
With automation, the command-line information is important. The actual values sent on the command line are recorded in the AI log file as the AI node/automation worker receives them. Copy the parameters/tokens from the log and use them on the command line to troubleshoot. If scripting in Groovy, the cli class handles command-line tokens well. See the example *.groovy files used with automation and the utility class section in Work with EPP/Automation and Files.
AI nodes/automation workers are installed using the Automated Informatics (Automation Worker for LIMS v5 and later) software package.
In the installation directory:
Find the /log directory, which contains an automatedinformatics.log file.
Use this file to locate log lines near the time of the process/step completion event.
Locate the log line containing the parameter/token command string, and manually run and test the script.
Locate the working directory to review temporary files created.
This step can indicate if the cause of the error lies with a network or other issue external to the computer running the AI node / automation worker.
If no records are found, use the validation procedure described earlier to confirm that logging is functional.
Running the script manually forms a unit test. The script is run on the command line, without being invoked by automation.
To locate the line containing the command-line string, search for the Command string, or externalprogram.runExternalProgram.
An example is shown in the following abridged log file section. Copy the line to a text file and modify it for script testing.
2014-02-28 21:24:48,688 INFO ... definitions.behaviour.automatedinformatics.plugins.externalprogram.runExternalProgram as ...
2014-02-28 21:24:49,323 INFO ... (ExternalProgramBehaviour.java:133)... Command string: bash -c "~/scripts/HelloWorld.sh http://###.###.###.###/api/v2/processes/A30-MXX-110228-24-2320 > 92-2869.txt"
This step applies when temporary files are left behind by the script. The automation infrastructure removes the temporary working directory unless there was an error; in case of an error, the directory is retained and provides clues to the root cause of the error.
To locate the working directory, search for "Working directory:" in the log file.
An example is shown in the following abridged log file section. Use the recorded directory to list and review temporary files.
2014-02-28 21:24:49,324 INFO ... Working directory: /home/gls/GenoLogicsAutomatedInformatics/temp/runExternalProgram-28022014-432350866632555775.A30-MXX-110228-24-2320
2014-02-28 21:24:49,324 INFO ... Retrieved files. Executing command.
As of BaseSpace Clarity LIMS v5.0, several terms have been deprecated:
Automation replaces External Program Integration Plug-in (EPP). In LIMS v4.x and earlier, the Operations Interface still uses the term EPP.
Automation worker/AW node replaces AI node.
Token replaces Parameter.
Step replaces Process in the web interface.
The incoming message contains the following:
Project ID or Name
Sample ID or Name
Container ID or Name
Container type (plate / tube type)
Container well position (if sample is on a plate), eg, G:2
Sample user-defined fields (UDFs) / custom fields
POST to https://your_server/api/v2/samples:
We receive something like the following:
POST to https://your_server/api/v2/projects
We receive something like the following:
POST to https://your_server/api/v2/containers:
We receive something like the following:
POST to https://your_server/api/v2/containers:
We receive something like the following:
POST to https://your_server/api/v2/samples:
We receive something like the following:
GET: https://your_server/api/v2/projects?name=Week%2039
If the project exists, we receive something like the following:
If the project does not exist, we receive something like the following:
GET: https://your_server/api/v2/containers?name=Example%20Container%2020140910
If the container exists, we receive something like the following:
If the container does not exist, we receive something like the following:
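A sketch of this check-then-create pattern in Python (assuming the requests library; the server name and credentials are placeholders, and the POST payload itself is omitted - see the API Portal for the exact element structure):

import requests
import xml.etree.ElementTree as ET

AUTH = ("apiuser", "password")          # placeholder credentials
BASE = "https://your_server/api/v2"

def project_exists(name: str) -> bool:
    """GET the projects list filtered by name; a project element means a hit."""
    r = requests.get(f"{BASE}/projects", params={"name": name}, auth=AUTH)
    root = ET.fromstring(r.text)
    return any(e.tag.rsplit("}", 1)[-1] == "project" for e in root.iter())

if not project_exists("Week 39"):
    pass  # POST a new project to {BASE}/projects here (payload omitted)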
In a highly automated workflow, a lab gains little value from manually selecting samples into the ice bucket and then transitioning them through a step. Ideally, upon completion of one step, a following step could be automated such that the output analytes were transitioned through to the Record Details screen.
The Clarity LIMS External Program Plugin (EPP)/automation system cannot aid in this transition. The last point at which an automation can be triggered is before the step completion.
This scenario requires a stand-alone API application, which can be run by an automation at the end of a step.
Using this approach, a standalone app would poll the API until each of the output analytes from the previous step were queued for the next step. After they are queued, they can be walked through to the Record Details stage.
The steps are as follows:
EPP / automation triggers at step completion and launches an API app as a new Linux process and then finishes. The parameter for the API app is the URL for the current process.
API app polls to see if each output analyte is queued.
Use the artifacts batch endpoint (api/v2/artifacts/batch/retrieve) to poll.
Check the last workflow-stage node within workflow-stages and look for status="QUEUED".
API app moves the output analytes through the step to Record Details.
Use the /api/v2/steps endpoints to start the step and then move the analytes forward.
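A condensed Python sketch of this polling loop (assuming the requests library; the ri:links payload shape is the commonly used batch retrieve form, local-name matching side-steps namespaces, and the artifact URIs are placeholders):

import time
import requests
import xml.etree.ElementTree as ET

AUTH = ("apiuser", "password")        # placeholder credentials
BASE = "https://your_server/api/v2"

def local(tag: str) -> str:
    return tag.rsplit("}", 1)[-1]

def all_queued(artifact_uris) -> bool:
    """Batch-retrieve the artifacts and check each one's last workflow-stage."""
    links = "".join(f'<link uri="{u}" rel="artifacts"/>' for u in artifact_uris)
    body = f'<ri:links xmlns:ri="http://genologics.com/ri">{links}</ri:links>'
    r = requests.post(f"{BASE}/artifacts/batch/retrieve", data=body, auth=AUTH,
                      headers={"Content-Type": "application/xml"})
    root = ET.fromstring(r.text)
    for artifact in (e for e in root.iter() if local(e.tag) == "artifact"):
        stages = [e for e in artifact.iter() if local(e.tag) == "workflow-stage"]
        if not stages or stages[-1].get("status") != "QUEUED":
            return False
    return True

uris = []  # the output analyte URIs from the completed step
while not all_queued(uris):
    time.sleep(5)   # poll until every output analyte is queued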
A lab may receive samples submitted from various sources. This can pose a problem with regards to sample names. There may be duplicate sample names and/or various name formats, all of which make it hard for lab scientists to recognize a sample.
Clarity LIMS programmers often rename all incoming samples to follow a certain naming convention.
This section provides an example to address this problem.
When accepting a project and its samples, the receiving lab scientist runs a Clarity LIMS step named Receive Samples.
The underlying Receive Samples process type / master step is configured with analyte (sample) inputs, and no analyte outputs.
A shared result file output is configured to capture logging from the script.
The sample name could be a derivative of the Sample LIMSID, with a prefix.
Because the LIMSID is guaranteed to be unique, this approach mitigates any need to maintain an external sequence of numbers.
The Sample LIMSID is derived from the Project LIMSID, which is configurable.
The Receive Samples process is configured to trigger a script that renames the samples that are input to the process.
This trigger also passes the OriginatingProcessURI to the script. This example assumes that the original submitted sample name must be preserved, and so it is saved in a sample UDF.
The following pseudo code shows how one might implement the sample-renaming script:
Connect to the API, using the OriginatingProcessURI.
Retrieve the OriginatingProcessXML and store it in a variable.
Iterate through the input-output map of the OriginatingProcessXML, and for each InputArtifact:
GET the InputArtifactURI and store the input ArtifactXML in a variable.
From this ArtifactXML, GET the SourceSampleXML and store it in another variable.
Modify the SourceSampleXML. To do this:
Rename the SampleName to a desired name (see Recommendations section, above).
Store the original SampleName in the designated sample UDF, so the submitted name is preserved.
Finally, PUT the Sample XML back.
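A minimal Python sketch of this pseudocode follows (hedged: the requests library, the credentials, the 'Submitted Name' UDF, and the 'CLIN-' prefix are illustrative assumptions, and unpooled inputs are assumed):

```python
import requests
from xml.etree import ElementTree as ET

AUTH = ('apiuser', 'apipassword')               # illustrative credentials
UDF_NS = 'http://genologics.com/ri/userdefined'

def rename_inputs(process_uri):
    # Retrieve the originating process XML.
    process = ET.fromstring(requests.get(process_uri, auth=AUTH).text)
    # Iterate through the input-output map and handle each input artifact.
    for iomap in process.findall('input-output-map'):
        input_uri = iomap.find('input').get('uri')
        artifact = ET.fromstring(requests.get(input_uri, auth=AUTH).text)
        # Follow the input artifact to its source (submitted) sample.
        sample_uri = artifact.find('sample').get('uri')
        sample = ET.fromstring(requests.get(sample_uri, auth=AUTH).text)
        # Preserve the submitted name in a sample UDF, then rename.
        name = sample.find('name')
        preserved = ET.SubElement(sample, '{%s}field' % UDF_NS,
                                  {'type': 'String', 'name': 'Submitted Name'})
        preserved.text = name.text
        name.text = 'CLIN-' + sample.get('limsid')
        # PUT the modified sample XML back.
        requests.put(sample_uri, data=ET.tostring(sample), auth=AUTH,
                     headers={'Content-Type': 'application/xml'})
```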
To trigger scripts and third-party programs from the BaseSpace Clarity LIMS user interface, use command-line calls configured in step automations.
There are many ways to use automation in Clarity LIMS. Consider the following examples:
Automate sample tracking and enhance the information recorded.
Generate and attach specially-formatted text files.
Simplify data entry.
Automate the population or updating of data fields and data files.
Before using automation, become familiar with the topics discussed in this article and understand how automation functionality interacts with users, Clarity LIMS, and REST.
As of BaseSpace Clarity LIMS v5.0, several terms have been deprecated:
External Program Integration Plug-in (EPP) has been replaced with automation
Automated Informatics (AI) node has been replaced with Automation Worker (AW) node
Parameter has been replaced with token
Automation scripts are often used to automatically create and attach specially-formatted text files, such as the following:
Files containing sample lists (sometimes called instrument driver files). Users can import these driver files into control software, saving time and ensuring accurate sample processing.
Barcode or label files. These are specially-formatted files that can be supplied to barcode software systems to allow users to print out container labels.
Summary analysis results - for example, from alignment or molecular identification and quantification algorithms.
The two common applications of automation are:
To create files and attach them to process output placeholders.
To update data fields with information created during data analysis.
Scripts triggered by automation also use REST to update information directly within the REST resources.
For details, see REST Web Services and the version-specific documentation in the following sections:
Record lab work in Clarity LIMS by running steps on samples. These steps may be configured in Clarity LIMS by an administrator.
Most steps can be configured with an automation trigger that invokes an external script. The script may include fixed and variable information parameters/tokens on the command line.
NOTE: As of Clarity LIMS v5.0, the term command-line parameter has been replaced with token.
After an AI/AW node is installed, processes (in Clarity LIMS v4.2 and earlier) or steps (Clarity LIMS v5 and later) must be configured to call out to it.
This configuration is executed by the Clarity LIMS administrator:
In Clarity LIMS v4.2 and earlier, execute configuration in the Operations Interface process configuration dialog on the External Programs tab.
In Clarity LIMS v5 and later, execute configuration on the Automation configuration screen.
When configuring an automation, the following must be defined:
Name
Channel
Command line call
Trigger style and location (see also Automation Triggers and Command Line Calls)
NOTE: Only a brief summary of automation configuration is provided here. This material should be familiar from the Clarity LIMS administration training.
Scripts or third-party programs are called using the operating system command line. They must meet the following requirements:
Be callable on the command line and, preferably, be able to read and respond to command-line parameters.
Be accessible to the user account running the automation, with appropriate permissions and disk locations.
Exit with appropriate exit codes; otherwise, the automation may record that the script completed with an error.
The following diagram illustrates what happens when a user runs a step in the LIMS.
The lab scientist tracks activities by running a step in Clarity LIMS. The step is configured to display a button that invokes the configured automation script.
NOTE: As of Clarity LIMS v5.0, the term parameter has been replaced with token.
The application server creates a new step, which is much like POSTing to the processes resource of the REST API. The server resolves any parameters/tokens found in the string and sends the resolved command-line string to the automation.
In this example, two of the most common parameters/tokens used when working with automation are discussed:
{processURI:version:scheme}—This parameter/token passes the REST API URI of the step that issued the command-line string.
{outputFileN}—This parameter/token passes the LIMS ID of the specified expected output file of the step that issued the command-line string. You can use 0 (zero) for the first file, 1 (one) for the second file, etc.
For more information about the parameters/tokens available for use, refer to the articles in the following API documentation sections:
NOTE: As of API v1 r12, the version and scheme values for {processURI:version:scheme} are automatically populated, based on the REST version and protocol of the deployed server.
The automation program receives the command-line string from the application server. It may also receive other process(step)-related information, such as temporary files. The command-line string is executed by the operating system of the host computer.
The automation can work with any third-party program that supports command-line parameters/tokens. The program may simply create files or it may manipulate information directly via the REST API (steps 5 and 6).
Simple automation operation does not require anything of the REST API. If a third-party program creates files that users would like brought back into Clarity LIMS, scripts should use the outputFileN parameter/token so that the program creates files with the names expected by the client. The files are placed in a temporary local working directory and automatically imported into the client. With this method, the automation automatically handles many of the things that would otherwise need to be scripted manually using the REST API processes, artifacts, files, and glsstorage resources.
For more complicated scenarios, you may want to use automation with the REST API. This situation is where the processURI parameter/token is used. A GET request on the URI of the step that issues the command-line string provides all the information recorded by the user. This information includes links to the analytes (samples) used as inputs. The script can then use other REST API resources to create or update information.
On completion, the third-party program exits (step 7). Standard shell exit codes apply: zero (0) equals successful completion.
On exit of the third-party program, the automation software updates the application server. If the system finds files with names that match the file placeholders produced by a process/step, the files are uploaded to the file server and attached to the appropriate placeholders.
A nonzero exit code sets a flag on the step, indicating that there is an error.
With updates complete, the application server sends refresh events to the Clarity LIMS client. The user sees that files have been uploaded.
Verifying and testing scripts is an important part of working with automation. Remember that there are three software components:
The server
The automation instance (calling scripts)
The script
The best way to debug scripts is to unit test each component separately. For example, a logical order in which to work on a script is as follows:
Define and test the REST calls required in a web browser.
Define the command-line parameters / tokens sent to the automation at step completion.
Test the script running just from the command line.
Test automation calls to the script at step completion by running the step from the LIMS interface.
Before the system can use the automation, a system administrator installs one or more automation workers / AI nodes within the lab network. The installation typically occurs on the server that contains the script program, or third-party application, to be integrated.
The installer program is contained in the Automated Informatics / Automation Worker software package.
If a BaseSpace Clarity LIMS script is run in an automation context, it is easy to obfuscate usernames and passwords by choosing the appropriate tokens ({username} or {password}) to be passed in as run-time arguments.
However, this type of functionality is not easily available outside of automations, and it is often necessary to store various credentials on machines that need to interact with the LIMS API, database, or some other protected resource. This article explains how to use cryptography in Python to protect and obfuscate these important authentication tokens.
Many of the API Cookbook examples use a simple auth_tokens.py file that has usernames and passwords stored in plain text. This file can be compiled in Python, simply by importing it at a Python console:
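For example (assuming auth_tokens.py is in the current directory):

```python
>>> import auth_tokens   # byte-compiles the module, creating auth_tokens.pyc
```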
Importing this file creates an auth_tokens.pyc file—a byte-compiled version of the source file. The source file can now be deleted, providing the first rudimentary level of security. However, the credentials can still quite easily be retrieved. Even if the permissions on this file are restricted, this solution does not present a suitable level of security for most IT administrators. It does, however, allow us to easily prototype our code, hence its use in Cookbook examples.
The solution described below assumes the following:
You have pycrypto installed (either through the OS package manager or pip).
You have generated a secret key of random ASCII characters (the easiest way to do this is to button-mash on a US-layout keyboard and include a lot of symbols).
You already have a plain-text auth_tokens.py file. An example is attached at the bottom of this article.
You have access to the Python or iPython command line console.
Python provides the pycrypto library, which can easily be installed using the operating system's package manager or the pip installation tool. It contains a myriad of encryption algorithms and gives us a straightforward interface for wrapping our own encryption objects and accessor functions.
The goal is to be able to create a flat text file containing obfuscated usernames, passwords, hostnames, and so on. To do this, use a utility class called ClarityCred that provides encryption and decryption functionality using the ARC4 cipher from pycrypto. The ClarityCred class is provided in cred.py, attached at the bottom of this article.
While the use of ARC4 is considered deprecated in favor of stronger encryption algorithms, such as AES, the ARC4 example lends itself to easier understanding. ARC4 simply requires a secret key and a salt size to be specified. The secret key can be generated at random using any preferred method and is hard-coded in cred.py, along with the salt size purely for ease of demonstration. Ideally, the secret key and salt size should be stored externally.
After applying the ARC4 encryption, the ClarityCred class wraps base64 encoding around it to obfuscate the data further.
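The attached cred.py is the reference implementation; the following is only an illustrative sketch of the same idea (ARC4 from pycrypto, a random salt prepended to the key, and base64 wrapping with a '$' separator; the SECRET_KEY and SALT_SIZE values are placeholders):

```python
import base64
from Crypto import Random
from Crypto.Cipher import ARC4

SECRET_KEY = 'replace-with-your-own-random-ascii-key'   # placeholder, as discussed
SALT_SIZE = 8                                           # placeholder, as discussed

class ClarityCred(object):
    @staticmethod
    def encrypt(plain_text):
        # Prepend a random salt so identical inputs do not encrypt alike.
        salt = Random.new().read(SALT_SIZE)
        cipher = ARC4.new(salt + SECRET_KEY.encode())
        encrypted = cipher.encrypt(plain_text.encode())
        # Store salt and ciphertext together, base64-encoded, '$'-separated.
        return '%s$%s' % (base64.b64encode(salt).decode(),
                          base64.b64encode(encrypted).decode())

    @staticmethod
    def decrypt(token):
        salt_b64, encrypted_b64 = token.split('$', 1)
        cipher = ARC4.new(base64.b64decode(salt_b64) + SECRET_KEY.encode())
        return cipher.decrypt(base64.b64decode(encrypted_b64)).decode()
```

Storing the secret key and salt size in the module is, as noted, purely for ease of demonstration.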
Assume that you need to store a username, password, and hostname inside auth_tokens.py, and that this information is stored in plain text in another file called auth_tokens_plain.py. The usage is as follows.
Open a Python console, and import ClarityCred from cred.py.
Call the ClarityCred.encrypt() static function on the plain text username, password, and hostname strings.
Copy-paste these values into auth_tokens.py.
The following image illustrates steps 1 and 2, using an existing auth_tokens_plain.py file:
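In text form, the session resembles the following sketch (the output value is illustrative):

```python
>>> from cred import ClarityCred
>>> ClarityCred.encrypt('testuser')
'zq1AwnqIkfA=$YFY1UuO1r6edu7qPnN9/...'
```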
The old auth_tokens_plain.py looked like this:
username = 'testuser'
password = 'testpass'
hostname = 'https://encryptiontest.claritylims.com'
The new auth_tokens.py looks like this:
username = 'zq1AwnqIkfA=$YFY1UuO1r6edu7qPnN9/l3kMI15ZG1JAsH7IhnxnNvYulMndhYh6lxjVBfFwjN9sZEqPM0Qlx6kjq3fbht/FlRrgklDL79H7NiUP6uYM2qVltPloRA4g8SiphF3KHx4gVTE93Ku58sFCgu1rnH5u6tkCz98v0R7PsuIOW1CDMi9zSToIu+IkcYDPPYcD1b4z8ojez/7lczunaDfrmPhwopyyUiETu9BR49Bwp5fz4XSWICZFGCd9AjoEg/FTE+/X18f+0pIz0viXQyN+JjE3vJkpNsRY2Z3d72sPgQmFFZhd48m+POUtD1UXLXhaijdxp78QTcEp7AHY+TiM8hsXT7BX1Q=='
password = '9qW5BftGyXY=$6GL1t/Zl1CbSmB7Qq54uf2TJ5fI8GUlW9NdBnumkTtF/X27WLEsr1+C0ilXQX6jnLm4kzR+5pCVgnz4xz6/80/dMLMlTll6tOvCJgPU4ZkRpkUYmcPVbrp+X3azR7I024O8UjV/JeJYV869h3kvdPyWJGXRH4oJgs5NTJKI2y6URBs0wlrlgBuZ2YkO855ZGPw9J07UMM606q9xERRzQ+LT1XLRzSCuFnuSoDVEhshhYqZ/jpYWDHvA6Z5+YTYI/i099iYZ+WQdJAiU9hcgkUnWCybjcwivvHG6vAIROroLqlOefo+hrJsVFBA3uDaPS8pkgMVsKMPUGeft6vx4NgN/jaw=='
hostname = 'Q+oyq2m9Nv8=$rhgeJOMdm/M+dDNlSbBA3RCsUoo0Ts65G7lePvuajRmsLSNC5Qo5bwagRuyat0ztpeZrUmD8xTxTvhUBvZYDlM6GBLsq5drBP6PFh/lplxb6O8YiSRXrboFov8tRnu6GbaTfGR8WV7s8vBZsXhrhlPn67p7yalJLnHWb9VOKhx8AgCTtytQkkEwmpm2vbDwDha9kMdK63IrOSp2jmRaI/9X3xsd4upqaxvX7zrEJ8ruGU/szN0ITxTK1rprnowpyXfBRiOEcrI7uh1bg73oqOETn3pB/uTrGkhGETKYB2aHaewwWMccbeZTgEPT0kDmuJdpoGYy+p+gxSoR9Arh3JtREIA=='
Examples of the plain-text auth_tokens_plain.py and encrypted auth_tokens.py are attached at the bottom of this article.
Now that the new auth_tokens.py is ready to use, you can import it and create the corresponding PYC file to provide that extra level of security, as previously discussed. You can remove the PY file and ship the PYC file everywhere it is required.
It may also be a good idea to restrict the read/write/execute permissions on the file to the system user that is calling the file (usually glsai in Clarity LIMS installations).
To use the values in this file in code, use the decrypt() function in ClarityCred. Consider the simple example of initializing a glsapiutil api object. For reference, the current directory listing for this example looks like this:
Notice the .py source files are removed wherever possible.
Using a Python console, the normal api invocation (using a plain-text auth_tokens file) would look as follows.
import glsapiutil
import auth_tokens_plain

api = glsapiutil.glsapiutil2()
api.setHostname( auth_tokens_plain.hostname )
api.setVersion( 'v2' )
api.setup( auth_tokens_plain.username, auth_tokens_plain.password )
Now, however, with the encrypted tokens, the values are decrypted on-the-fly:
import glsapiutil
import auth_tokens
from cred import ClarityCred

api = glsapiutil.glsapiutil2()
api.setHostname( ClarityCred.decrypt( auth_tokens.hostname ) )
api.setVersion( 'v2' )
api.setup( ClarityCred.decrypt( auth_tokens.username ), ClarityCred.decrypt( auth_tokens.password ) )
This method provides a relatively robust solution for encrypting and obfuscating sensitive data and can be used in any Python context, not just for Clarity LIMS API initialization. By further ensuring that only the auth_tokens.pyc file is shipped and copied with restricted read/write/execute permissions, this method should help satisfy IT security requirements.
However, the matter of storing the secret key externally remains. One idea is to store the secret key in a separate file and encrypt that file using openssl or an OpenPGP key. While the problem of storing each piece of information in encrypted format likely never fully goes away, the use of multiple methods of encryption can offer better protection and peace of mind.
auth_tokens.py:
auth_tokens_plain.py:
As an API programmer, it is important to understand the difference between steps and stages. This distinction is especially important because the concept of stages is hidden from the end user. As such, when receiving requirements from end users, the term step sometimes means step, and at other times it means stage. This article highlights the differences between these two entities.
We tend to think of a protocol as being a linear collection of steps, as shown below.
Figure 1
However, this illustration is not complete, as the life cycle of a sample modeled within Clarity LIMS reflects what happens in reality: the workflow is broken into periods of activity and inactivity. If a workflow comprises three steps (A, B, and C, as shown in Figure 1), Step B does not begin at the exact time that Step A is complete.
To reflect these inactive periods, Clarity LIMS uses the concept of stages in addition to steps. A more complete representation of a workflow is shown below, with the stages occurring between the steps.
Figure 2
The following phrase simplifies this concept:
If the sample isn't active in a step, it's waiting in a stage.
NOTE:
The Clarity LIMS concept of the virtual ice bucket is another state that occurs when a sample leaves a stage, but work on the step has not started. This scenario is represented in Figure 3, with the virtual ice bucket appearing between stages and steps as the sample moves from left to right. However, virtual ice buckets are largely irrelevant to this discussion; while recognizing their existence, we discount them from further explanation.
Figure 3
Having simplified our model of a sample passing through a workflow to resemble Figure 2, we can now add the next layer of complexity.
Protocols are components of workflows. As such, it is easy to imagine two or more workflows sharing a protocol. This detail leads to the following summary:
Steps belong to protocols, whereas stages belong to workflows.
This summary means that the stages that exist between steps are part of the workflow (represented in Figure 4 below). For example, samples passing through Workflow O proceed through Step A, Stage X, Step B, Stage Y, and Step C.
Samples passing through Workflow P (which shares the protocol with Workflow O) pass through the same steps. However, samples pass through a different set of stages (Stage X' and Stage Y').
Figure 4
Looking at the counts of samples associated with steps in a protocol (for example, in the Lab View dashboard in Clarity LIMS), the number of samples awaiting a particular step is actually the total number of samples across all relevant stages that feed into the step.
For bioinformaticians and programmers who are using the Clarity LIMS API, stages have an additional function: you can route samples in ways that vary from the expected, linear route by manipulating which stages the artifacts are in. For example, using the API via a script, you can do the following:
Implement a forking workflow by assigning artifacts to one (or more) additional stages.
Create iterative (or looping) workflows by routing artifacts to an earlier stage for additional work.
When generating XML, explicitly set the document to UTF-8 character encoding.
If using other encoding methods (eg, MacRoman for OS X), special characters such as μg/ml are stored incorrectly. This could cause data integrity issues.
In Groovy, set the encoding attribute on the StreamingMarkupBuilder object as shown in the following example:
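A minimal Groovy sketch (assuming the document body is built inside bind):

```groovy
import groovy.xml.StreamingMarkupBuilder

def builder = new StreamingMarkupBuilder()
builder.encoding = 'UTF-8'   // special characters such as μg/ml are now stored correctly

def xml = builder.bind {
    // build the XML document here
}
```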
The artifacts resource includes a parent-process element that provides a URI to the process that created an artifact.
To facilitate walking up the genealogy, the processes resource exposes the parent process for an input artifact in the input-output-map of a process:
The parent-process element does not display if the parent process is not supported by the API.
The processes resource supports an inputartifactlimsid query parameter. This parameter limits the list of processes to those processes with one of the specified artifacts as an input.
Start with the initial sample.
The processes for the sample can be queried, as follows.
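For example (the artifact LIMS ID shown is hypothetical):

https://<your_hostname>/api/v2/processes?inputartifactlimsid=EXA101A1PA1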
The process contains an input-output-map for the input artifact.
The steps can then be repeated using the LIMS ID of each output artifact that is associated with the input artifact.
The information recorded in BaseSpace Clarity LIMS is organized into resources within the REST API. Each resource refers to an XML schema associated with a namespace. Before working with the REST Web Service, understand how the information recorded in Clarity LIMS translates to the REST resources.
The following diagram highlights the major REST resources. Each resource is discussed further in the following sections.
Samples are the objects that are entered into the LIMS before processing begins. Every sample belongs to a single project and has a related analyte (sample) artifact. Every project must have an associated researcher.
When you add a sample to the system, it is classified as a submitted sample. This allows the original samples, and any related data, to remain separate and distinct, even as processing and aliquoting occurs. Every sample or file created by running a step from the LIMS user interface can be traced back to a submitted sample.
In Clarity LIMS, processes (known as steps in the user interface) are run on analyte (derived sample) artifacts. Samples must always be in containers.
Clarity LIMS v4.x and earlier: In the Clarity LIMS Operations Interface processes are run on analyte (sample) or result file artifacts. Samples must always be in containers.
As of BaseSpace Clarity LIMS v5, the Operations Interface Java client, used by administrators to configure processes, consumables, user-defined fields, and users, has been deprecated. All configuration and administration tasks are now executed in the Clarity LIMS web interface.
To understand how API terminology maps to terminology used in the Clarity LIMS v5 interface, see Understanding API Terminology (LIMS v5 and later).
Within the REST Web Service, the samples resource is key.
The samples resource represents submitted samples and contains information about those samples, including:
The dates samples are entered and received.
Any user-defined data related to the samples.
When a sample is added to the LIMS, the system also creates an artifact (see #artifacts).
While the artifact associated with a submitted sample is only seen at the database or REST level, and is never exposed in the LIMS interface, the system uses this artifact when running protocol steps.
When running a step on a submitted sample, the artifact is used as an input to the step, and not the submitted sample itself. All artifacts reside within the artifacts resource.
When a submitted sample is processed, the system generates output artifacts. Depending on the configuration of the process, many types of artifacts - including result files - can be generated. Any downstream sample created by running a process is considered an analyte artifact (referred to as a derived sample in the user interface).
Projects are used to group samples based on the originating lab (account) or study. A project collects all records related to its samples in the LIMS.
A project stores information about:
The client (researcher) who owns it
Significant dates
The status of the project
Any user-defined information that the lab needs to collect
After creating a project, you can add samples to it. Samples can then be added to workflows, and steps (processes) are run on those samples to reflect the analysis performed in the lab.
In the REST Web Service, a submitted sample can only belong to one project. You can use the projects resource to retrieve projects.
Note the following details regarding projects:
Every submitted sample must belong to a project.
Every project must be assigned to a researcher (an owner) that corresponds to a client in the system.
NOTE: In the LIMS user interface, the term Contact has been replaced with Client. However, the API permission is still called contact.
The researchers resource represents clients in the system.
When working with projects, each project must list a client as the owner of the project. This role generally represents the person who submitted the original samples.
The client does not need to have a user account.
In Clarity LIMS v5, the API still uses the term process. However, in the user interface, this term has been replaced with master step. Also, the Operations Interface has been deprecated.
Clarity LIMS v5 and later—Created in the Clarity LIMS web interface, master steps model and track the work performed on the samples in the lab. These master steps are then used as building blocks to create and configure steps. These steps are known as processes in the API.
Different interfaces may allow you to run steps/processes on different artifacts.
In the API view, a process takes in one or many analytes and/or result files and creates one or many analytes and/or result files.
When running a step in Clarity LIMS, lab scientists record information about the step, the instruments used, and the properties and characteristics of the samples.
Depending on the configuration of the process/master step on which it is based, the step can generate another sample analyte and/or placeholders to which result files can be attached for storage in the system.
With the REST Web Service, the processes resource is used to track these activities.
Note the following details regarding processes:
Processes are used to represent work that occurs in the lab or in silico.
Processes take inputs and create outputs. With the REST processes resource, this is modeled using the input-output-map element.
In addition to tracking historical work via the processes resource in the REST Web Service, use the service to POST new processes to the system.
POSTing a process to the REST Web Service creates the process itself, along with the outputs of the process. However, all the input and output containers must exist in the system already.
For a simple example of the XML required to POST a process, see the processes (list) section of the REST resources space.
For basic details about POSTing processes, see Working with Processes/Steps in the Cookbook section.
For examples of process POSTing, see Pooling Samples with Reagent Labels and Demultiplexing in the Cookbook section.
To find out how to integrate automation with process POSTing to set quality control flags, see the Setting Quality Control Flags application example.
Query the processes resource using input artifact LIMS IDs. This query allows you to find the processes that were run at each step in the workflow or on each artifact generated during processing.
All inputs and outputs of a process are artifacts, and can be returned via the artifacts resource.
Note the following details about artifacts:
An artifact is a derivative of a sample and is used as an input to a process.
An artifact may be a sample analyte or a result file.
The artifacts resource includes artifacts for the submitted sample and all process outputs, both file- and sample-based.
Artifacts are categorized by type, to distinguish between pure information results (file-based artifacts, such as result files) and the biological material created by processing the sample (analyte artifacts).
In Clarity LIMS, the term artifact is used to describe items needing to be processed. Think about artifacts as the intellectual property added by the lab.
For example, applying reagents to change the nature of a sample creates an artifact, as does generating and analyzing data files by running a sample on a NextGen or microarray instrument.
Anything created by a process in the system is an artifact. In the REST Web Service, there are several types of artifacts, but this article focuses on two:
Computer-generated files called result files
Physical sample derivatives called analytes.
The high-level relationship between artifacts, analytes, and result files is shown in the following diagram.
An artifact references data elements, which vary depending on the type of artifact you are working with. For example, a result file has an attached-to URI that links to a files resource, whereas an analyte has a location URI that links to the containers resource.
Artifacts are key to tracking lab process activities and also link to a submitted sample.
All artifacts include one or more sample URI data elements, which make it easy to trace any lab-generated product or result directly back to its original sample.
When working with artifacts in the REST API, their URIs often include a numeric state. The state is used to track historical QC, volume, and concentration values.
Unless you are interested in a historical state, it is best practice not to include state when using an artifact URI. When state is omitted, the API defaults to the most recent state.
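For example (hypothetical LIMS ID and state), https://<your_hostname>/api/v2/artifacts/2-1234?state=559 returns the artifact as recorded at state 559, whereas https://<your_hostname>/api/v2/artifacts/2-1234 returns the most recent state.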
When samples are processed in the lab, they are always placed into containers of some sort (tubes, 96-well plates, flow cells, etc.) and moved into new containers as processing occurs.
For many kinds of processing, the container placement is a critical piece of information. Further processing of the sample, and data files created by analyzing the sample, are often linked based on the placement of the sample in the container.
Containers are central to processing in the lab. In Clarity LIMS, therefore, the samples (analyte artifacts) must also always be placed into a container resource.
When working with the REST Web Service, analyte artifacts include a URI that links to the container housing the artifact. Use the containers resource to view all the containers registered in the system.
Details on finding contents of a container can be found in the Cookbook.
Note the following details about containers:
Containers represent the tubes, plates, flow cells, and other vessels that can be populated with a sample.
All samples/analytes must reside in a container or they will not be visible in the LIMS client.
All containers include a name and a LIMS ID.
The name is a text element over which the scientific programmer has full control.
The LIMS ID is a unique identifier generated by the system in a fixed format.
The name, LIMS ID, and any container-level #udfs provide various options for container labeling.
For assistance, the Illumina Consulting team can recommend various settings, such as uniqueness constraints, based on your requirements.
A lab produces various files: large scientific result data files, summary result files, image files, label files, equipment and robotic setup files, and software logs.
These files are stored in different locations and it can be challenging to manage the relationship between a file on a computer or hard disk and the sample, step, or project with which it is associated.
Clarity LIMS lets you store files related to a project or sample and files generated during a step in a workflow. These files can be imported in various locations within the client and are stored on the file server.
To model this feature within the REST Web Service, there are two resources:
files resource
glsstorage resource
Within the REST Web Service, files are represented by the files resource. This resource manages files and the resources or artifacts to which they are related, and stores information about:
The sample, project, or process output with which the file is associated, referenced by the attached-to URI.
Where the file was imported from, and its original name, referenced by the original-location URI.
The location of the file, referenced by the content-location URI. It also specifies the transfer protocol that can be used to retrieve the file. The following transfer protocols are supported:
FTP
SFTP
HTTP
If you are using REST to view a file that was added through the LIMS client, the content-location URI will reference a location on the file server. This location is where the system stores all files that are imported through the Clarity LIMS client.
If you are using REST to import a file into the system, do one of the following:
Store the file on the file server:
Use the glsstorage resource to create a unique storage location and file name on the file server.
After this step is complete, the system returns a location and file name using the content-location URI element.
Then do as follows.
Provide the URI to the files resource.
Put the file in the specified location.
Store the file somewhere other than on the file server:
Use the files resource and reference the name and location of your file with the content-location URI element.
This feature must be configured by Illumina. For more information, contact the Illumina Support team.
Note the following key concepts:
Files: The files resource defines the location of a file and its relationship with other REST resources, such as artifacts and projects.
Glsstorage: The glsstorage resource allocates space on the file server.
XML elements: Within the XML used by the files and glsstorage resources, the attached-to and content-location URIs are used to link disk files to file-based artifacts produced by a process, or to link disk files to projects or samples.
The following diagram outlines how the XML elements link files to system resources and artifacts:
In the lab, one of the most important associations that must be made is between:
A file that is the result of an instrument run
- and -
The sample that was analyzed to produce that file.
In Clarity LIMS, this association is represented by creating a process that takes a sample analyte and produces a result file.
When you run a process configured to create a result file, the process generates a placeholder for a file. To populate the placeholder, simply import the result file generated by the instrument into Clarity LIMS.
While working in the lab, lab scientists can upload result files that are used or produced while samples are processed. However, it may sometimes be more appropriate to automate this work. In these cases, you can use the REST files and the glsstorage resource.
Depending on the file storage needs and how the files are generated, there are two ways to do this process.
Import a file and store it on the file server.
Import a file and store it on a different server.
Import a result file and store it on the file server:
POST conforming XML to the glsstorage resource.
This action returns XML that includes a name and storage location for the file.
Place the file into the specified location using the file name provided in the XML.
POST the returned XML to the files resource, which links the file on disk to the result file placeholder.
Import a result file and store it on a different server:
Make sure that the file exists in the desired location.
POST conforming XML to the files resource, referencing the name and location of your file with the content-location element. The file path must contain the transfer protocol supported by the server. For example: sftp://192.168.13.247/home/glsftp/Process/2010/10/SCH-RAA-101013-87-1/ADM53A1PS3-40-1.dat
NOTE: It is not necessary to POST to the glsstorage resource.
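A sketch of the XML body for such a POST is shown below (hedged: the element names follow the files resource described above, and the placeholder artifact LIMS ID, URIs, and file names are illustrative):

```xml
<file:file xmlns:file="http://genologics.com/ri/file">
    <attached-to>https://your_server/api/v2/artifacts/92-1234</attached-to>
    <content-location>sftp://192.168.13.247/home/glsftp/Process/2010/10/SCH-RAA-101013-87-1/ADM53A1PS3-40-1.dat</content-location>
    <original-location>ADM53A1PS3-40-1.dat</original-location>
</file:file>
```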
If you have files that were not generated during the analysis of a sample, you can also attach reference information to projects and samples.
For example, if you receive an e-mail when a sample is submitted to the lab, you may want to store that message in the LIMS. In this case, when you POST XML to the files resource, the XML links the file to the desired submitted sample, instead of to a result file placeholder.
Clarity LIMS v5.x and later:
In Clarity LIMS, the file is attached to the Sample Details section of the Sample Management screen.
On the Projects and Samples screen, select the project containing the sample for which you have posted a result file.
Scroll down to the Samples and Workflow Assignment section of the screen and select the appropriate sample.
Select Modify 1 Sample.
On the Sample Management screen, scroll to the bottom of the Sample Details section to find the attached file.
Before Clarity LIMS v5:
In the Clarity LIMS web interface, the file is attached to the Sample Details section of the Sample Management screen.
For details on accessing the file, see the previous content on Clarity LIMS v5.x and later.
In the Clarity LIMS Operations Interface, the file is attached to the Files tabbed page of the applicable submitted sample.
In the Clarity LIMS Explorer, click Opened Projects.
In the Opened Projects list, double-click the project containing the sample for which you have posted a result file.
On the project details page, click the Samples tab.
At the bottom of the tab, in the Containers pane, double-click the appropriate sample.
On the sample details page, click the Files tab to find the attached file.
The REST Web Service separates the resources needed for files and file storage.
This separation allows for greater control and the flexibility to apply various tracking and storage strategies. The content-location element can be used to define the file location without having to move the file. This ability is key in next-generation sequencing, which requires the management of large files, such as assemblies.
The content-location element needs to reference the file location in a storage system using a specific file transfer protocol. Currently only FTP, SFTP, and HTTP protocols are supported.
This mechanism makes file management flexible, but it maintains access to the file from REST with a single link. However, this feature must be configured by Illumina. For more information, contact the Support team.
Note the following key concepts about UDFs and UDTs:
UDFs and UDTs are configured to collect information that is important to the lab.
With the REST Web Service, you can include UDF and UDT values in the XML representation of any individual resource that has a UDF or UDT defined.
Not all artifacts have both UDFs and UDTs.
In Clarity LIMS v5 and later:
The API still uses the term udf. However, in the user interface, this term has been replaced with custom field.
UDTs are not supported.
You can configure the system to collect user-defined information. Consider the following examples:
You can create UDFs to add options and fields to the user interface when working with samples, containers, artifacts, processes/protocol steps, and projects.
You can also create User-Defined Types (UDTs), which are organized subsets of related UDFs. As you add and process samples, you can add information to these options and fields.
In the following example, UDFs are added to submitted samples, processes, and sample analytes (derived samples).
For the submitted sample named Goo, there are UDFs named Type, Color, and Source.
For the Prepare Goo process/step, there are UDFs named Reagent Lot ID, Temperature, and Cycle Time.
The output of the Prepare Goo process/step is an analyte named Prepared Goo, which contains UDFs named Quality and Category.
Any downstream sample created by running a process is considered an analyte artifact. In the Clarity LIMS interface, analyte artifacts are referred to as derived samples. For more information, see #samples, #artifacts and #processes.
To record information for these UDFs:
Add Goo to the system and populate the sample-level UDFs.
In Clarity LIMS, run the Prepare Goo step and complete the following actions:
Populate the Step Details fields (the process-level UDFs).
Populate the Sample Details table (the analyte-level UDFs).
You can also use the REST Web Service to collect user-defined information for samples, containers, artifacts, and processes.
After you have configured UDFs, the XML of the appropriate resource expands with data elements for the field values. For example, the Prepared sample of goo artifact would have the following XML:
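The XML sample itself is not reproduced here; hedged on the configured field types and values, the relevant portion would resemble the following:

```xml
<art:artifact xmlns:art="http://genologics.com/ri/artifact"
              xmlns:udf="http://genologics.com/ri/userdefined">
    <name>Prepared Goo</name>
    <udf:field type="String" name="Quality">Good</udf:field>
    <udf:field type="String" name="Category">Sample Prep</udf:field>
</art:artifact>
```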
UDFs/custom fields are useful for collecting data at various stages of a workflow. In next-generation sequencing, it is important to record information such as who submitted a sample, the tested concentration of a library, and the reagents that were used during library prep.
As illustrated in the previous Goo example, collect this information by adding UDFs/custom fields for the samples, artifacts, and processes resources:
A submitted sample UDF / custom field named 'Type'
An artifact-level UDF / derived sample custom field named 'Validated Concentration'
A process-level UDF / master step field named 'Reagent Name'
Artifact UDFs/custom fields are flexible.
You can configure different sets of UDFs / custom fields for the analyte artifact type and the result file artifact type.
You can configure different sets of UDFs / custom fields based on the process type.
This flexibility means that:
Process type / master step A can display fields 'm' and 'n' on a result file, and fields 'q' and 'r' on an analyte
Process type / master step B can display fields 'm' and 'o' on a result file, and fields 'q' and 's' on its output analyte.
Control how users access artifact-level UDFs/custom fields by configuring the type of artifact or process type/master step to which they apply.
Not every detail tracked and recorded needs a UDF. To optimize lab efficiency, it is recommended that you define an essential UDF set.
Increasing the complexity of information collected and managed does not necessarily improve operations or scientific quality. It may be more effective to store files, because the complete details are then available and secure within the attached file.
When configuring automations in BaseSpace Clarity LIMS, copy tokens from the Tokens list and paste them into the Command Line field.
These tokens are available for use in derived sample automations. If using multiple variables, add a space between each entry. All tokens and parameters are case-sensitive.
Token | Purpose | Example |
---|---|---|
When configuring automations in BaseSpace Clarity LIMS, copy tokens from the Tokens list and paste them into the Command Line field. These tokens are available for use in step automations. If using multiple variables, add a space between each entry. All tokens and parameters are case-sensitive.
Token | Purpose | Example |
---|---|---|
When configuring automations in BaseSpace Clarity LIMS, copy tokens from the Tokens list and paste them into the Command Line field.
These tokens are available for use in project automations. If using multiple variables, add a space between each entry. All tokens and parameters are case-sensitive.
Token | Purpose | Example |
---|---|---|
Use the setExitStatus.py Python script, attached to this page, to test and simulate the use of the automation triggers within Clarity LIMS.
The setExitStatus.py script is designed to illustrate concepts for API training purposes. Do not use it in a production environment.
The setExitStatus.py script relies on the presence of the glsapiutilv2.py script. Typically, both scripts are located in the same directory.
The setExitStatus.py script uses the following command-line parameters:
An example of a parameter string that invokes this script from Clarity LIMS is provided below. Note the use of the stepURI token in the -l parameter.
python /opt/gls/clarity/customextensions/setExitStatus.py -l {stepURI:v2:http} \
-u {username} -p {password} -s "OK" -m "successful"
glsapiutilv2.py:
setExitStatus.py.txt:
The latest glsapiutil (and glsapiutil3) Python libraries can be found on the GitHub page.
How to copy the value of a UDF/custom field from source to destination (typically from the inputs of a process/step to the outputs) is a frequently asked question.
For example, suppose a process/step takes in libraries and tracks their normalization. The input samples have a UDF/custom field that is used to track the library ID. Because this ID still applies after normalization, it is desirable for the output samples to carry it as well.
Use the API to gather the XML for the inputs, then copy the XML node relating to the UDF/custom field to the outputs.
Alternatively, use the out-of-the-box copyUDFs script, which Illumina provides as part of the NextGen Sequencing configuration.
The copyUDFs script is available in the ngs-extensions.jar archive*, and can be called from the EPP / automation parameter string.
The archive file may be named differently, depending upon the version you are running.
Usage:
The UDF / custom field values to be copied are defined in the -f portion of the syntax. These values must be present on both the inputs and outputs of a process.
For example, suppose you wanted to use this script to copy the value of a UDF called Library ID:
The Library ID field must be defined on both inputs and outputs.
The -f flag is defined as follows:
To copy multiple UDF values from source to destination, list them in comma-separated form as part of the -f flag.
To copy Library ID and Organism from source to destination, use the following example:
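For example (a sketch of the -f portion only; the rest of the command line follows your existing copyUDFs configuration):

-f "Library ID,Organism"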
This section discusses methods for integrating BaseSpace Clarity LIMS with upstream sample accessioning systems.
The following illustration shows a typical architectural overview:
Required:
A sample must have a Name / ID
A sample must be associated with a Case / Patient / Study / Project
A sample must be associated with a Container (Tube / Plate etc)
Optional (but expected):
User-defined fields (UDFs)/custom fields (defined by your LIMS configuration)
Typical flowchart of actions within the broker:
The following animation illustrates the elements of an XML sample-creation message to Clarity LIMS.
Build your own:
Pro: Not too difficult
Con: Stability as number of messages increases
?: Maintainable over the long-term
Use a commercial / open-source offering (e.g. Mirth Connect)
Pro: Quicker than build
Pro: Robust, multi-threaded support for millions of messages per day
?: May prove to be an excessive or over-complicated means to accomplish something relatively simple
Does the broker need to carry out other business logic?
For example, one customer added logic to their broker that dealt with medical billing and was able to distinguish between physicians ordering duplicate tests for a subject (not reimbursable, therefore the duplicate sample wasn’t submitted to Clarity LIMS), versus a temporal study that was reimbursable.
The best practice is to take advantage of as much legacy-system logic as possible, rather than creating samples in Clarity LIMS and then reinventing business logic to remove unwanted ones.
This article explains how to make files that were produced by, or attached to, the LIMS in an earlier step visible in a subsequent step.
Consider a simplified workflow / protocol containing just two steps: Produce Files and Display Files.
The first step, Produce Files, will take in analytes (derived samples), and generate individual result files (one per input).
The subsequent Display Files step will allow us to view the files associated with the analytes from the previous step.
After the files have been generated by and attached to the Produce Files step, the Record Details screen of the step displays the files.
The key to displaying these files in any subsequent step involves producing a hyperlink to the file and displaying it as a user-defined field (UDF)/custom field in subsequent steps.
You may be familiar with creating and using text, numeric, and checkbox UDFs/custom fields. However, you may be less familiar with the hyperlink option. Fields of this type are used less frequently, but they are perfect for this solution.
NOTE: As of Clarity LIMS v5.0, the term user-defined field (UDF) has been replaced with custom field in the user interface. However, the API resource is still called UDF.
This solution involves a script that runs on the Record Details screen on the subsequent Display Files step and populates the fields. See the following figure.
As you can see, the structure of the hyperlink is straightforward and includes:
The IP address / hostname of the server.
The port.
A link to the LIMS ID of the file to be linked to.
To populate these fields, there are numerous methods available within an API-based script. The method discussed here works for the two-step protocol described earlier (namely that we want the files displayed in the next step of the protocol). It also works when the steps in which the files are uploaded and displayed are separated by several intermediate steps.
Assuming that the script runs just as the Record Details screen of the Display Files step is displayed, the following pseudocode produces the hyperlinks.
For each output:
Determine the LIMS Unique ID (LUID) of the output artifact.
Determine the LUID of the submitted sample associated with the output artifact.
Determine the LUID of the resultfile artifact produced by the earlier process, derived from the common submitted sample.
Determine the LUID of the file associated with the resultfile artifact.
Update the hyperlink UDF / custom field on the output artifact (from step 1) with the specific hyperlink value.
To illustrate these pseudocode steps, XML from a demo system is provided.
From the XML representation of the Display Files process/step, we see that there are three output artifact LUIDs: 2-81806, 2-81805, and 2-81804.
By examining the XML representation of the first output artifact (2-81806), we see the LUID of the associated submitted sample is ADM1301A2:
After the common ancestor is found, ask Clarity LIMS for the output artifacts produced by our step of interest (Produce Files) directly.
For example, the following query combines the submitted sample LUID found above with the process type of interest:
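https://<your_hostname>/api/v2/artifacts?samplelimsid=ADM1301A2&process-type=Produce%20Files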
This yields the following XML:
The resultfile with LUID 92-81803 is associated with the current output artifact (2-81806), even though these entities may be separated by several steps.
If the process/step produces multiple resultfiles, you may need to further constrain the search using the name= parameter. For example:
Gathering the XML representation of artifact 92-81803 shows that the associated file has LUID 40-3652:
Now that you know the LUID of the file that is associated (via resultfile artifact 92-81803) with output artifact 2-81806, set the value of its hyperlink field in the following form:
When constructing the value for the hyperlink, the 40- prefix should be removed from the LUID of the file.
Within a script, you may sometimes need to know to which workflow the current sample is assigned.
However, in Clarity LIMS, the XML payload that relates to the sample does not provide information about the workflow associations of the sample.
For example, consider a sample (artifact), picked at random, from a demo system:
It is evident that this XML payload does not provide the workflow information.
This following solution shows how to use the Clarity LIMS API to determine the association of a sample to one or more workflows.
The XML payload that corresponds to each sample artifact contains a link to the related submitted sample (or samples, if it is a pooled artifact).
Follow that link to see what it yields:
The XML corresponding to the submitted sample has a link to an artifact. This artifact is special for several reasons:
It is known as the 'root artifact'.
It has an unusual LIMS ID for an artifact. Artifact LIMS IDs normally start with '2-' for analytes and '92-' for result files; this one appears to be derived from the LIMS ID of the sample: KUZ407A145PA1
A root artifact is created 'behind the scenes' whenever a submitted sample is created in the system.
The sample history in Clarity LIMS makes it appear as if the first step in the workflow is run on the submitted sample. However, it is actually the root artifact that is the input to the first process.
When a submitted sample is assigned to the workflow, it is the root artifact that is assigned to that workflow.
Therefore, if you gather the XML payload corresponding to the root artifact, you should see the workflow assignment:
The key element is as follows.
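In this example it resembles the following (the artifact-group URI is illustrative):

```xml
<artifact-group name="Sanger Sequencing" uri="https://<your_hostname>/api/v2/artifactgroups/151"/>
```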
The name of the artifact-group (Sanger Sequencing) should match the name of the workflow in which the root artifact (and by inference, artifacts derived from the root artifact) is assigned.
If you find that the artifact-group node is missing from some of the root artifacts, there are several potential reasons:
The workflow has been completed, causing the root artifact to be unassigned from the workflow.
The derived samples / artifacts have been removed from the workflow intentionally, because of a sample processing issue.
An API script has intentionally removed the derived samples / artifacts from the workflow.
The assigned workflow has been marked as 'Archived'.
This section outlines several strategies to enable this feature.
In all cases, assume that a UDF called Batch ID was recorded on Step A, and that you want to access it on Step D:
NOTE: If the samples in Step D do not have a homogeneous lineage, expect multiple values for the Batch ID.
This method involves crawling backwards from Step D to Step A.
The general form is as follows.
Examine the inputs to Step D.
Each input (I) has a parent-process element with a URI to the step that created the artifact. In this case, it is the URI to Step C.
Get the input-output maps for Step C (from the /details resource) and find the input (I') that produced output I. Each input (I') has a parent-process element with a URI to the step that created the artifact. In this case, it is the URI to Step B.
Get the input-output maps for Step B (from the /details resource) and find the input (I'') that produced the output I'. Each input (I'') has a parent-process element with a URI to the step that created the artifact. In this case, it is the URI to Step A.
Get the value of the UDF (Batch ID) from Step A: 1234.
This method is computationally slow, but it is safe. As the number of steps that need to be crawled back through increases, so does the duration of the script to retrieve the value.
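A Python sketch of this crawl follows (hedged: the credentials and LIMS ID are illustrative, and for brevity it follows the first input at each hop; a production script should match each input I to its specific output, as described above):

```python
import requests
from xml.etree import ElementTree as ET

AUTH = ('apiuser', 'apipassword')   # illustrative credentials
UDF_NS = 'http://genologics.com/ri/userdefined'

def parent_process_uri(artifact_uri):
    """Return the URI of the process that created this artifact, or None."""
    artifact = ET.fromstring(requests.get(artifact_uri, auth=AUTH).text)
    parent = artifact.find('parent-process')
    return parent.get('uri') if parent is not None else None

def crawl_for_udf(artifact_uri, step_name, udf_name):
    """Walk parent-process links back until the named step is found,
    then return the value of its step-level UDF."""
    uri = parent_process_uri(artifact_uri)
    while uri is not None:
        process = ET.fromstring(requests.get(uri, auth=AUTH).text)
        if process.find('type').text == step_name:
            field = process.find("{%s}field[@name='%s']" % (UDF_NS, udf_name))
            return field.text if field is not None else None
        # Step back one generation via the process inputs.
        input_uri = process.find('input-output-map/input').get('uri')
        uri = parent_process_uri(input_uri)
    return None

# Starting from an input artifact of Step D (hypothetical LIMS ID):
print(crawl_for_udf('https://your_server/api/v2/artifacts/2-1234',
                    'Step A', 'Batch ID'))
```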
This method tries to jump straight to Step A, without passing through Steps B and C.
The general form is as follows.
Examine the inputs to Step D. Each input (I) has a sample element that contains the limsid (S) of the related submitted sample.
https://<your_hostname>/api/v2/artifacts?samplelimsid=S&process-type=Step%20A
This query should give an XML response containing the URI to Step A. From there, get the value of the UDF (Batch ID): 1234.
This method makes two assumptions:
That Step A produces analytes (derived samples). Thus, if Step A is a QC process, or does not produce analyte outputs, this method fails.
That the analytes (derived samples) resulting from S only passed through Step A one time. If this assumption is not true, you receive multiple URIs to the individual instances of Step A that relate. Also, you cannot be certain which Batch ID to rely upon.
This method is computationally fast, and its duration is not reduced if there are many steps between Step A and Step D.
This method works well, but it involves making configuration changes to the steps. As such, this method is useless for legacy data resulting from samples that passed through the steps before the configuration was applied.
Its general form involves:
In Step A: Add a script that copies the value of the Batch ID UDF (1234) to every input and output of type analyte in the step.
In Step B: Add a script that copies the value of the Batch ID UDF (1234) to every output of type analyte in the step.
In Step C: Add a script that copies the value of the Batch ID UDF (1234) to every output of type analyte in the step.
In Step D: The inputs contain the value of the Batch ID.
This method relies on propagating the Step UDF through Steps A, B, and C to Step D. It is safe and fast. However, if the protocol is edited and a new step is inserted between B and C, the propagation script must also be added to the new step so that the chain does not break. This method remains safe even if any of the steps are QC steps or do not produce analyte outputs.
This method is a niche solution, but it works well. It assumes that the samples from Step A proceed to Step D as an intact group, and they are joined by a control sample.
This method involves making configuration changes to the steps. As such, this method is useless for legacy data resulting from samples that passed through the steps before the configuration was applied.
In Step A: Identify the control sample for the group, then copy the value of the Batch ID to the control sample.
In Step D: Identify the control sample for the group, then retrieve the value of the Batch ID from it.
This method is the least work, but it does make several assumptions that might make it impracticable.
Use the API to update the preset value of a user-defined field (UDF)/custom field configured on a step.
From your test server:
GET a chosen UDF/custom field.
Do a PUT and include a new line.
For example, to add 'My new preset', insert the new preset after the last value in your XML:
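Hedged on the surrounding field-configuration XML, the preset list would then resemble:

```xml
<preset>Existing value 1</preset>
<preset>Existing value 2</preset>
<preset>My new preset</preset>
```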
This tool is powerful when integrating with external systems and combined with the Begin Work trigger. For example, it can be used to reach out to an external source with a script, initiated with the Begin Work trigger. The script makes sure that the presets for the Step Details UDFs/custom fields are always up to date and in sync with the server—before entering the Record Details screen.
When running the Aggregate QC step in Clarity LIMS, the QC pass and fail flags for the samples display in the Record Details screen.
This section explains how to use the API instead to find the samples that passed or failed QC aggregation.
Query the API and filter the results list based on the qc-flag parameter value. For more information about filtering, see the REST API documentation.
To filter the list by QC flag with a value of PASSED, use the following example:
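https://<your_hostname>/api/v2/artifacts?qc-flag=PASSED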
To find an individual QC flag result for an individual sample, use the LIMS ID of the sample:
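https://<your_hostname>/api/v2/artifacts/2-1234 (the LIMS ID shown is hypothetical)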
Then search for the value of the <qc-flag> element in the XML payload returned for the artifact.
The <qc-flag> element of the input analyte (sample) artifact is sent into the Aggregate QC step.
To demonstrate this detail, review the following steps:
In the API, find a single analyte artifact (derived sample) that has passed QC. The XML QC flag value is PASSED.
In Clarity LIMS, find the same sample and change the value of its QC flag from passed to failed. Save the change.
In the API, find the sample again. See that the XML QC flag value is set to FAILED.
When a sequencing run is complete, it is often desirable to pass data to CASAVA for BCL conversion automatically rather than manually. This section proposes a method to configure this automation.
NOTE: This solution is not tested end-to-end on an instrument.
The proposed approach involves adding an automation trigger to the Sequencing step, such that it invokes a script that launches the BCL Conversion step.
However, because the BCL Conversion step must not run immediately, it is launched in a dormant state and remains dormant until the Sequencing step is complete.
The key event here is the Run Report that is created and attached to the Sequencing step. As the last event to occur in the step, the creation of this report is used to prompt the BCL Conversion step to 'wake up' from its dormant state and begin processing.
The following pseudocode describes the work that must occur within the script:
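A sketch of the flow, based on the behavior described above:

```
1. Parse the command-line parameters (username, password, processURI, and the
   LIMS ID of the Run Report, passed as -r).
2. Poll the API until the Run Report placeholder has an actual file attached.
   This signals that the Sequencing step is complete.
3. Build the process-execution XML for the BCL Conversion step, using the Run
   Report artifact as the input and 'Standard' as the process parameter.
4. POST the XML to the /processes resource and check the response.
```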
This solution requires a script that launches the BCL Conversion step via the API; the creation of such a script is covered separately. This example covers only the required functionality of the script, rather than its code.
In addition to the expected processURI, username, and password parameters/tokens, the script should accept another parameter (the LIMSID of the Run Report from the Sequencing step).
For example, the script can be invoked as follows:
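For instance (the script name, path, and option letters are illustrative; {compoundOutputFileLuid0} assumes the Run Report is the first shared output file of the Sequencing step):

```bash
bash -c "groovy /opt/gls/clarity/customextensions/launchBclConversion.groovy \
  -u {username} -p {password} -i {processURI:v2} -r {compoundOutputFileLuid0}"
```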
Use this syntax when configuring the command line on the Sequencing process/step.
Configure the automation so the script is automatically triggered when exiting the Record Details screen.
The BCL Conversion process is configured as follows:
To take in a ResultFile input and generate a non-shared ResultFile output
With a process parameter of 'Standard,' which initiates the actual BCL conversion.
The script is passed the value '92-3771' as the -r parameter.
This value is then converted to a full URI and becomes the input element of the following XML, which is POSTed to the /processes API resource:
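A sketch of the POST payload, assembled from the process-execution elements described later in this document; the hostname, researcher URI, and process type name are illustrative:

```xml
<prx:process xmlns:prx="http://genologics.com/ri/processexecution">
  <type>BCL Conversion</type>
  <technician uri="https://yourServerNameOrIP/api/v2/researchers/1"/>
  <input-output-map>
    <input uri="https://yourServerNameOrIP/api/v2/artifacts/92-3771"/>
    <output type="ResultFile"/>
  </input-output-map>
  <process-parameter name="Standard"/>
</prx:process>
```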
Update all URIs in the XML to point to the hostname and API version for the system.
Provide a valid URI for the lab scientist (for example, a user with LIMS ID '1').
If the POST is successful, the API returns the valid XML for the created process.
Note: This scenario is one of the few occasions where the POST succeeds yet returns XML that differs from the input XML. The results can be confusing, because a standard approach for validating a POST is to compare the returned XML with the input XML and assume that the POST failed if they differ. In this scenario, however, the POST did not fail.
| Process type / Master step | Result file field exposed | Analyte field exposed |
| --- | --- | --- |
| A | m, n | q, r |
| B | m, o | q, s |
{username}
Supplies the username of the current user running the step to the triggered automation script.
cmd /c "C:\ai\ai.bat {username}"
resolves to:
cmd /c C:\ai\ai.bat adminuser
{password}
Supplies the password of the current user running the step to the triggered automation script.
cmd /c "C:\ai\ai.bat {password}"
resolves to:
cmd /c C:\ai\ai.bat 3BlindMice
In log files, the password supplied on the command line is replaced with a series of *** characters.
{baseURI}
Supplies the base API URI to the triggered automation script.
cmd /c "C:\ai\ai.bat {baseURI}"
resolves to:
cmd /c C:\ai\ai.bat https://lims.lan.29/api
{derivedSampleLuids}
Supplies the derived sample LIMS IDs to the triggered automation script.
cmd /c "C:\ai\ai.bat {derivedSampleLuids}"
resolves to:
cmd /c C:\ai\ai.bat 2-1641 2-1642 2-1643
{userinput:customParameterName}
Allows user-supplied data to be passed to the triggered automation script. Custom parameters are identified with the prefix 'userinput:'.
The following command line requires the user to input a value for 'more_yield':
yieldscript.sh -y {userinput:more_yield} -u {username}
{username}
Supplies the username of the current user running the step to the triggered automation script
cmd /c "C:\ai\ai.bat {username}"
resolves to:
cmd /c C:\ai\ai.bat adminuser
{password}
Supplies the password of the current user running the step to the triggered automation script.
cmd /c "C:\ai\ai.bat {password}"
resolves to:
cmd /c C:\ai\ai.bat 3BlindMice
In log files, the password supplied on the command line is replaced with a series of *** characters.
{baseURI}
Supplies the base API URI to the triggered automation script.
cmd /c "C:\ai\ai.bat {baseURI}"
resolves to:
cmd /c C:\ai\ai.bat https://lims.lan.29/api
{stepURI}
Supplies the URI of the step to the triggered automation script. Include the version parameter (ie, {stepURI:version}) to specify the version of the REST API to be accessed.
cmd /c "C:\ai\ai.bat {stepURI:v2}"
resolves to:
cmd /c C:\ai\ai.bat https://yourServerNameOrIP/api/v2/steps/CAM-CSB-100212-24-197
{artifactsURI}
Supplies the URI of the artifacts root to the triggered automation script.
Include the version parameter (ie, {artifactsURI:version}) to specify the version of the REST API to be accessed.
cmd /c "C:\ai\ai.bat {artifactsURI:v2}"
resolves to:
cmd /c C:\ai\ai.bat https://yourServerNameOrIP/api/v2/artifacts
{processURI}
The {stepURI} token is preferred. {processURI} is deprecated and less accurate, and may be removed in future versions.
Supplies the URI of the step to the triggered automation script.
If using the deprecated {processURI} token, the addition of the version and scheme parameters is recommended ({processURI:version:scheme}).
Adding the version and scheme reduces the chance of a server and REST version upgrade unknowingly affecting your scripts.
cmd /c "C:\ai\ai.bat {processURI:v2:http}"
resolves to:
cmd /c C:\ai\ai.bat https://yourServerNameOrIP/api/v2/processes/CAM-CSB-100212-24-197
{processLuid}
Supplies the LIMS ID of the step that triggered the automation script.
cmd /c "C:\ai\ai.bat {processLuid}"
resolves to:
cmd /c C:\ai\ai.bat CAM-CSB-100212-24-169
{udf:nameOfUDF}
Supplies the current value stored within a UDF configured as nameOfUDF.
cmd /c "C:\ai\ai.bat {udf:injection_volume}"
resolves to:
cmd /c C:\ai\ai.bat 12.4
{parentProcessUdf:nameOfUDF}
Supplies the current value stored within a UDF configured as nameOfUDF of the immediate parent step to the step that triggered the automation script.
The parent step must provide the inputs (derived samples) to the step.
In cases where there are multiple parents (ie, the inputs are derived from various steps) only the first of these parents is returned.
cmd /c "C:\ai\ai.bat {parentProcessUdf:RunID}"
resolves to:
cmd /c C:\ai\ai.bat RUN_BW1765
{parentProcessUdfN:nameOfUDF}
Supplies the current value stored within a UDF configured as nameOfUDF of the immediate parent step to the step that triggered the automation script. The parent step must provide the inputs (derived samples) to the step.
In cases where there are multiple parents (ie, the inputs are derived from various steps) all parent step IDs are treated as an array-based list.
N specifies the array list index position 0..n of the desired step.
{parentProcessUdf0:nameOfUDF} is equivalent to {parentProcessUdf:nameOfUDF}.
cmd /c "C:\ai\ai.bat {parentProcessUdf1:RunID}"
resolves to:
cmd /c C:\ai\ai.bat RUN_HJ1865
{outputFileLuids}
Supplies the LIMS IDs of all step output file placeholders.
cmd /c "C:\ai\ai.bat {outputFileLuids}"
resolves to:
cmd /c C:\ai\ai.bat "BAR103A1CO248" "BAR103A1CO249" "BAR103A1CO250" "BAR103A1CO251" "BAR103A3CO158" "BAR103A3CO159" "BAR103A3CO160" "BAR103A3CO161"
{outputFileLuidN}
Supplies the LIMS ID for the specified step output file placeholder. All output file placeholders applying to inputs of the step are treated as an array-based list, where N specifies the array list index position [0..n] of the desired file.
Assuming the same eight output files as in the previous example:
cmd /c "C:\ai\ai.bat {outputFileLuid0}"
resolves to:
cmd /c C:\ai\ai.bat BAR103A1CO248
cmd /c "C:\ai\ai.bat {outputFileLuid1}"
resolves to:
cmd /c C:\ai\ai.bat BAR103A1CO249
cmd /c "C:\ai\ai.bat {outputFileLuid7}"
resolves to:
cmd /c C:\ai\ai.bat BAR103A3CO161
{compoundOutputFileLuids}
Supplies the LIMS IDs for all shared step output file placeholders.
cmd /c "C:\ai\ai.bat {compoundOutputFileLuids}"
resolves to:
cmd /c C:\ai\ai.bat "92-527" "92-528" "100-541" "100-544"
{compoundOutputFileLuidN}
Supplies the LIMS ID for the specified step output file placeholder that applies to an individual input. All output file placeholders applying to individual inputs for the step are treated as an array-based list, where N specifies the array list index position [0..n] of the desired file.
Assuming the same four output files as in the previous example:
cmd /c "C:\ai\ai.bat {compoundOutputFileLuid0}"
resolves to:
cmd /c C:\ai\ai.bat 92-527
cmd /c "C:\ai\ai.bat {compoundOutputFileLuid1}"
resolves to:
cmd /c C:\ai\ai.bat 92-528
cmd /c "C:\ai\ai.bat {compoundOutputFileLuid2}"
resolves to:
cmd /c C:\ai\ai.bat 100-541
cmd /c "C:\ai\ai.bat {compoundOutputFileLuid3}"
resolves to:
cmd /c C:\ai\ai.bat 100-544
Deprecated {parentProcessLuid*} tokens
The following tokens have been deprecated:
• {parentProcessLuid}
• {parentProcessLuids}
• {parentProcessLuidN}
These tokens were only applicable to steps that take file inputs. File inputs are no longer supported in Clarity LIMS.
{username}
Supplies the username of the current user running the step to the triggered automation script
cmd /c "C:\ai\ai.bat {username}"
resolves to:
cmd /c C:\ai\ai.bat adminuser
{password}
Supplies the password of the current user running the step to the triggered automation script.
cmd /c "C:\ai\ai.bat {password}"
resolves to:
cmd /c C:\ai\ai.bat 3BlindMice
In log files, the password supplied on the command line is replaced with a series of *** characters.
{baseURI}
Supplies the base API URI to the triggered automation script.
cmd /c "C:\ai\ai.bat {baseURI}"
resolves to:
cmd /c C:\ai\ai.bat https://lims.lan.29/api/
cmd /c "C:\ai\ai.bat {baseURI}v2"
resolves to:
cmd /c C:\ai\ai.bat https://lims.lan.29/api/v2
NOTE: To access the endpoints, make sure that the {baseURI} is appended with v2. You can include this in the token in the command line, as shown above, or in the script itself.
{projectLuid}
Supplies the LIMS ID of the project to the triggered automation script.
cmd /c "C:\ai\ai.bat {C:\ai\ai.bat {projectLuid}"
resolves to:
cmd /c C:\ai\ai.bat ADM123
The Clarity LIMS Cookbook uses example scripts to help you learn how to work with REST and EPP automation scripts. Cookbook recipes are small, specific how-to articles designed to help you understand REST and automation script concepts. Each recipe includes the following:
Explanations about a concept and how a particular programming interface is used in a script.
A snippet of script code to demonstrate the concept.
The API documentation includes the terms External Program Integration Plug-in (EPP) and EPP node.
As of Clarity LIMS v5.0, these terms are deprecated.
EPP has been replaced with automation.
EPP node is referred to as the Automation Worker or Automation Worker node. These components are used to trigger and run scripts, typically after lab activities are recorded in the LIMS.
The best way to get started is to download the example script and try it out. After you have seen how the script works, you can dissect it and use the pieces to create your own script.
This topic explains how to:
Detect when files have been uploaded.
Extract the key information that might comprise a notification.
The Files API Resource
The key resource to investigate is the files resource, which provides a listing of files within the system.
On a test system, accessing the files resource as follows:
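(The hostname is illustrative.)

```
GET https://yourServerNameOrIP/api/v2/files
```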
produces the following output:
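An illustrative listing; the LIMS IDs are placeholders, and the namespace prefix can differ slightly by API version:

```xml
<file:files xmlns:file="http://genologics.com/ri/file">
  <file limsid="40-101" uri="https://yourServerNameOrIP/api/v2/files/40-101"/>
  <file limsid="40-102" uri="https://yourServerNameOrIP/api/v2/files/40-102"/>
  <!-- ... one element per file in the system ... -->
</file:files>
```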
Although not particularly useful in itself, the files URI becomes more interesting when we filter it to only include files uploaded after a specified date-time, and also only those files that have a published status of 'true'.
For example, the following URI:
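A request of the following form, assuming your API version supports filtering the files resource by publication status and modification date (parameter names and values are illustrative):

```
GET https://yourServerNameOrIP/api/v2/files?published=true&last-modified=2014-06-06T12:00:00Z
```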
produces this output on a test system:
This outcome is much more manageable. Files uploaded via the Collaborations Interface inherently have a published status of 'true', so we use this status to exclude regular files uploaded to the LIMS via other methods and interfaces.
By following the URIs to retrieve the full XML representations of these files, the output for each file is similar to the following:
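An illustrative file representation; the LIMS ID, URIs, and locations are placeholders:

```xml
<file:file xmlns:file="http://genologics.com/ri/file" limsid="40-101"
           uri="https://yourServerNameOrIP/api/v2/files/40-101">
  <attached-to>https://yourServerNameOrIP/api/v2/samples/EXA101A1</attached-to>
  <content-location>sftp://yourServerNameOrIP/opt/gls/clarity/users/glsftp/report.pdf</content-location>
  <original-location>report.pdf</original-location>
  <is-published>true</is-published>
</file:file>
```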
Retrieve the associated project/sample, and extract the names and/or IDs to embed into the notification, by following the URI in the 'attached-to' elements.
In this case, the associated project and sample representations are returned.
A script must be run periodically (hourly/daily) that queries the files resource for files that have a published status of true and were last modified in the period of interest.
After this list of files is retrieved, the following pseudocode can be applied:
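A sketch of the logic, based on the steps described above (message wording is illustrative):

```
for each file returned by the query:
    follow the file's 'attached-to' URI to retrieve the parent entity
    if the entity is a sample, also retrieve its project
    extract the project and/or sample names and IDs
    compose and send a notification, eg:
        "File <original-location> was uploaded for sample X in project Y"
```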
An example derived from the above XML could lead to the following notifications:
The QC flag parameter qc-flag can be set on an input or output analyte (derived sample) or on an individual result file (measurement) with a few lines of Groovy code.
In the following example, the qc-flag value of the analyte artifact is set based on the value of the bp_size variable when compared to the threshold1 and threshold2 variables.
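A minimal sketch, assuming an artifact node already retrieved via the utility class; the threshold values are illustrative:

```groovy
// Set the QC flag from the measured fragment size.
def threshold1 = 100
def threshold2 = 500
def qcFlag = (bp_size >= threshold1 && bp_size <= threshold2) ? 'PASSED' : 'FAILED'
artifact.'qc-flag'[0].setValue(qcFlag)
GLSRestApiUtils.httpPUT(artifact, artifactURI, username, password)
```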
The following code determines whether a qc-flag value was previously set, so that a flag is only set if one does not already exist.
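A sketch; UNKNOWN is assumed to be the default flag value:

```groovy
// Only set a flag if one was not previously recorded.
def flagNode = artifact.'qc-flag'[0]
if (flagNode == null || flagNode.text() == 'UNKNOWN') {
    if (flagNode == null) {
        flagNode = artifact.appendNode('qc-flag')
    }
    flagNode.setValue(qcFlag)
}
```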
This article provides hints and tips to help you get the most out of the Cookbook recipes included in this section.
When reading a recipe, look for file attachments. Almost all examples have an attached Groovy script to download.
To use the scripts with a non-production server, edit the script to include your server network address and credentials.
For illustration purposes, most scripts use populated information. You must add your own sample, process (eg, a master step in Clarity LIMS v5 and later), and other data. The non-production server has a directory set up for this purpose at
Using Full Production Scripts
When using full production scripts, the following considerations must be taken:
Cookbook scripts are written to explain concepts. They are not deeply engineered code written in a defensive programming style. Always think through the expected and unexpected input of your scripts when incorporating concepts or code from Cookbook recipe examples.
Full production servers can require different configurations for scripting languages other than Groovy, and for the EPP/automation worker node. For example, your script directory can be accessible by the user account running the EPP/automation worker node for User Interface (UI) triggers.
Discuss the software deployment plans with your system administrator to coordinate between non-production and production servers. For more information on using production scripts, see REST General Concepts and Automation.
Each recipe was written with a specific API version. For information on how to check the version of the API on your system, see Requesting API Version Information.
Apache Groovy is required for most Cookbook examples. It is open source and is available under an Apache license from groovy-lang.org/download.html. It is installed on non-production servers, but you can also install it to your desktop. The Cookbook examples were developed with Groovy v1.7.
Python is required for some Cookbook examples. It is available from www.python.org/download. The Cookbook examples were developed with Python v2.7.
The automation worker node executing the command uses the first instance of Groovy it finds in the executable search path for the limited shell. This is the $PATH variable.
If you have multiple versions of Groovy (or multiple users using different versions) and experience problems with your command-line calls, declare the full path to Groovy/Java in your command.
To see your executable search path, and other environment variables available to you, run the following command:
Compare this command to the full logon shell, which is
For more information on command-line actions, see Supported Command Line Interpreters.
For details on the programming interface methods and data elements available, refer to the following documentation:
Browsing for, and adjusting, resources in Firefox, Chrome, or other browsers is great for getting started or for troubleshooting.
The following plug-ins are available with Firefox:
Text Link—Makes any URI in the XML a hyperlink.
Linkificator—Converts text links into selectable links.
RESTClient—Provides a simple interface to call HTTP methods on REST resources. It is useful for troubleshooting, checking error codes, and for getting comfortable with GET, PUT, and POST requests.
The following plug-ins are available with Chrome:
Advanced REST Client—Provides similar functionality to Poster by Firefox.
XML Tree—Displays XML data in a user-friendly way.
You can configure the automation trigger and use automation to invoke any external program that runs from a command line. Refer to the following for details:
EPP automation/support is compatible with API v2 r21 and later.
The API documentation includes the terms External Program Integration Plug-in (EPP) and EPP node.
As of Clarity LIMS v5.0, these terms are deprecated.
EPP has been replaced with automation.
EPP node is referred to as the Automation Worker or Automation Worker node. These components are used to trigger and run scripts, typically after lab activities are recorded in the LIMS.
Before POSTing to the files resource, make sure that the file exists in the location referenced by the content-location element. If the file does not exist in this location, the POST fails.
Server-side configuration allows multiple filestores to be associated with entities (samples, projects, processes/steps, and so on) in BaseSpace Clarity LIMS.
This feature allows linking to large data files (results, images, search outputs, and so on) on a different server, eliminating the need to move large files onto the Clarity LIMS filestore.
For example, sequencing instruments typically produce large result files. Attaching these files to the Sequencing step in Clarity LIMS has the following drawbacks:
It involves transferring the files to the Clarity LIMS filestore. The larger the file, the slower the transfer.
It requires a large amount of space as runs accumulate.
An alternative solution is to set up a remote filestore to be used as the results directory from which Clarity LIMS accesses the files directly.
To do this setup, three steps are required:
Set up HTTP, HTTPS, FTP, or SFTP access to the files and folders you wish to share.
Configure the Clarity LIMS server to recognize the URI of a file on the remote filestore.
POST information to Clarity LIMS, via the REST API, to reference the file from a Clarity LIMS entity (project, sample, process/step, result file, and so on).
BaseSpace Clarity LIMS can operate with many different forms of file servers – HTTP, HTTPS, FTP, and SFTP access are all supported.
It is your responsibility to set up this access. For HTTP, you may be interested in httpd or HFS for HTTP file serving.
To track a new remote filestore, Clarity LIMS requires four database properties: directory, host, port, and scheme.
The four properties share a base name, but have different suffixes attached (dir, host, port, scheme). These suffixes are summarized in the following table.
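| Suffix | Description |
| --- | --- |
| .dir | The directory on the remote filestore that is exposed |
| .host | The host name of the remote server |
| .port | The port used to access the remote server |
| .scheme | The access scheme (http, https, ftp, or sftp) |
| .domain | (Optional) The domain to use when accessing files |
| .user | (Optional) The username to use when accessing files |
| .password | (Optional) The password to use when accessing files |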
The base name can be anything. Clarity LIMS finds any property whose name ends in .scheme and uses its base name to find the other properties.
If necessary, add the last three properties listed in the table (with the .domain, .user, and .password suffixes) to specify a domain, username, and password to be used when accessing files.
Clarity LIMS v5 and later—For the property changes to take effect, Tomcat must be restarted.
Use the omxprops-ConfigTool.jar to create, update, and retrieve values of the database properties. This tool is found at the following location: /opt/gls/clarity/tools/propertytool
To create a property, use the following examples:
NOTE: These properties must not be global properties. Do not use the -g flag here.
To get the value of an existing property:
To update the value of an existing property:
To encrypt a password:
NOTE: To set a property to the encrypted result, set the value as ENC(<encrypted result>).
The following example maps a remote HTTP URI: http://YourHTTPHost:80/limsdata/LegacyFile.RAW
In this case, the base name for the properties is http-lims-files.
Steps
As the glsjboss user, access the omxprops property tool in /opt/gls/clarity/tools/propertytool.
Add the following dir, host, port, and scheme properties to the server from the command line:
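Using the values from the example URI above, the properties to create are as follows (the property tool invocation syntax is omitted; see the omxprops examples above):

```
http-lims-files.scheme = http
http-lims-files.host   = YourHTTPHost
http-lims-files.port   = 80
http-lims-files.dir    = /limsdata
```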
In the example above, the http-lims-files.dir property value is /limsdata. Any file in http://<YourHTTPHost>/limsdata/ is available to be referenced by BaseSpace Clarity LIMS.
For all files on the web server to be available, set the dir property value to / (for example, http-lims-files.dir = /).
After the filestore properties are added to Clarity LIMS (and JBoss/Tomcat has been restarted, as applicable), you can attach the files to Clarity LIMS.
To attach the files to Clarity LIMS:
POST to http://hostname/api/v2/files, with the content-location tag pointing to the remote filestore.
An example XML POST is provided, using the filestore created in the previous example:
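A sketch of the POST body; the attached-to entity (a project here) and its URI are illustrative:

```xml
<file:file xmlns:file="http://genologics.com/ri/file">
  <attached-to>https://yourServerNameOrIP/api/v2/projects/EXA101</attached-to>
  <content-location>http://YourHTTPHost:80/limsdata/LegacyFile.RAW</content-location>
  <original-location>LegacyFile.RAW</original-location>
</file:file>
```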
Results
The file is now downloadable directly from Clarity LIMS.
Any entity that can have a file attached to it may be referenced in the attached-to element.
For more information on working with files, see Work with Files.
This page is maintained for posterity, but customers are encouraged to visit the GitHub repository for all subsequent updates to the library (including changelogs). Unless otherwise specified, changes are only made in the Python version of the library.
Dec. 19, 2017:
glsapiutil v3 ALPHA (bleeding-edge library) released on GitHub. GitHub has the most current library.
Links to library removed from this page.
Dec. 15, 2016:
reportScriptStatus() function had a bug that caused it to not work when a <message> node was unavailable. This has been fixed.
deleteObject() functions now available for both v1 and v2 of the library.
getBaseURI() should now return a trailing slash at the end of the URI string.
getFiles() function added to batch retrieve files.
NOTE: The Python glsapiutil.py and glsapiutil3.py classes are now available on GitHub. GitHub has the most current libraries. glsapiutil3.py works with both Python v2 and v3.
The GLSRestApiUtils utility class provides a consistent way to perform common REST operations, such as the REST HTTP methods and common XML string manipulation. It is a utility class, written in both Groovy and Python, for the API Cookbook examples. The class is specific to the Cookbook examples and is not required for using the API from Groovy or Python, as there are many other ways to manipulate HTTP and XML in these languages. However, it is required if you want to run the Cookbook examples as written. It is also not part of REST or EPP/automation.
Almost all Cookbook example files use the HTTP methods from the GLSRestApiUtils class.
The HTTP method calls in Groovy resemble the following example:
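A representative call; the GET and POST variants follow the same pattern:

```groovy
def returnNode = GLSRestApiUtils.httpPUT(inputNode, uri, username, password)
```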
In this example, the returnNode and inputNode are Groovy nodes containing XML. The XML in the returnNode contains the XML available from the server after a successful method call. If the method call was unsuccessful, the XML contains error information. The following is an example of the XML manipulation functions in the utility:
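A sketch, with function names assumed from the attached utility file:

```groovy
// Convert between XML strings and Groovy nodes.
def node      = GLSRestApiUtils.xmlStringToNode(xmlString)
def xmlString = GLSRestApiUtils.nodeToXmlString(node)
```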
As you can see from these examples, the utility class is easy to include in your scripting. The code is contained in the GLSRestApiUtils files attached to this page.
To deploy a Groovy script that uses the utility class, you must include the directory containing GLSRestApiUtils.groovy in the Groovy class path.
Groovy provides several ways to package and distribute source files, including the following methods:
Call Groovy with the -classpath (or -cp) parameter.
Add to the CLASSPATH environment variable.
Create a ~/.groovy/lib directory for jar files for common libraries.
If you would like to experiment with the Cookbook examples, you can also copy the file into the same directory as the example script.
Library functions
The HTTP method calls for the Python version of the library resemble the following:
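A sketch using the function names from the GitHub version of the library; the hostname and credentials are placeholders:

```python
import glsapiutil

api = glsapiutil.glsapiutil()
api.setHostname(HOSTNAME)      # eg 'https://yourServerNameOrIP'
api.setVersion('v2')
api.setup(USERNAME, PASSWORD)

xml = api.getResourceByURI(uri)    # GET  - returns XML text
xml = api.updateObject(xml, uri)   # PUT  - takes and returns XML text
xml = api.createObject(xml, uri)   # POST - takes and returns XML text
```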
Unlike with the Groovy library, the rest functions in the Python library require XML (text) as input (not DOM nodes). The return values of the GET, PUT, and POST functions are also XML text.
If a script must work with a running process or step, it is normal to use either the {processURI:v2} or the {stepURI:v2} tokens. The following example has the {stepURI:v2} token:
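For example (the script path and option letters are illustrative):

```bash
bash -c "python /opt/gls/clarity/customextensions/myscript.py \
  -u {username} -p {password} -s {stepURI:v2}"
```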
In Clarity LIMS v4 and above, these tokens sometimes resolve to https://localhost:9080/api/v2/... instead of the expected HOSTNAME. Setting up the API object with a hostname other than https://localhost:9080 can cause Access Denied errors. To avoid this issue, alter the API authentication code slightly as follows.
The changes are highlighted in red. This code takes the resolved {stepURI:v2} token (assumed to be stored in the args object) and resets the HOSTNAME variable to the resolved value (eg, https://localhost:9080) before authenticating.
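A sketch of the modified setup, assuming the parsed command-line options are held in args:

```python
import glsapiutil

# Derive HOSTNAME from the resolved {stepURI:v2} token before authenticating.
stepURI  = args.stepURI                        # eg https://localhost:9080/api/v2/steps/24-1234
HOSTNAME = '/'.join(stepURI.split('/')[0:3])   # eg https://localhost:9080

api = glsapiutil.glsapiutil()
api.setHostname(HOSTNAME)
api.setVersion('v2')
api.setup(args.username, args.password)
```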
These changes are fully backward-compatible with Clarity LIMS v4 or earlier. The EPP/automation URI tokens resolve to the expected hostname, and the setupGlobalsFromURI() function still parses it correctly.
NOTE: On GitHub, in addition to the libraries, there is a basic_complete_recipe.py script containing the skeleton code needed to get started with the Python API. This script also includes the modifications required to work with Clarity LIMS v4 and later. The legacy Groovy library can still be obtained from the attachment.
Attachments
GLSRestApiUtils.groovy:
At the completion of a process (using API v2 r21 or later), EPP can invoke any external program that runs from a command line. In this example, a process with a reference to a declared EPP program is configured and executed entirely via the API.
EPP automation/support is compatible with API v2 r21 and later.
The API documentation includes the terms External Program Integration Plug-in (EPP) and EPP node.
As of Clarity LIMS v5.0, these terms are deprecated.
EPP has been replaced with automation.
EPP node is referred to as the Automation Worker or Automation Worker node. These components are used to trigger and run scripts, typically after lab activities are recorded in the LIMS.
You have defined a process that has:
An input of type analyte.
A single output per input.
A single shared result file.
The process type is associated with an external program that has the following requirements:
At least one process-parameter defined - named TestProcessParam.
A parameter string of:
bash -c "echo HelloWorld > {compoundOutputFileLuid0}.txt"
Samples have been added to the LIMS.
To run a process on a sample, you must first identify the sample to be used as the input to the process.
For this example, run the process on the first highlighted sample.
After you have identified the sample, you can use its LIMS ID as a parameter for the script. The artifact URI is then used as the input when constructing the XML that is POSTed to execute the process.
The following code block outlines this action and obtains the URI of the container for the process execution POST.
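A sketch under stated assumptions (GLSRestApiUtils on the class path; the sample LIMS ID supplied on the command line; the output container created beforehand):

```groovy
// Look up the submitted sample and its base artifact (the process input).
def sample      = GLSRestApiUtils.httpGET("${hostname}/api/v2/samples/${sampleLIMSID}", username, password)
def artifactURI = sample.artifact[0].attribute('uri').toString().tokenize('?')[0]  // drop ?state=

// The container that will hold the analyte outputs (illustrative LIMS ID).
def containerURI = "${hostname}/api/v2/containers/27-505"
```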
NOTE: As shown in other examples, you can use StreamingMarkupBuilder to construct the XML needed for the POST.
You now have all the pieces of data to construct the XML for the process execution. The following is an example of what this XML looks like.
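A sketch assembled from the required elements listed below; the process type, URIs, and LIMS IDs are illustrative:

```xml
<prx:process xmlns:prx="http://genologics.com/ri/processexecution">
  <type>MyProcessType</type>
  <technician uri="https://yourServerNameOrIP/api/v2/researchers/1"/>
  <input-output-map>
    <input uri="https://yourServerNameOrIP/api/v2/artifacts/EXA101A1PA1"/>
    <output type="Analyte">
      <location>
        <container uri="https://yourServerNameOrIP/api/v2/containers/27-505"/>
        <value>A:1</value>
      </location>
    </output>
  </input-output-map>
  <input-output-map>
    <input uri="https://yourServerNameOrIP/api/v2/artifacts/EXA101A1PA1"/>
    <output type="ResultFile"/>
  </input-output-map>
  <process-parameter name="TestProcessParam"/>
</prx:process>
```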
Executing a process uses the processexecution (prx) namespace. The following elements are required for a successful POST:
type - the name of the process being run
technician uri - the URI for the technician that will be listed as running the process
input-output-map - one input-output-map element for each pair of inputs and outputs
input uri - the URI for the input artifact
output type - the type of artifact of the output
If the outputs of the process are analytes, then the following elements are also required:
container uri - the URI for the container the output will be placed in
value - the well placement for the output
To use the configured EPP process, the process-parameter element is required. This element is the name of the configured EPP that is executed when this process is posted.
The following elements must exist in the system before the process can be executed (the EPP parameter must match the processParamName variable):
Process type
Technician
Input artifact
Container
EPP parameter
With analyte outputs, if there are no containers with empty wells in the system, you must create one before running the process.
The XML constructed must match the configuration of the process type. For example, if the process is configured to have both analytes and a shared result file as outputs, you must have the following:
An input-output-map for each pair of analyte inputs and outputs.
An additional input-output-map for the shared result file.
The process-parameter name in the process execution XML must match one of the EPP parameter names declared for the process type. This requirement is true for any EPP parameters.
If the POST is successful, the process XML is returned.
In the following example, there are two <input-output-map> elements. The second instance has the output-generation-type of PerAllInputs. This element indicates that the result file is shared and only one is produced, regardless of the number of inputs.
If the POST is not successful, the XML returned contains the error that occurred when the POST completed. The following example shows this error:
Attachments
ExecuteProcessWithEPP.groovy:
autocomplete-process.py:
Automations (formerly referred to as EPP triggers or automation actions) allow lab scientists to invoke scripts as part of their workflow. These scripts must successfully complete for the lab scientist to proceed to the next step of the workflow.
EPP automation/support is compatible with API v2 r21 and later.
The API documentation includes the terms External Program Integration Plug-in (EPP) and EPP node.
As of Clarity LIMS v5.0, these terms are deprecated.
EPP has been replaced with automation.
EPP node is referred to as the Automation Worker or Automation Worker node. These components are used to trigger and run scripts, typically after lab activities are recorded in the LIMS.
Automations have various uses, including the following:
Workflow enforcement—Makes sure that samples only enter valid protocol steps.
Business logic enforcement—Validates that samples are approved by accounting before work is done on them. This automation can also make sure that selected samples are worked on together.
Automatic file generation—Automates the creation of driver files, sample sheets, or other files specific to your protocol and instrumentation.
Notification—Notifies external systems of lab progress. For example, you can notify Accounting of completed projects so that they can then bill for services rendered.
You can enable automations on master steps in two configuration areas of Clarity LIMS:
On the Automations tab, when adding/configuring an automation. See the Adding and Configuring Automations article in the Automations section of the Clarity LIMS documentation.
On the Lab Work tab, on the master step configuration form. See the Adding & Configuring Master Steps and Steps article in the Steps and Master Steps section of the Clarity LIMS documentation.
After it is enabled on a master step, the automation becomes available for use on all steps derived from that master step.
You can configure the automation trigger on the master step, or on the steps derived from that master step.
If more than one script is triggered by a single user action, the scripts are executed in sequence. Execution continues until all scripts complete, or until one of them fails.
An example scenario would be a step that is configured to execute the following:
One script upon exit of the Placement screen.
A second script upon entry of the Record Details screen.
In this scenario, when the lab scientist advances their protocol step from the Placement screen to the Record Details screen, the scripts are executed in sequence.
The parameter string/automation name configured on the master step is displayed in a progress message. You can use this feature by giving your parameter strings/automations meaningful names that provide you with context about what the script is doing. The following is an example of a progress message.
![In_Progress.png](https://genologics.zendesk.com/attachments/token/yawon1xdfirt9mm/?name=In+Progress.png)
You cannot proceed until the script completes successfully.
You can request to cancel a script that is not responsive. While canceling abandons the monitoring of script execution, it does not stop the execution of the script.
After canceling a script, follow up with the Clarity LIMS administrator to determine if the AI node/automation worker must be restarted.
The scientific programmers in your facility can provide you with a message upon successful execution of a script. There are two possible non-fatal messages: OK and WARNING. These messages can be set using the step program status REST API endpoint.
Message boxes display the script name, followed by a message that is set by the script using the step program status REST API endpoint. Line breaks are permitted in the custom message. The following is an example of a success message:
After you select OK, you are permitted to proceed in the workflow.
When a script fails, a message box displays. There are two ways to produce fatal messages:
By using the step program status REST API endpoint (informing FAILURE as the status)
By generating output to the console and returning a non-zero exit code.
For example, when beginning a step, if the script does not allow the selected samples to be worked on together, the samples are returned to the Ice Bucket after you acknowledge the error message. In this case, the step is prevented from being tracked. The following is an example of a failure message:
If you attempt to advance a step from the Pooling screen, but an error is detected, the error state prevents you from continuing. The following is an example of this type of message:
After you select OK, you are prevented from proceeding in the workflow. Instead, you must return to the Pooling screen and address the problem before proceeding.
In high throughput labs, samples are worked on in batches and some work is executed by a robot. Sometimes, a set of plates must be rearrayed to one larger plate before the robot can begin the lab step.
This example accomplishes this using two scripts. One script is configured on a derived sample automation, while the second script is included in a command line configured on a step automation.
Before you follow the example, make sure that you have the following items:
A project containing samples assigned to a workflow in Clarity LIMS.
The workflow name.
The samples are assigned to the same workflow stage.
This example demonstrates the following scripts:
AssignToRearrayWf.groovy—Executed as a derived sample automation, this script assigns selected samples to the rearray step.
AssignToLastRemoved.groovy—Executed after the rearray step, this script assigns the samples to the stage to which they were originally assigned. The script is included in a command line configured on a step automation.
In Clarity LIMS, under Configuration, select the Automation tab.
Select the Derived Sample Automation tab.
Select New Automation and create an automation that prompts the user for the workflow name to be used, as shown in the example below.
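A sketch of the command line; the option letters other than -w are illustrative:

```bash
bash -c "{groovy_bin_location}/groovy {script_location}/AssignToRearrayWf.groovy \
  -u {username} -p {password} -a {derivedSampleLuids} -w {userinput:workflow_name}"
```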
In the example, note the following:
The {groovy_bin_location} and {script_location} parameters must be customized to reflect the locations on your computer.
The -w option allows user input to be passed to the script as a command-line variable.
The AssignToRearrayWf script receives a list of artifact (sample) LIMS IDs on the command line. To begin, the script builds a list of artifact nodes.
The following code example builds a list of artifact URIs using the artifact LIMS ID list and the getArtifactNodes function. The resulting artifact URI list can then be used for a batchGET call to return the artifact nodes.
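A sketch of the function; the batchGET helper and its signature are assumed from the description above:

```groovy
def getArtifactNodes(artifactLimsIds) {
    def uris = artifactLimsIds.collect { "${hostname}/api/v2/artifacts/${it}" }
    return GLSRestApiUtils.batchGET(uris, username, password)   // batch retrieval
}
```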
In this example, you can assume that the workflow name is known by the user and is passed to the script by user input when the automation is initiated.
The workflow can then be queried for using the passed workflow name. The workflow name is first encoded, and from this, you can retrieve the workflow URI.
For the samples to be placed in the same container, they must all belong to the same workflow and be currently queued to the same stage in that workflow.
Using the workflow name passed in by the user, do the following:
Search the workflow stage list of the first artifact and store the URI of the most recent stage that is part of the workflow, if it is queued. Otherwise, the script exits with an error message.
After storing the workflow stage URI of the first artifact, use the checkMatch function check against the remaining artifacts in the list to verify they are all currently queued to the same stage.
If all artifacts are queued for the stage, they are removed from the queue of the stage stored in lastWfStageURI.
In this example, all the artifacts are unassigned from the previous workflow stage returned and assigned to the rearray stage using the queuePlacementStep function. The previous methods have verified that the artifacts in the list can be rearrayed together.
The returned XML node is then posted using httpPOST.
In Clarity LIMS, under Configuration, select the Lab Work tab.
Create a master step of Standard step type.
From Configuration, select the Automation tab.
Select the Step Automation tab.
Create an automation for the AssignToLastRemoved.groovy script.
The {groovy_bin_location} and {script_location} parameters must be customized to reflect the locations on your computer.
Enable the automation on the master step you created in step 2.
Configure a new protocol and step as follows.
On the Lab Work tab, create a non-QC protocol.
In the Protocols list, select the new protocol and then add a new step to it. Base the new step on the master step you created in step 2.
On the Step Settings form, in the Automation section, you see the step automation you configured. Configure the automation triggers as follows.
Trigger Location—Step
Trigger Style—Automatic upon exit
On the Placement milestone, Add 96 well plate and 384 well plate as the permitted destination container types for the step.
Remove the default Tube container type.
Save the step.
Configure a new workflow as follows:
On the Lab Work tab, create a workflow.
Add the protocol you created to the workflow.
The first step of AssignToLastRemovedStage script is the same as for the AssignToRearrayWf script: return the artifact node list.
However, in this script, you are not directly given the artifact LIMS IDs. Instead, because you receive the step URI from the process parameter command line, you can collect the artifact URIs from the inputs of the step details input-output map using the getArtifactNodes function.
An example step details URI might be {hostname}/api/v2/steps/{stepLIMSID}/details.
Each artifact in the list was removed from this stage before going through the rearray step.
With this in mind, and because the Clarity LIMS API stores artifact history by time (including stage history), the stage to which you now want to assign the samples is the second-to-last stage in the workflow-stage list.
The following method finds the stage from which the artifacts were removed using the getLastRemoved function:
You can then check to make sure all artifacts originated in this stage. This helps you avoid the scenario where the AssignToRearrayStage.groovy script was run on two groups of artifacts queried while in different workflow stages.
Function: assignStage
This returned stage URI is then used to build the assignment XML to assign all the samples back to this stage with the assignStage function.
After posting this XML node, the samples are assigned back to the stage in which they began.
In the Projects Dashboard, select the samples to be rearrayed and run the 'Assign to Rearray' automation.
On automation trigger, the {userinput} phrase will invoke a dialog that prompts for the full name of the workflow.
The samples assigned by the Assign to Rearray automation are available to assign to a new container.
Add the samples to the Ice Bucket and begin work.
The placement screen opens, allowing you to place the samples into the new container, in your desired placement pattern.
Proceed to the Record Details screen, then on to Next Steps. Do not perform any actions on these screens.
In the next step drop-down list, select Mark Protocol as Complete and select Apply.
Select Next. This initiates the 'Assign to last removed' trigger, which assigns the samples back to the stage from which they were removed.
AssignToRearrayWf.groovy:
AssignToLastRemoved.groovy:
| Parameter | Description |
| --- | --- |
| -u {user} | LIMS username. |
| -p {password} | LIMS password. |
| -l {stepURI} | The URI of the transient step API resource that invoked the script. |
| -s {status} | The status the script is reporting (OK, WARNING, or ERROR). |
| -m {message} | The descriptive message displayed to the user. |
When working with submitted samples, you can do the following:
You can rename samples in the system using the API (v2 r21 and later). The amount of information provided in the sample name is sometimes minimal. After the sample is in the system, you can add information to the name, for example, to help lab scientists understand what they must do with a sample, or where it is in processing.
As of Clarity LIMS v5, the term user-defined field (UDF) has been replaced with custom field in the user interface. However, the API resource is still called UDF.
There are two types of custom fields:
Master step fields—Configured on master steps. Master step fields only apply to the following:
The master step on which the fields are configured.
The steps derived from those master steps.
Global fields—Configured on entities (eg, submitted sample, derived sample, measurement, etc.). Global fields apply to the entire Clarity LIMS system.
Clarity LIMS displays detailed information for each sample, including its name, container, well, and date submitted.
In this example, the sample name is Colon-1. To help keep context as samples are processed, by default the submitted sample name is used for the downstream (derived) samples generated by a step in Clarity LIMS.
Before you rename a sample, you must first request the resource via a GET.
The XML representations of individual REST resources are self-contained entities. Always request the full XML representation before editing any portion of the XML. If you do not use the complete XML when you update the resource, you can inadvertently change data.
The following GET method returns the full XML structure for the sample:
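Using the utility class described elsewhere in this document:

```groovy
def sample = GLSRestApiUtils.httpGET(sampleURI, username, password)
```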
The variable sample now holds the complete XML structure returned from the sampleURI.
The following example shows the XML for the sample, with the name element on the second line. In this particular case, the Clarity LIMS configuration has expanded the sample with 18 custom fields that provide sample information.
Renaming the sample consists of the following:
The name change in the XML
The PUT call to update the sample resource
The name change is executed with the nameNode XML element node, which references the XML element containing the name of the sample.
The PUT method updates the individual sample resource using the complete XML representation, which includes the new name. Such complete updates provide a simple interaction between client and server.
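A sketch of both steps, assuming the sample node retrieved above; the new name is illustrative:

```groovy
def nameNode = sample.name[0]                 // the <name> element of the sample XML
nameNode.setValue('Colon-1-High-Priority')    // illustrative new name
def returnNode = GLSRestApiUtils.httpPUT(sample, sampleURI, username, password)
```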
The updated sample view displays the new name. You can also view the results in a web browser via the URI at
http://<YourIPaddress>/api/v2/samples/<SampleLIMSID>
RenamingSample.groovy:
You can use the API (v2 r21 and later) to automate the process of assigning samples to a workflow. This example shows how to create the required XML. The example also provides a brief introduction on how to use the route/artifacts endpoint, which is the endpoint used to perform the sample assignment.
The example takes two samples that exist in Clarity LIMS and assigns each of them to a different workflow.
Define the assignment endpoint URI using the following example. The assignment endpoint allows you to assign the artifacts to the desired workflow.
You can also retrieve the base artifact URIs of the samples using the following example:
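A sketch; the ?state= query is stripped so that the base artifact URI remains:

```groovy
def sample      = GLSRestApiUtils.httpGET("${hostname}/api/v2/samples/${sampleLimsid}", username, password)
def artifactURI = sample.artifact[0].attribute('uri').toString().tokenize('?')[0]
```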
Use the following example to gather the workflow URIs:
Next, construct the XML that is posted to perform the workflow assignment. You can build it with StreamingMarkupBuilder, following the steps below; an example of the resulting XML is shown after the list.
Assign the analyte (derived sample) artifact of the sample to a workflow as follows.
Create an assign tag with the URI of the destination workflow as an attribute.
Create an artifact tag inside the assign tag with the URI of the analyte as an attribute.
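An illustrative example of the resulting XML; the hostname, workflow IDs, and artifact LIMS IDs are placeholders:

```xml
<rt:routing xmlns:rt="http://genologics.com/ri/routing">
  <assign workflow-uri="https://yourServerNameOrIP/api/v2/configuration/workflows/101">
    <artifact uri="https://yourServerNameOrIP/api/v2/artifacts/EXA101A1PA1"/>
  </assign>
  <assign workflow-uri="https://yourServerNameOrIP/api/v2/configuration/workflows/102">
    <artifact uri="https://yourServerNameOrIP/api/v2/artifacts/EXA102A1PA1"/>
  </assign>
</rt:routing>
```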
After the assignment XML is defined, you can POST it to the API. This POST performs the sample assignment.
After the script has run, the samples display in the first step of the first protocol in the specified workflows.
AssigningArtifactsToWorkflows.groovy:
Before downloading your first script, do the following actions:
Familiarize yourself with the API Cookbook prerequisites and key concepts.
Use a non-production server for script development.
Familiarize yourself with the coding language.
Use the GLSRestApiUtils file to assist with recipe development.
The example script recipes really come to life when you change them and see what happens. Running the scripts often requires new custom fields and master steps to be added to the system. You need unrestricted access to development and test servers (licensed as non-production servers) with Groovy installed, and an AI node/automation worker installed, so that you can experiment freely.
For more information and recommendations for deploying and copying scripts in development, test, and production environments, refer to the deployment documentation.
The Cookbook Recipe Examples are written in Groovy. Many of our examples use the following Groovy concepts:
Closures: Groovy closures are essentially blocks of code that can be stored for later use.
The each method: The each method takes a closure as an argument. It then iterates through each element in a collection, performing the closure on the element, which is (by default) stored in the 'it' variable. For example:
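```groovy
def limsids = ['27-505', '27-511', '27-512']   // illustrative values
limsids.each { println it }                    // 'it' holds the current element
```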
Python
The Cookbook also provides a few examples written in Python, which uses the minidom module. The following script shows how the minidom module is used:
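```python
from xml.dom.minidom import parseString

dom = parseString(xml)   # xml: an XML string returned from the API
# Element and attribute names are illustrative.
for sample in dom.getElementsByTagName('sample'):
    print(sample.getAttribute('limsid'), sample.getAttribute('uri'))
```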
This same functionality can be obtained using any programming language capable of interacting with the Web API. For more information on the minidom module, refer to Python Minidom.
In addition to the Groovy file example attached to each Cookbook recipe page, most recipes require the glsapiutil.py file, which is available on our GitHub repository. The mature glsapiutil.py library is strictly for Python 2. A newer version, glsapiutil3.py, works with Python 3.
You can add samples to the system using API (v2 r21 and later). This example assumes that you have sample information in a file that is difficult to convert into a format suitable for importing into Clarity LIMS. The aim is to add the samples, and all associated data, into Clarity LIMS without having to translate the file manually. You can use the REST API to add the samples.
Follow the instructions provided in the following examples:
To add a sample in Clarity LIMS, you must assign it to a project and place it into a container. This example assumes that you are adding a new project and container for the samples being created.
As shown in the following example, you define a project by using StreamingMarkupBuilder, a built-in Groovy class designed to build XML structures. This code creates the XML that is used in a POST to the projects resource:
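A sketch; the project name and researcher URI are illustrative, and the required elements can vary with configuration:

```groovy
import groovy.xml.StreamingMarkupBuilder

def projectXML = new StreamingMarkupBuilder().bind {
    mkp.declareNamespace(prj: 'http://genologics.com/ri/project')
    'prj:project' {
        name('Cookbook Example Project')                     // illustrative
        researcher(uri: "${hostname}/api/v2/researchers/1")  // illustrative
    }
}
// Convert to a node and POST via the utility class.
```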
If the POST to projects is successful, the following XML is returned:
If the POST to containers is successful, the following XML is returned:
Now that you have the project and container, you can use StreamingMarkupBuilder to create the sample. The XML created to add the sample uses the URIs for the project and container that were created in the previous steps.
This POST to the samples resource creates a sample in Clarity LIMS, adding it to the project and container specified in the POST.
In Clarity LIMS Projects and Samples dashboard, open the project to find the new sample in its container.
PostSample.groovy:
When working with containers, you can do the following:
In the Clarity LIMS API (v2 r21 or later), the initial submitted sample is referred to as a sample (or root artifact). Any derived sample output from a process/step is referred to as an analyte, or artifact of type analyte. This example demonstrates the relationship between samples and analyte artifacts. You must have a sample in the system and one or more processes/steps run that output analyte (derived sample) artifacts.
As of Clarity LIMS v5, the term user-defined field (UDF) has been replaced with custom field in the user interface. However, the API resource is still called UDF.
There are two types of custom fields:
Master step fields—Configured on master steps. Master step fields only apply to the following:
The master step on which the fields are configured.
The steps derived from those master steps.
Global fields—Configured on entities (eg, submitted sample, derived sample, measurement, etc.). Global fields apply to the entire Clarity LIMS system.
The code example does the following when it is used:
Retrieves the URI of an arbitrary analyte artifact.
Retrieves the corresponding sample of the artifact
Retrieves the original root analyte artifact from the sample, as shown in the following example:
You can generate XML for an arbitrary analyte artifact. The analyte artifact is downstream and has a parent-process element (as shown in line 5). The sample artifact is an original artifact. Downstream artifacts relate to at least one sample, but can also relate to more than one sample, as with pooling or shared result files. The following is an example of XML generated for an analyte artifact:
You can also generate XML for a submitted sample. Every submitted sample has exactly one corresponding original root artifact. A sample representation does not link to downstream artifacts, but you can find them using query parameters in the artifacts list resource. The following is an example of XML generated for a submitted sample:
Lastly, you can generate XML for an original sample artifact called a root artifact. The following is an example of XML generated from an original sample artifact. In this case, both the downstream artifact and the original root artifact point to the same original sample (eg, LIMS ID EXA2241A1).
SampleAndAnalyteRelations.groovy:
The most important information about a sample is often recorded in custom fields in API (v2 r21 and later). These fields often contain information that is critical to the processing of the sample, such as species or sample type.
When samples come into the lab, you can provide lab scientists with information about priority or quality. You can provide this information by changing the value of specific sample custom fields.
This example shows how to change the value of a sample custom field called Priority after you have entered a submitted sample into the system.
As of Clarity LIMS v5, the term user-defined field (UDF) has been replaced with custom field in the user interface. However, the API resource is still called UDF.
There are two types of custom fields:
Master step fields—Configured on master steps. Master step fields only apply to the following:
The master step on which the fields are configured.
The steps derived from those master steps.
Global fields—Configured on entities (eg, submitted sample, derived sample, measurement, etc.). Global fields apply to the entire Clarity LIMS system.
In Clarity LIMS, you can display detailed information for a sample, including the following:
Name
Clarity LIMS ID
Custom fields
In the following figure, you can see that the sample name is DNA Sample-1 and the field named Priority has the value High.
In this example, change the value of the Priority custom field to Critical.
Before you can change the value of the field, you must first request the resource via a GET method.
To change a sample submitted in Clarity LIMS, use the individual sample resource. The XML returned from a GET on the individual sample resource contains the information about the sample.
The following GET method returns the full XML structure for the sample:
The sample variable now holds the complete XML structure returned from the sample GET request.
The XML representations of individual REST resources are self-contained entities. Always request the complete XML representation before editing any portion of the XML. If you do not use the complete XML when you update the resource, you can inadvertently change data.
The following shows XML returned for the sample, with the Priority field shown in red in the second to last line. In this example:
The Clarity LIMS configuration has added three fields to the expanded sample information.
The UDFs are named Sample Type, Phenotypic Information, and Priority.
When updating the Priority field, you need to do the following:
Change the value in the XML.
Use a PUT method to update the sample resource.
You can change the value for Priority to Critical by using the utility file's setUdfValue method, as shown below.
The subsequent PUT method updates the sample resource at the specified URI using the complete XML representation, which includes the new custom field value.
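A sketch of both steps, with the setUdfValue signature assumed from the attached utility file:

```groovy
sample = GLSRestApiUtils.setUdfValue(sample, 'Priority', 'Critical')
def returnNode = GLSRestApiUtils.httpPUT(sample, sampleURI, username, password)
```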
A successful PUT returns the new XML in the returnNode. The results can also be reviewed in a web browser at the <YourIPaddress>/api/v2/samples/<SampleLIMSID> URI.
An unsuccessful PUT returns the HTTP response code and message in the returnNode XML.
NOTE: The values for the other two fields, Sample Type and Phenotypic Information, did not change. These values did not change because they were included in the XML used in the PUT (eg, they were held in the sample variable as part of the complete XML structure).
If those custom fields had not been included in the XML, they would have been updated to have no value.
The following XML from our example shows the expected output:
In Clarity LIMS, the updated sample details now show the new Priority value.
UpdateSampleUDF.groovy:
Samples in the lab are always in a container (eg, a tube, plate, or flow cell). When a container holds more than one sample, it is often easier to track the container rather than the individual samples. These containers can be found in API (v2 r21 or later).
In Clarity LIMS, containers are identified by LIMS ID or by name. The best way to find a container in the API is by LIMS ID. However, the API also supports searching for containers by name using a filter.
LIMS ID—This is a unique ID. The container resource with LIMS ID 27-42 can be found at {hostname}/api/v2/containers/27-42.
Name—Container names can be unique, depending on how the server software was set up. In some labs, container names are reused to show when a container is recycled or when samples are submitted in containers.
The following example shows a container list filtered by name. Your system contains a series of containers named with a specific naming convention; the queried containers are named Smith553 and 001TGZ.
The request for a container with a specific name is structured in the same way as the request for all containers, but also includes a parameter to filter by name:
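For example (the hostname is illustrative):

```
GET https://yourServerNameOrIP/api/v2/containers?name=Smith553&name=001TGZ
```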
The name parameter is repeatable, and the results returned match any of the names queried:
The GET method returns the full XML structure for the list of containers matching the query. In this case, the method returns the XML structure for containers with the names Smith553 and 001TGZ.
The XML contains a list of container elements. The .each method goes through each container node in the list and prints the container LIMS ID.
The XML returned is placed in the variable containers:
If the system has no containers named Smith553 or 001TGZ, then containers.container is an empty list. The .each method does nothing, as expected.
When execution completes, the code returns the list of LIMS IDs associated with the container names Smith553 and 001TGZ. The names and LIMS IDs differ in this case (eg, 27-505 27-511).
GetContainerNameFilter.groovy:
In Clarity LIMS, derived sample automations are automations that users can run on derived samples directly from the Projects Dashboard.
The following example uses an automation to initiate a script that removes multiple derived samples from workflows. The example also describes the main functions included in the script, and shows how to configure the automation in Clarity LIMS and run it from the Projects Dashboard.
Before removing samples from the workflows, make sure you have the following items:
A project containing at least one sample assigned to a workflow.
A step has been run on a sample, resulting in a derived sample.
The derived sample is associated with one or more workflows.
The attached UnassignSamplesFromWorkflows.groovy script uses the derived sample automations feature to remove selected derived samples from their associated workflows. The following actions must be done when removing samples from these workflows.
The getSampleNodes function is passed a list of derived sample LIMS IDs (as a command-line argument) and builds a list containing the XML representations of the samples. A for-each loop on the derived sample list makes a GET call for each sample and creates the sample node list. The following example shows how the getSampleNodes function works:
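A sketch of how such a function might look; the helper shape follows the description above, with hostname, username, and password assumed defined:

```groovy
// Build the list of sample XML nodes from derived sample LIMS IDs
def getSampleNodes(List limsIDs, String hostname, String username, String password) {
    limsIDs.collect { limsID ->
        GLSRestApiUtils.httpGET("${hostname}v2/artifacts/${limsID}", username, password)
    }
}

// eg, with the LIMS IDs passed as trailing command-line arguments:
def sampleNodes = getSampleNodes(args as List, hostname, username, password)
```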
This list is used to retrieve the sample URIs and the workflow-stage URIs. These URIs are required to build the unassignment XML.
The for loop does the following actions:
Makes a GET call for each workflow-stage to which the passed sample is assigned.
Retrieves the associated workflow URIs.
Returns a list containing all URIs for the workflows with which the sample is associated.
Now that the functions used to retrieve both the derived sample URIs and the workflow URIs have been built, you can use StreamingMarkupBuilder to create the XML and then POST to the unassignment URI. This process can be done with the unassignSamplesFromWorkflows and unassignSamplesXML functions.
To unassign the derived samples, you can POST to the artifacts URI at ${hostname}v2/route/artifacts. Nested loops create the declaration for each sample and their associated workflows. The following example shows the declaration built in the format of the workflow URI, with the unassign flag followed by the URI of the sample being unassigned.
Now that the XML is built, convert the XML to a node and post it as follows.
Use GLSRestApiUtils to convert the XML to a node.
POST the node using the following command:
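Putting these pieces together, a hedged sketch follows. The rt namespace URI and the workflowURIsBySample map are illustrative; verify the routing namespace against your server's API documentation:

```groovy
import groovy.xml.StreamingMarkupBuilder

// Build the unassignment XML: one unassign block per workflow, each
// listing the URIs of the artifacts to remove from that workflow
def builder = new StreamingMarkupBuilder()
def xml = builder.bind {
    mkp.declareNamespace(rt: 'http://genologics.com/ri/routing')
    'rt:routing' {
        workflowURIsBySample.each { artifactURI, workflowURIs ->
            workflowURIs.each { wfURI ->
                unassign('workflow-uri': wfURI) {
                    artifact(uri: artifactURI)
                }
            }
        }
    }
}.toString()

// Convert the XML string to a node and POST it to the routing endpoint
def routingNode = GLSRestApiUtils.xmlStringToNode(xml)
GLSRestApiUtils.httpPOST(routingNode, "${hostname}v2/route/artifacts", username, password)
```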
Configure and run the automation in Clarity LIMS as follows.
In Clarity LIMS, under Configuration, select the Automation tab.
Select the Derived Sample Automation tab.
Select New Automation and enter the following information:
Automation Name—This is the name that displays to the user running the automation from the Projects Dashboard. Choose a descriptive name that reflects the functionality/purpose (eg, Remove from Workflows).
Channel Name—Enter the channel name.
Command Line—Enter the command line required to invoke the script.
Select Save.
Run the automation as follows.
Open the Projects Dashboard.
Select a project containing in-progress samples. Select In-progress samples.
In the sample list, you see the submitted and derived samples that are currently in progress for this project.
Select one or more derived samples.
Selecting samples activates the Action button and drop-down list.
In the Action drop-down list, select the Remove From Workflows automation created in the previous step.
In the API, the selected samples now show an additional workflow stage with a status of REMOVED.
UnassignSamplesFromWorkflows.groovy:
As samples are processed in the lab, they are kept in a container. Some of these containers hold multiple samples, and lab scientists often must switch between container tracking and sample tracking.
If you process several containers each day and track them in a list, you need to find which samples are in those containers so that you can record the results of these container-based activities against the correct samples in Clarity LIMS.
This example finds which sample is in a given well of a multi-well container using Clarity LIMS and the API (v2 r21 or later).
Before you follow the example, make sure that you have the following items:
Several samples exist in Clarity LIMS.
A step has been run on the samples.
The outputs of the step have been placed in a 96-well plate.
Clarity LIMS captures detailed information for a container (eg, its name, LIMS ID, and the names of the samples in each of its wells). Information about the container and what it currently contains is available in the individual XML resource for the container.
The individual container resource contains a placement element for each sample placed on the container. Each placement element has a child element named value that describes one position on the container (eg, the placement elements for a 96-well plate include A:1, B:5, E:2).
In the script, the GET request retrieves the container specified by the container LIMS ID provided as input to the {containerLIMSID} parameter. The XML representation returned from the API is stored as the value of the container variable:
The following example shows the XML format returned for a container. The XML includes a placement element for each artifact that is placed in a well location in the container.
To find the artifact at the target location, the script searches through the placement elements for one with a value element that matches the target. If a match is found, it is stored as the value of the contents variable.
The uri attribute of the matching placement element is the URI of the artifact that is in the target well location. This is stored as the value of the artifactURI variable, and printed as the output of the script:
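A minimal sketch, assuming container holds the node from the GET above and targetLocation holds a well string such as B:5:

```groovy
// Search the placement elements for one whose value matches the target well
def contents = container.placement.find { it.value.text() == targetLocation }

// The uri attribute of the matching placement is the artifact in that well
def artifactURI = contents?.@uri
println artifactURI
```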
Running the script in a console prints the URI of the artifact at the target location.
GetContentsOfWellLocation.groovy:
When a lab processes samples, the samples are always in a container of some sort (eg, a tube, a 96-well plate, or a flow cell). In Clarity LIMS, this processing is modeled by placing all samples into containers. Because the Clarity LIMS interface relies on container placement for the display of many of its screens, adding containers is a critical step when running a process or adding samples through the API (v2 r21 or later).
The following example demonstrates how to add an empty container, of a predefined container type, to Clarity LIMS through the API.
If you would like to add a batch of containers to the system, you can increase the script execution speed by using batch operations. For more information, refer to the related batch operations articles.
Before you can add a container to the system, you must first define the container to be created. You can construct the XML that defines the container using StreamingMarkupBuilder, a built-in Groovy data structure designed to build XML structures.
To construct the XML, you must declare the container namespace because you are building a container. The minimum information needed to create a container is the container name and container type.
If you also want to add custom field values to the container you are creating, you must declare the userdefined namespace.
NOTE: As of Clarity LIMS v5, the term user-defined field (UDF) has been replaced with custom field in the user interface. However, the API resource is still called udf.
The POST command posts the XML constructed by StreamingMarkupBuilder to the containers resource of the API. The POST command also adds a link from the containers URI (the list of containers) to the new container.
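A hedged sketch of both steps; the container type number and name are illustrative (valid types can be looked up under the containertypes list resource):

```groovy
import groovy.xml.StreamingMarkupBuilder

// Build the minimal container XML: a name and a container type
def builder = new StreamingMarkupBuilder()
def xml = builder.bind {
    mkp.declareNamespace(con: 'http://genologics.com/ri/container')
    'con:container' {
        name('Example Container')
        type(uri: "${hostname}v2/containertypes/1", name: '96 well plate')
    }
}.toString()

// POST the new container to the containers list resource
def containerNode = GLSRestApiUtils.xmlStringToNode(xml)
containerNode = GLSRestApiUtils.httpPOST(containerNode, "${hostname}v2/containers", username, password)
```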
The XML for the new container is as follows.
The XML for the list of containers, with the newly added container shown at the end of the list, is as follows.
For Clarity LIMS v5 and above, the Operations Interface Java client has been deprecated, and there is no equivalent Containers view screen in which to view empty containers added via the API. However, if you intend to add samples to Clarity LIMS through the API, this example is still relevant, as you must first add containers in which to place those samples.
PostContainer.groovy:
As processing occurs in the lab, associated processes and steps are run in Clarity LIMS. Often, key data must be recorded for the derived samples (referred to as analytes in the API) generated by these steps.
The following example explains how to change the value of an analyte UDF/global custom field.
If you would like to update a batch of output derived samples (analytes), you can increase the script execution speed by using batch operations. For more information, see the related batch operations example.
In Clarity LIMS v5 or later, the key data fields are configured as global custom fields on derived samples. Make sure you have the following items:
A defined global custom field named Library Size on the Derived Sample object.
A configured Library Prep step to apply Library Size to generated derived samples.
A Library Prep process that has been run and has generated derived samples.
As of Clarity LIMS v5, the term user-defined field (UDF) has been replaced with custom field in the user interface. However, the API resource is still called UDF.
There are two types of custom fields:
Master step fields—Configured on master steps. Master step fields only apply to the following:
The master step on which the fields are configured.
The steps derived from those master steps.
Global fields—Configured on entities (eg, submitted sample, derived sample, measurement, etc.). Global fields apply to the entire Clarity LIMS system.
In Clarity LIMS v5 and later, the Record Details screen displays the information about the derived samples generated by a step. You can view the global fields associated with the derived samples in the Sample Table.
The following screenshot shows the Library Size values for the derived samples.
Derived sample information is stored in the API in the analyte resource. Step information is stored in the process resource. Each global field value is stored as a udf element.
An analyte resource contains specific derived sample details that are recorded in lab steps. Those details are typically stored in global custom fields (configured in Clarity LIMS on the Derived Sample object) and then associated with the step.
When you update the information for a derived sample by updating the analyte API resource, only the global fields that are associated with the step can be updated.
To update the derived samples generated by a step, you must first request the process resource through a GET method.
The following GET method provides the full XML structure for the step:
The process variable now holds the complete XML structure returned from the GET request.
The XML returned from a GET on the process resource contains the URIs of the process output artifacts (the derived samples generated by the step). You can use these URIs to query for each individual artifact resource.
The process resource contains many input-output-map elements, where each element represents an artifact. The following snippet of the XML shows the process:
Because processes with multiple inputs and outputs tend to be large, many of the input-output-map nodes have been omitted from this example.
After you have retrieved each individual artifact resource, you can update the UDFs/custom fields for each output analyte.
Request the analyte output resource and update the UDF/custom field as follows.
If the output-type is analyte, then run through each input-output-map and request the output artifact resource.
Use a GET to return the XML for each artifact and store it in a variable.
When you have the analytes stored, change the analyte UDF/custom field through the following steps:
The UDF/custom field change in the XML.
The http PUT call to update the artifact resource.
The UDF/custom field change can be achieved with the Library Size UDF/custom field XML element defined in the following code. In this example, the Library Size value is updated to 25.
The PUT method updates the artifact resource at the specified URI using the complete XML representation, including the UDF/custom field. The setUdfValue method of the util library is used to perform this in a safe manner.
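A minimal sketch, assuming outputURI came from the input-output-map above and the utils signatures used in the other examples:

```groovy
// Strip the state so the GET (and later PUT) uses the current state
def artifactURI = outputURI.tokenize('?')[0]
def analyte = GLSRestApiUtils.httpGET(artifactURI, username, password)

// Change the Library Size UDF/custom field to 25, then PUT the full XML back
analyte = GLSRestApiUtils.setUdfValue(analyte, 'Library Size', '25')
def returnNode = GLSRestApiUtils.httpPUT(analyte, artifactURI, username, password)
```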
The output-type attribute is the user-defined name for each of the output types generated by a process/step. This is not equivalent to the type element of an artifact whose value is one of several hard-coded artifact types.
If you must filter inputs or outputs from the input-output-map based on the artifact type, you must GET each artifact in question to discover its type.
It is important that you remove the state from each of the analyteURIs before you GET them, to make sure that you are working with the most recent state.
Otherwise, when you PUT the analyteURI back with your UDF/custom field changes, you can inadvertently revert information, such as QC, volume, and concentration, to previous values.
The results can be reviewed in a web browser through the following URI:
In Clarity LIMS v5 or later, in the Record Details screen, the Sample table now shows the updated Library Size.
UpdateProcessUDFInfo.groovy:
UpdateUDFAnalyteOutput.groovy:
This example is similar to the previous process-execution example. The differences are that this example has minimal input/output and posts a reference to a predefined EPP process-parameter.
In addition, this example requires a container in which to store the results of the process execution. An example of how to do this is included in the attached Groovy script.
In Clarity LIMS, under Lab View, select the protocol you created earlier.
For more information on these files, see the related documentation.
As shown in the earlier example, you can add a container by using StreamingMarkupBuilder to create the XML for a new container. This creates the XML that is used in a POST to the containers resource:
A sample can be associated with one or many workflows, and each derived sample has a list of the workflow stages to which it is assigned. Making a GET call on each workflow-stage URI retrieves its XML representation, from which the workflow URI can be acquired and added to a list. The getWorkflowURIs function is called for each sample node included in the list (eg, with the sampleURIList, username, and password from the earlier example).
Property Name | Usage | Example Value | Required | Description |
---|---|---|---|---|
Directory | ${baseName}.dir | /limsdata | True | The highest level directory in which it is valid to access files. Files outside this directory are not attached. |
Hostname/IP | ${baseName}.host | YourHTTPHost | True | The hostname or IP address to use when accessing the files. |
Port | ${baseName}.port | 80 | True | The port to use when accessing the files. |
Scheme | ${baseName}.scheme | http | True | The scheme of the URI used to access the files. Examples are HTTP, HTTPS, FTP, and SFTP. |
Domain | ${baseName}.domain | YourAuthDomain | False | The domain to use when authenticating access to the files. |
Username | ${baseName}.user | fileUser | False | The username to use when authenticating access to the files. |
Password | ${baseName}.password | filePassword | False | The password to use when authenticating access to the files. |
It is highly recommended that you encrypt your password. See the following section for details.
Reagent labels are artifact resource elements and can be applied using a PUT. To apply a reagent label to an artifact using REST, the following steps are required:
GET the artifact representation.
Insert a reagent-label element with the intended label name.
PUT the modified artifact representation back.
You can apply the reagent label to the original analyte (sample) artifact or to a downstream sample or result file.
Before you follow the example, make sure that you have the following items:
Reagent types that are configured in Clarity LIMS and are named index 1 through index 6.
Reagents of type index 1 through index 6 that have been added to Clarity LIMS.
A compatible version of API (v2 r14 to v2 r24).
In this example, you start with the artifact XML returned from the GET request and insert a reagent-label element. The result resembles the following XML.
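A hedged illustration of the modified artifact XML (surrounding elements elided):

```xml
<art:artifact xmlns:art="http://genologics.com/ri/artifact" uri="...">
  <name>Sample-1</name>
  <type>Analyte</type>
  <!-- inserted element; the name matches an Index reagent type -->
  <reagent-label name="Index 1"/>
</art:artifact>
```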
Although it is not mandatory, it is recommended that you name reagent labels after reagent types using the Index special type. This allows you to relate the reagent label back to its sequence.
In the BaseSpace Clarity LIMS web interface, in the Custom Fields configuration screen, administrators can add user-defined information by adding custom fields (global fields or master step fields). At this time, user-defined types (UDTs) are only supported in the API.
Use these custom fields to configure storage locations for data that annotates project, submitted sample, step, derived sample, measurement, and file information recorded in a workflow.
All XML element attributes and values are text. Before using a value in a script, you may want to convert it to a strongly typed variable, such as a number or date.
For details on the formats used in XML, see Working with User-Defined Fields (UDF) and Types (UDT).
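For example, a minimal conversion sketch (the field name is illustrative):

```groovy
// UDF values arrive as text; convert before doing arithmetic
def sizeText = artifact.'udf:field'.find { it.@name == 'Size (bp)' }?.text()
def size = sizeText ? sizeText.toInteger() : null
```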
When updating multiple process outputs or containers, you can increase the script execution speed by using batch operations.
When samples are processed in the lab, they generally produce child samples that are altered in some way. Eventually, the samples are analyzed on an instrument, with the result being a data file. Often these data are analyzed further, which produces additional data files.
The sample processing that occurs in the lab is modeled as steps in the Clarity LIMS web interface. In the REST API (v2 r21 or later), this processing is modeled as processes, and the samples and files that are processed are represented as artifacts. Understanding the representation of inputs and outputs within the XML for an individual process is critical to being able to use the REST API effectively.
If you are using Clarity LIMS v5 or later, make sure that you have done the following actions:
Added samples to the LIMS.
Configured a step that generates derived samples in the Lab Work tab.
Configured a file placeholder for a sample measurement file to be generated and attached by an automation script at run time. This configuration is done in the Master Step Settings of the step on the Record Details milestone.
Configured an automation that generates the sample measurement file and have enabled it on the step. This configuration is done in the Automation tab.
Configured the automation triggers. This configuration is done in the Step Settings screen, under the Record Details milestone.
Run the step on some samples.
As of Clarity LIMS v5, the Operations Interface Java client has been deprecated. In LIMS v5 and later, there is no equivalent screen to the Input/Output Explorer where you can select step inputs/outputs and generated files and view their corresponding inputs/outputs and files.
However, the following API code example is still relevant and will produce the same results.
The first step in this example is to request the individual process resource through a GET method. The full XML representation returned includes the input-output-map.
To illustrate the relationships between the inputs and outputs, you can save them using a Groovy Map data structure. This maps the output LIMS IDs to a list of input LIMS IDs associated with each output, as shown in the following example:
The process variable now holds the complete XML structure returned from the processURI.
In the following example XML snippet, elements of the input-output-map are labeled with <input-output-map>:
All the input and output URIs include a ?state=<number> query parameter. State allows Clarity LIMS to track historical values for QC, volume, and concentration, so you can compare the state of an analyte before and after a process was run. However, when you make changes to an artifact, you should always work with the most current state.
To make sure that you are getting the current state when you do a GET request, simply remove the state from the artifact URI.
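For example:

```groovy
// Drop the ?state=... query so the GET returns the artifact's current state
def currentURI = artifactURI.tokenize('?')[0]
def artifact = GLSRestApiUtils.httpGET(currentURI, username, password)
```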
You can examine each input-output-map to find details about the relationship represented between inputs and outputs. The following code puts the output and input LIMS IDs into an array named outputToInputMap.
As the output type is also important for further processing, outputToInputMap is formatted as follows:
If the output is shared for all inputs (eg, the sample measurement file with LIMS ID 92-13007), the inputs to the process are listed. If the output relates to an individual input, only the LIMS ID for that particular input will be listed.
Outputs are listed in multiple input-output-map elements when multiple inputs generate them. The first time any particular output LIMS ID is seen, the output type and input LIMS ID in the input-output-map are added to the list, stored in outputToInputMap.
If the output LIMS ID already has a list in outputToInputMap, then the code adds input LIMS ID to the list.
One way to access the information is to print it out. You can run through each key-value pair and print the information it contains, as shown in the following example:
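A sketch of both the map construction and the printout, using the names described above (process holds the XML from the earlier GET):

```groovy
// Map each output LIMS ID to its output type plus the input LIMS IDs
// that generated it
def outputToInputMap = [:]
process.'input-output-map'.each { iom ->
    def outputLUID = iom.output[0].@limsid
    def outputType = iom.output[0].@'output-type'
    def inputLUID = iom.input[0].@limsid
    if (outputToInputMap.containsKey(outputLUID)) {
        outputToInputMap[outputLUID] << inputLUID
    } else {
        outputToInputMap[outputLUID] = [outputType, inputLUID]
    }
}

// Print each output with the inputs used to generate it
outputToInputMap.each { outputLUID, info ->
    println "${info[0]} ${outputLUID} was generated from inputs: ${info.tail()}"
}
```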
After running the script on the command line, an output similar to the following will be generated, whereby the inputs used to generate each output are listed.
GetProcessInputOutput.groovy:
As samples are processed in the lab, substances are moved from one container to another. Because container locations are sometimes used to reference the sample in data files, tracking the location of these substances within containers is one of the key values that Clarity LIMS provides to the lab.
Within the REST API (v2 r21 or later), analytes represent the substances on which processes/steps are run. These analytes are the substances that are chemically altered and transferred between containers as samples are processed in the lab.
Each individual sample resource has an analyte artifact that describes its container location and is used to run processes.
In Clarity LIMS, steps are not run on the original submitted samples, but are instead run on (and can also generate) derived samples. In the API, derived samples are known as analytes. Each sample resource, which is the original submitted sample in Clarity LIMS, has a corresponding analyte that is used for running processes/steps and describing placement in a container.
For more information on analyte artifacts and other REST resources, see Structure of REST Resources.
For all Clarity LIMS users, make sure you have done the following actions:
Added a sample to Clarity LIMS.
Run a process/step on the sample, with the same process/step generating a derived sample output.
Added the generated derived sample to a multi-well container (eg, a 96-well plate).
The container location information for an individual derived sample/analyte is located within the XML for the individual artifact resource. Because artifacts are generated by running steps in the LIMS, this is a logical place to keep track of the location.
Within a script, you can use a GET method to request the artifact. The resulting XML structure contains all the information related to the artifact, including its container and well location.
In this example, a derived sample named Brain-600 is placed in well A:1 of a container with LIMS ID 27-1259. This information is found in the location element.
The location element has two child elements:
One linking to the container URI, which specifies which container the analyte is in.
One for the well location, which has the name 'value' in the XML structure.
Valid values for a well location can be either numeric or alphabetic, and are determined by the configuration of the container in Clarity LIMS.
Well locations are always represented in the row:column format. For example, a 96-well plate can have locations A:1 and C:12, and a tube can have a single well called 1:1.
The following example shows the XML returned for the artifact:
Because the container position is structured in the row:column format, you can store the row and column in separate variables by splitting the container position on the colon character. You can access the string value of the location value node using the text() method, as shown in the following code:
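A minimal sketch:

```groovy
// The location value has the row:column form, eg "A:1"
def locationValue = artifact.location.value[0].text()
def (row, column) = locationValue.tokenize(':')
println "Row: ${row}, Column: ${column}"
```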
Running the script in a console produces the following output:
GetContainerAnalyteLocation.groovy:
Derived sample automations are automations that users can run on derived samples directly from the Projects Dashboard in Clarity LIMS.
The following example uses an automation to initiate a script that requeues samples to an earlier step in the workflow. The example also describes the main functions included in the script and demonstrates the configuration options that prompt the user for input. These options allow for greater flexibility during script runs. Before you follow the example, make sure that you have the following items:
A project containing samples assigned to a multi-stage workflow.
Samples that must be requeued. These samples must have completed at least one step in the workflow and must be available for requeue.
The purpose of the attached RequeueSamples.groovy script is to requeue selected derived samples to a previous step in the workflow using the derived sample automations feature.
The getSampleNodes function is passed a list of derived sample LIMS IDs (as a command-line argument) to build a list containing the XML representations of the samples. The resulting sample URI list can then be used with a batchGET to return the sample nodes:
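A sketch, assuming limsIDs holds the parsed command-line LIMS IDs and that the utils expose the batchGET helper referenced above (otherwise a loop of single GETs achieves the same result):

```groovy
// Build artifact URIs from the LIMS IDs, then retrieve the nodes in one batch
def sampleURIList = limsIDs.collect { "${hostname}v2/artifacts/${it}" }
def sampleNodes = GLSRestApiUtils.batchGET(sampleURIList, username, password)
```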
To retrieve the workflow URI, you can URL-encode the workflow name and use the result to query the workflows resource:
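A sketch, assuming the workflows list resource supports a name filter:

```groovy
// URL-encode the workflow name and query the workflows configuration list
def encoded = URLEncoder.encode(workflowName, 'UTF-8')
def workflows = GLSRestApiUtils.httpGET(
    "${hostname}v2/configuration/workflows?name=${encoded}", username, password)
def workflowURI = workflows.workflow[0]?.@uri
```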
Stage names are guaranteed to be unique within each workflow. However, they may not be unique across the Clarity LIMS system. As a result, the stage URI cannot be queried for in the same way as the workflow URI.
Instead, the getStageURI function navigates through the workflow node to find the stage that matches the specified stage name. If a match is found, the function returns the stage URI.
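A sketch of such a lookup, assuming the workflow XML nests stage elements under a stages element:

```groovy
// Walk the workflow node's stages to find the one matching stageName
def getStageURI(workflowNode, String stageName) {
    workflowNode.stages.stage.find { it.@name == stageName }?.@uri
}
```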
Next, you must make sure that each sample meets the criteria to be requeued using the canRequeue function. The following method checks all workflow stages for the samples:
If a match is found between a workflow stage URI and the stage URI specified, the sample node is added to a list of samples that can be requeued using the requeueList function.
If all the samples have this match and a status that allows for requeue, the list is returned. Otherwise, the script exits with an error message identifying the first sample that caused the failure.
In this example, both unassignment from and assignment to a workflow stage must occur to complete the requeue. As the samples are requeuing to a previous stage in the workflow and can currently be queued for another stage, you must remove them from these queues.
The getCurrentStageURI and lastStageRun functions check the sample node for its most recent workflow stage. If the node is in a queued status, it returns that stage URI to be unassigned.
Using the previous methods and their results, the following code uses Streaming Markup Builder and the assignmentXML function to build the XML to be posted:
The returned XML node is then posted using httpPOST.
Add and configure the automation
In Clarity LIMS, under Configuration, select the Automation tab.
Select the Derived Sample Automation tab.
Select New Automation and enter the following information:
Automation Name—This is the name that displays to the user running the automation from the Projects Dashboard. Choose a descriptive name that reflects the functionality/purpose (eg, Requeue Samples).
Channel Name—Enter the channel name.
Command Line—Enter the command line required to invoke the script.
Select Save.
Run the automation as follows.
Open the Projects Dashboard.
Select a project containing in-progress samples. Select In-progress samples.
In the sample list, you will see all of the submitted and derived samples that are currently in progress for this project.
Select one or more derived samples. Selecting samples activates the Action button and drop-down list.
In the Action drop-down list, select the Requeue Samples automation.
In this example, the -w and -t {userinput} options invoke a dialog box on automation trigger. The user is required to enter two parameters: the full name of the stage and the workflow for which selected samples are to be requeued. The names must be enclosed in quotation marks.
If the requeue is successful, each requeued sample is marked with a complete tag. Hovering over a sample shows a more detailed message.
RequeueSamples.groovy:
The following example shows you how to remove information from a project using Clarity LIMS and API (compatible with v2 r21 and later).
As of Clarity LIMS v5, the term user-defined field (UDF) has been replaced with custom field in the user interface. However, the API resource is still called UDF.
There are two types of custom fields:
Master step fields—Configured on master steps. Master step fields only apply to the following:
The master step on which the fields are configured.
The steps derived from those master steps.
Global fields—Configured on entities (eg, submitted sample, derived sample, measurement, etc.). Global fields apply to the entire Clarity LIMS system.
Before you follow the example, make sure that you have the following items:
A user-defined field (UDF) / custom field named Objective is defined for projects.
A project name that is unique and does not exist in the system.
This example does the following actions:
POST a new project to the LIMS, with a UDF / custom field value for Objective.
Remove a child XML node from the parent XML representing the project resource.
Update the project resource.
First, set up the information required to perform a successful project POST. The project name must be unique.
The projectNode should contain the response XML from the POST and resemble the following output:
The following code removes the child XML node <udf:field> from the parent XML node <prj:project>:
If multiple nodes of the same type exist, [0] is the first item in this list of same typed nodes (eg, 0 contains 1st item, 1 contains 2nd item, 2 contains 3rd item, and so on).
To remove the 14th udf:field, you would use projectNode?.children()?.remove(projectNode.'udf:field'[13])
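A minimal sketch of the removal and update, assuming the utils signatures used in the other examples:

```groovy
// Remove the first udf:field child (the Objective value) from the project
projectNode?.children()?.remove(projectNode.'udf:field'[0])

// PUT the modified XML back to update the project resource
projectNode = GLSRestApiUtils.httpPUT(projectNode, projectNode.@uri, username, password)
```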
RemoveChildNode.groovy:
The large capacity of current Next Generation Sequencing (NGS) instruments means that labs are able to perform multiplexed experiments with multiple samples pooled into a single lane or region of the container. Before being pooled, samples are assigned a unique tag or index. After sequencing and initial analysis are complete, the sequencing results must be demultiplexed to separate data and relate the results back to each individual sample.
Clarity LIMS allows you to track a multiplexing workflow by adding reagents and reagent labels to artifacts, and then using the reagent labels to demultiplex the resulting files.
There are several ways to apply reagent labels. However, all methods involve creating placeholders that link the final sequences back to the original submitted samples. Either the lab scientist or an automated process must determine which file actually belongs with which placeholder. For more information on applying reagent labels, refer to Work with Multiplexing.
This example walks through assigning user-defined field (UDF)/custom field values to the demultiplexed output files based on upstream derived sample (analyte) UDF/custom field values. This includes upwards traversal of a sample history / genealogy, based on assigned reagent labels. This differs from upstream traversal based strictly upon process input-output mappings.
As of Clarity LIMS v5, the term user-defined field (UDF) has been replaced with custom field in the user interface. However, the API resource is still called UDF.
There are two types of custom fields:
Master step fields—Configured on master steps. Master step fields only apply to the following:
The master step on which the fields are configured.
The steps derived from those master steps.
Global fields—Configured on entities (eg, submitted sample, derived sample, measurement, etc.). Global fields apply to the entire Clarity LIMS system.
If you are using Clarity LIMS v5 or later, make sure you have completed the following actions:
Created a project and have added multiple samples to it.
Run the samples through a sequence of steps that perform the following:
Reagent addition / reagent label assignment
Pooling
Demultiplexing (to produce a set of per-reagent-label result file outputs).
Set a Numeric custom field value on each derived sample input to the reagent addition process.
A Numeric custom field with no assigned value exists on each of the per-reagent-label result file outputs. The value of this field will be computed from the set of upstream derived sample custom field values corresponding to the reagent label of the result file.
You also must make sure that API v2 r21 or later is installed.
Due to the complexity of NGS workflows, beginning at the top level submitted sample resource and working down to the result file is not the most efficient way to traverse the sample history/genealogy. It is easier to start with the result file artifact, and then trace upward to find the process with the UDFs/custom fields that you are looking for.
Starting from the per-reagent-label result file, you can traverse upward in the sample history using the parent process URI in the XML returned for each artifact. At each level of the sample history, the number of artifacts returned may increase due to processes that pooled individual artifacts.
In this example:
The upstreamArtifactLUIDs list represents the current set of relevant artifacts.
The foundUpstreamArtifactNodes list stores the target upstream artifact nodes found.
The sample history traversal stops at the inputs to the process that performed the reagent addition/reagent label assignment.
The traversal is executed using a while loop over the contents of the upstreamArtifactLUIDs list.
The list serves as a stack of artifacts. With each iteration of the loop, an artifact is removed from the end of the list and the relevant input artifacts to its parent process are pushed back onto the end of the list.
After the loop has executed, the foundUpstreamArtifactNodes list will contain all of the artifacts that are assigned the reagent label of interest upon execution of the next process in the sample history.
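A rough sketch of the loop's shape (not the attached script); targetLabel and the credentials are assumed defined, and the traversal stops at the unlabeled inputs to the labeling process:

```groovy
def foundUpstreamArtifactNodes = []
while (upstreamArtifactLUIDs) {
    // Pop the next artifact LIMS ID off the end of the stack
    def luid = upstreamArtifactLUIDs.remove(upstreamArtifactLUIDs.size() - 1)
    def artifact = GLSRestApiUtils.httpGET("${hostname}v2/artifacts/${luid}", username, password)

    // Climb to the parent process and inspect the inputs for this output
    def parentProcess = GLSRestApiUtils.httpGET(artifact.'parent-process'[0].@uri, username, password)
    parentProcess.'input-output-map'.each { iom ->
        if (iom.output[0].@limsid == luid) {
            def input = GLSRestApiUtils.httpGET(iom.input[0].@uri.tokenize('?')[0], username, password)
            if (input.'reagent-label'.any { it.@name == targetLabel }) {
                upstreamArtifactLUIDs << input.@limsid   // still labeled: keep climbing
            } else {
                foundUpstreamArtifactNodes << input      // input to the labeling process
            }
        }
    }
}
```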
The final step in the script assigns a value to a Numeric UDF / custom field on the per-reagent-label output result file, Mean DNA Prep 260:280 Ratio, by computing the mean value of a Numeric UDF / custom field on each of the foundUpstreamArtifactNodes, DNA prep 260:280 ratio.
First, compute the mean using the following example:
Then, set the UDF/custom field on the per-reagent-label output result file using the following example:
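A minimal sketch of both operations, using the field names above; resultFileNode is assumed to hold the per-reagent-label result file's XML:

```groovy
// Compute the mean of the upstream 'DNA prep 260:280 ratio' values
def ratios = foundUpstreamArtifactNodes.collect {
    it.'udf:field'.find { f -> f.@name == 'DNA prep 260:280 ratio' }.text().toBigDecimal()
}
def mean = ratios.sum() / ratios.size()

// Record the mean on the result file and PUT it back
resultFileNode = GLSRestApiUtils.setUdfValue(resultFileNode, 'Mean DNA Prep 260:280 Ratio', mean.toString())
GLSRestApiUtils.httpPUT(resultFileNode, resultFileNode.@uri, username, password)
```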
TraversingPooledDemuxGenealogy.groovy:
Projects contain a collection of samples submitted to the lab for a specific goal or purpose. Often, a script needs information recorded at the project level to do its task. This simple example shows an HTTP GET against a project to obtain project information as XML.
Before you follow the example, make sure you have the following items:
A project exists with name "HTTP Get Project Name with GLS Utils".
The LIMS ID of the project above, referred to as <project limsid>.
A compatible version of API (v2 r21 or later).
The easiest way to find a project in the system is with its LIMS ID.
If the project was created in the script (with an HTTP POST) then the LIMS ID is returned as part of the 201 response in the XML.
If the LIMS ID is not available, but other information uniquely identifies it, you can use the project (list) resource to GET the projects and select the right LIMS ID from the collection.
Working with list resources generally requires the same script logic, so if you need the list of projects to find a specific project, review the Find an Account Registered in the System example. That example demonstrates listing and finding resources for labs, but the same logic applies.
The first step is to determine the URI of the project:
Next, use the project LIMS ID to perform an HTTP GET on the resource, and store the response XML in the variable named projectNode:
The projectNode variable can now be used to access XML elements and/or attributes.
To obtain the project's name, ask the projectNode for the text representation of the name element:
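A minimal sketch of the three steps, with hostname, username, and password assumed defined:

```groovy
// Build the project URI from its LIMS ID and GET the resource
def projectURI = "${hostname}v2/projects/${projectLIMSID}"
def projectNode = GLSRestApiUtils.httpGET(projectURI, username, password)

// Read the text of the name element
println projectNode.name.text()
```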
GetProjectName.groovy:
Imagine that you use projects in Clarity LIMS to track a collection of sample work that represents a subset of work from a larger translational research study. The translational research study consists of several projects within the LIMS and the information about each of the projects that make up the research study is predefined in another system.
Before the work starts in the lab, you can use the information in the other system to automatically create projects. This reduces errors and means that lab scientists do not have to spend time manually entering data a second time.
This example shows how to automate the creation of a project using a script and the projects resource POST method.
As of Clarity LIMS v5, the term user-defined field (UDF) has been replaced with custom field in the user interface. However, the API resource is still called UDF.
There are two types of custom fields:
Master step fields—Configured on master steps. Master step fields only apply to the following:
The master step on which the fields are configured.
The steps derived from those master steps.
Global fields—Configured on entities (eg, submitted sample, derived sample, measurement, etc.). Global fields apply to the entire Clarity LIMS system.
Before you follow the example, make sure you have the following items:
A user-defined field (UDF) / custom field named Objective is defined for projects.
A project name that is unique and does not exist in the system.
A compatible version of API (v2 r21 or later).
Before you can add a project to the system via the API, you must construct the XML representation for the project you want to create. You can then POST the new project resource.
You can define the project XML using StreamingMarkupBuilder, a built-in Groovy data structure designed to build XML structures.
Declare the project namespace because you are building a project.
If you wish to include values for project UDFs as part of the project XML you are constructing, then you must also declare the userdefined namespace.
In the following example, the project name, open date, researcher, and a UDF / custom field named Objective are included in the XML constructed for the project.
UDFs / custom fields must be configured in Clarity LIMS before they can be set or updated using the API. You can find a list of the fields defined for a project in your system by using the resource http://youripaddress/api/v2/configuration/udfs and looking for those with an attach-to-name of 'project'.
For Clarity LIMS v5 or later, UDTs are only supported in the API.
For the POST to the projects resource to be successful, only project name and researcher URI are required. Adding more details is a good practice for keeping your system organized and understanding what must be accomplished for each project.
The following POST command adds a new project resource using the XML constructed by StreamingMarkupBuilder:
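A hedged sketch; the project name, date, researcher number, and Objective value are illustrative:

```groovy
import groovy.xml.StreamingMarkupBuilder

// Build the project XML: name, open date, researcher, and the Objective UDF
def builder = new StreamingMarkupBuilder()
def xml = builder.bind {
    mkp.declareNamespace(prj: 'http://genologics.com/ri/project')
    mkp.declareNamespace(udf: 'http://genologics.com/ri/userdefined')
    'prj:project' {
        name('Translational Study Subset A')
        'open-date'('2024-01-15')
        researcher(uri: "${hostname}v2/researchers/1")
        'udf:field'(type: 'String', name: 'Objective', 'Sequence tumor/normal pairs')
    }
}.toString()

// POST the new project resource to the projects list
def projectNode = GLSRestApiUtils.xmlStringToNode(xml)
projectNode = GLSRestApiUtils.httpPOST(projectNode, "${hostname}v2/projects", username, password)
```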
The XML returned after a successful POST of the XML built by StreamingMarkupBuilder is the same as the XML representation of the project:
PostProject.groovy:
Imagine that each month the new external accounts with which your facility works are contacted with a Welcome package. In this scenario, it would be helpful to obtain a list of accounts that have been modified in the past month.
NOTE: In Clarity LIMS v2.1 and later, the term Labs was replaced with Accounts. However, the API resource is still called labs.
Before you follow the example, make sure you have the following items:
Several accounts exist in the system.
At least one of the accounts was modified after a specific date.
A compatible version of API (v2 r21 or later).
In LIMS v6.2 and later, in the Configuration > User Management page, the Accounts view lists the account resources available.
To obtain a list of all accounts modified after a specific date, you can use a GET request on the accounts list resource and include the ?last-modified filter.
To specify the last month, a Calendar object is instantiated. This Calendar object is initially set to the date and time of the call, rolled back one month, and then passed as a query parameter to the GET call.
The first GET call returns a list of the first 500 labs that meet the date modified criterion specified. The script iterates through each lab element to look at individual lab details. For each lab, a second GET method populates a lab resource XML node with address information.
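A hedged sketch (the address element layout follows the lab resource as described; verify against your server):

```groovy
// Roll back one month and format the date for the last-modified filter
def cal = Calendar.instance
cal.add(Calendar.MONTH, -1)
def since = cal.time.format("yyyy-MM-dd'T'HH:mm:ssZ")

// First page (up to 500) of labs modified after that date
def labs = GLSRestApiUtils.httpGET(
    "${hostname}v2/labs?last-modified=${URLEncoder.encode(since, 'UTF-8')}",
    username, password)

labs.lab.each { lab ->
    // Second GET per lab to populate the full resource, including address
    def labNode = GLSRestApiUtils.httpGET(lab.@uri, username, password)
    println labNode.name.text()
    def state = labNode.'billing-address'.state.text()
    if (state) { println state }
}
```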
The REST list resources are paged. Only the first 500 items are returned when you query for a list of items (eg, http://youripaddress/api/v2/artifacts).
If you cannot filter the list, it is likely that you must iterate through the pages of a list resource to find the items that you are looking for. The URI for the next page of resources is always the last element on the page of a list resource.
In the following example, the XML returned lists three out of the four labs, excluding one due to the date filter:
One of the labs has 'WA' recorded as the state, adding a second printed line to the output:
GetLab.groovy:
Compatibility: API version 2 revision 21 and later
Important measurements and values are often calculated from other values. Instead of performing these calculations by hand, and then manually entering them into the LIMS (thereby increasing the probability of error), you can develop scripts to perform these calculations and update the data accordingly.
This example demonstrates the use of scripts and user-defined fields (UDFs) / custom fields for information retrieval and recording of calculation results in the LIMS.
NOTE:
Information about a step is stored in the process resource in the API.
Information about a derived sample is stored in the analyte resource in the API. This resource is used as the input and output of a step, and also used to record specific details from lab processing.
As of BaseSpace Clarity LIMS v5.0, the term user-defined field (UDF) has been replaced with custom field in the user interface. However, the API resource is still called udf.
Clarity LIMS v5 and later:
You have defined the following custom global fields on the Derived Sample object:
Concentration
Size (bp)
Conc. nM
You have set the three fields configured in step 1 to display in the Sample table of the Record Details screen.
You have configured a Calc. Prep step to apply Concentration and Size (bp) to generated derived samples.
You have run the Calc. Prep step and it has generated derived samples.
You have input values for the Concentration and Size (bp) fields.
You have configured a Calculation step to apply Conc. nM to generated derived samples.
You have run the Calculation step, with the derived samples generated by the Calc. Prep step as inputs, and it has generated derived samples.
First, the fields used in the calculation (the Concentration and Size (bp) UDFs / custom fields) are applied to the samples by running the Calc. Prep step. You can then enter the values for these fields into the LIMS as follows:
Clarity LIMS v5 and later:
In the Record Details screen, in the Sample table.
After the script has successfully completed, the Conc. nM results are displayed:
(LIMS v5 & later) In the Record Details screen, in the Sample table.
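The conversion formula itself is a lab convention rather than something fixed by the LIMS; a sketch using the common ng/µL-to-nM conversion for double-stranded DNA (assumed here, not taken from the attached script):

```groovy
// Read the two input fields from the analyte node (values arrive as text)
def concentration = analyte.'udf:field'.find { it.@name == 'Concentration' }.text().toBigDecimal()
def size = analyte.'udf:field'.find { it.@name == 'Size (bp)' }.text().toBigDecimal()

// Assumed conversion: ~660 g/mol per base pair of double-stranded DNA
def concNM = concentration / (660 * size) * 1000000

// Record the result and PUT the analyte back
analyte = GLSRestApiUtils.setUdfValue(analyte, 'Conc. nM', concNM.toString())
GLSRestApiUtils.httpPUT(analyte, analyte.@uri, username, password)
```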
UsingAnalyteUDFForCalculations.groovy:
Pooling steps require that each input analyte artifact (derived sample) in the step be inserted into a pool. You can automate this task by using the API steps pooling endpoint. Automation of pooling allows you to reduce error and user interaction with Clarity LIMS.
In this example, a script pools samples based on the value of the pool id user-defined field (UDF)/custom field of the artifact.
As of Clarity LIMS v5, the term user-defined field (UDF) has been replaced with custom field in the user interface. However, the API resource is still called UDF.
There are two types of custom fields:
Master step fields—Configured on master steps. Master step fields only apply to the following:
The master step on which the fields are configured.
The steps derived from those master steps.
Global fields—Configured on entities (eg, submitted sample, derived sample, measurement, etc.). Global fields apply to the entire Clarity LIMS system.
To keep this example simple, the script does not handle samples with reagent labels.
In the API, an artifact is an item generated by an earlier step. There are two types of artifacts: analyte (derived sample) and resultfile (measurement). In the Clarity LIMS web interface, the terms artifact, analyte, and resultfile have been replaced with derived sample or measurement.
Before you follow the example, make sure that you have the following items:
A configured analyte UDF/derived-sample custom field named pool id in Clarity LIMS.
Groovy that is installed on the server and accessible at /opt/groovy/bin/groovy
The GLSRestApiUtils.groovy file is stored in /opt/groovy/lib/
The WorkingWithStepsPoolingEndpoint.groovy script that is stored in /opt/gls/clarity/customextensions/
A compatible version of API (v2 r21 or later).
In Clarity LIMS, under Configuration, select the Lab Work tab.
Select an existing Pooling master step or add a new one.
On the master step configuration form, select the Pooling milestone.
On the Pooling Settings form, set the Label Uniqueness toggle switch to Off.
Select Save.
Add a new protocol.
With the protocol selected, add a new Library Pooling step based on the master step you configured.
In Clarity LIMS, under Configuration, select the Automation tab.
Add a new step automation. Associate the automation with the WorkingWithStepsPoolingEndpoint.groovy script. The command line used in this example is as follows.
bash -c "/opt/groovy/bin/groovy -cp /opt/groovy/lib /opt/gls/clarity/customextensions/WorkingWithStepsPoolingEndpoint.groovy -u {username} -p {password} -s {stepURI:v2:http}"
Enable the automation on the configured pooling master step. Select Save.
You can now configure the automation trigger on the step or the master step. If you configure the trigger on the master step, the settings will be locked on all steps derived from the master step.
On the Lab Work tab, select the library pooling step or master step.
On the Step Settings or Master Step Settings form, in the Automation section, configure the automation trigger so that the script is automatically initiated at the beginning of the step:
Trigger Location—Step
Trigger Style—Automatic upon entry
In Clarity LIMS, under Configuration, select the Lab Work tab.
Select the pooling protocol containing the Library Pooling step.
Add the Add Pool ID step that sets the pool id custom field of the samples. Move this step to the top of the Steps list.
Select the Add Pool ID step.
On the Record Details milestone, add the pool id custom field to the Sample Details table.
In Clarity LIMS, under Configuration, select the Lab Work tab.
Create a workflow containing the configured pooling protocol. Activate the workflow.
On the Projects and Samples screen, create a project and add samples to it. Assign the samples to your pooling workflow.
Begin working on the samples. In the first step, enter values into the pool id custom field.
Continue to the Library Pooling step and add samples to the Ice Bucket. Select Begin Work to execute the script.
The script is passed the URI of the pooling step. Then, using the URI, the pool node of the step is retrieved. This node contains an available-inputs node that lists the URIs of the available input artifacts.
The script retrieves all available input artifacts, and then iterates through the list of retrieved artifacts. For each artifact, the script looks for the pool id custom field. If the field is not found, the script moves on to the next artifact. If the field is found, its value is stored in the poolID variable.
When the script encounters a new pool ID, it creates a new pool with a name equal to that ID. Input artifacts are sorted into pools based on the value of their pool id field, and as they are inserted into pools, they are removed from the list of available inputs.
After all of the available inputs are iterated through, the updated pool node is sent back to Clarity LIMS:
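A rough sketch of that flow (not the attached script), assuming the stp:pools layout with available-inputs and pooled-inputs elements:

```groovy
// GET the pools node for the step
def poolsNode = GLSRestApiUtils.httpGET("${stepURI}/pools", username, password)

// Group the available input URIs by each artifact's 'pool id' field
def inputsByPoolID = [:]
poolsNode.'available-inputs'.input.each { input ->
    def artifact = GLSRestApiUtils.httpGET(input.@uri.tokenize('?')[0], username, password)
    def poolID = artifact.'udf:field'.find { it.@name == 'pool id' }?.text()
    if (poolID) {
        inputsByPoolID.get(poolID, []) << input.@uri
    }
}

// Create one pool per pool id under pooled-inputs
def pooledInputs = poolsNode.'pooled-inputs'[0]
inputsByPoolID.each { poolID, uris ->
    def pool = pooledInputs.appendNode('pool', [name: poolID])
    uris.each { pool.appendNode('input', [uri: it]) }
}

// Remove the pooled inputs from available-inputs, then send the node back
def pooledURIs = inputsByPoolID.values().flatten() as Set
poolsNode.'available-inputs'[0].children().removeAll { it.@uri in pooledURIs }
GLSRestApiUtils.httpPUT(poolsNode, poolsNode.@uri, username, password)
```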
Artifacts with the same Pool ID UDF / custom field will be automatically added to the same pool.
WorkingWithStepsPoolingEndpoint.groovy:
GLSRestApiUtils.groovy:
A common requirement in applications involving indexed sequencing is to determine the sequence corresponding to a reagent label. This example shows how to configure index reagent types, which you can then use to find the sequence for a reagent label. Before you follow the example, make sure that you have a compatible version of API (v2 r14 to v2 r24).
Reagents and reagent labels are independent concepts in the API. However, the recommended practice is to name reagent labels after reagent types. This allows you to use the label name to look up the sequence information on the reagent type resource. This practice is consistent with the Operations Interface process wizards. When a reagent is applied to a sample in the user interface, a reagent label with the same name as the reagent type is added to the analyte resource.
The following actions are also recommended:
Configure an index reagent type with the correct sequence for each type of index or tag you plan to use.
Use the names of the index reagent types as reagent labels.
Following these practices allows you to find the sequence for a reagent label by looking up the sequence in the corresponding reagent type.
For each index or tag you plan to use in indexed sequencing, configure a corresponding index reagent type as follows.
As administrator, click Configuration > Consumables > Labels.
Add a new label group.
Then, to add labels to the group:
Download a template label list (Microsoft® Excel® file) from the Labels configuration screen.
Add reagent type details to the downloaded template.
Upload the completed label list.
After you have configured reagent types for each indexing sequence you intend to use, and have used those reagent type names as reagent label names, you can easily retrieve the corresponding sequence using the REST API.
The following code snippet shows how to retrieve the index sequences (when available):
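A hedged sketch, assuming artifact holds the labeled artifact's XML node and that the reagent type stores its sequence in a special-type attribute named Sequence:

```groovy
artifact.'reagent-label'.each { label ->
    // Look up the reagent type with the same name as the label
    def matches = GLSRestApiUtils.httpGET(
        "${hostname}v2/reagenttypes?name=${URLEncoder.encode(label.@name, 'UTF-8')}",
        username, password)
    def rtURI = matches.'reagent-type'[0]?.@uri
    if (rtURI) {
        def reagentType = GLSRestApiUtils.httpGET(rtURI, username, password)
        def seqAttr = reagentType.'special-type'.'attribute'.find { it.@name == 'Sequence' }
        println "${label.@name}: ${seqAttr?.@value}"
    }
}
```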
For an artifact labeled with Index 1, this would produce the following information:
RetrievingReagentLabelIndex.groovy:
Lab scientists must understand the priority of the samples they are working with. To help them prioritize their work, you can rename the derived samples generated by a step so that they include the priority assigned to the original submitted sample.
If you would like to rename a batch of derived samples, you can increase the script execution speed by using batch operations. You can also use a script to rename a derived sample after a step completes.
If you are using Clarity LIMS v5 and later, make sure that you have done the following actions:
Added samples to the system.
Defined a global custom field named Priority on the Submitted Sample object. The field should have default values sp1, sp2, and sp3, and it should be enabled on a step.
Run samples through the step with the Priority of each sample set to sp1, sp2, or sp3.
In this example, six samples have been added to a project in Clarity LIMS. The submitted sample names are Heart-1 through Heart-6. The samples are run through a step that generates derived samples, and the priority of each sample is set.
By default, the name of the derived samples generated by the step would follow the name of the original submitted samples as shown in the Assign Next Steps screen of the step.
This example appends the priority of the submitted sample to the name of the derived sample output. The priority is defined by the Priority sample UDF (in Clarity LIMS v4.2 or earlier) or the Priority submitted sample custom field (in Clarity LIMS v5 or later).
Renaming the derived sample consists of the following steps:
Request the step information (process resource) for the step that generated the derived sample (analyte resource).
Request the individual analyte resource for the derived sample to be renamed.
Request the sample resource linked from the analyte resource to get the submitted sample UDF/custom field value to use for the update.
Update the individual analyte output resource with the new name.
When using the REST API, you will often start with the LIMS ID for the step that generated a derived sample. The key API concepts are as follows.
Information about a step is stored in the process resource.
In general, automation scripts access information about a step using the processURI, which links to the individual process resource. The input-output-map in the XML returned by the individual process resource gives the script access to the artifacts that were inputs and outputs to the process.
Information about a derived sample is stored in the analyte resource. This is used as the input and output of a step.
Analytes are also used to record specific details from lab processing.
The XML representation for an individual analyte contains a link to the URI of its submitted sample, and to the URI of the process that generated it (parent process).
The following GET method returns the full XML structure for the step.
The process variable now holds the complete XML structure returned from the process GET request, as shown in the following example. The URI for each analyte generated is given in the output node in each input-output-map element. For more information on the input-output-map, see View the Inputs and Outputs of a Process/Step.
Each output node has an output-type attribute that is the user-defined type name of the output. You can iterate through each input-output-map and request the output artifact resource for each output of a particular output-type.
In the code example shown below, we filter on output-type = Analyte
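A sketch of the filter, with process holding the XML from the GET above:

```groovy
process.'input-output-map'.each { iom ->
    def output = iom.output[0]
    if (output.@'output-type' == 'Analyte') {
        // Strip the state so we work with the artifact's current values
        def analyteURI = output.@uri.tokenize('?')[0]
        def analyte = GLSRestApiUtils.httpGET(analyteURI, username, password)
        // ...rename logic continues as described below
    }
}
```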
The output-type attribute is the user-defined name for each of the output types generated by a process. This is not equivalent to the type element of an artifact whose value is one of several hard-coded artifact types.
If you must filter inputs or outputs from the input-output-map based on the artifact type, you need to GET each artifact in question to discover its type.
It is important that you remove the state from each of the analyteURIs before you GET them to make sure that you are working with the most recent state. Otherwise, when you PUT the analyteURI back with your UDF changes, you can inadvertently revert information (eg, QC, volume, and concentration) to their previous values.
From the analyte XML, you can use the submitted sample URI to return the sample that maps to that analyte.
Updating Sample Information shows how to set a sample UDF/global field. To get the value of a sample UDF/global field, use the same method to find the field, and then use the .text() method to get the field value.
The value of the UDF is stored in the variable samplePriority so that it is then available for the renaming step described below.
The variable analyte holds the complete XML structure returned from a GET on the URI in the output node. The variable nameNode references the XML element in that structure that contains the artifact's name (in this example, the analyte is named Heart-1).
Renaming the derived sample consists of two steps:
The name change in the XML.
The PUT call to update the analyte resource.
The name change is performed by updating the value of the nameNode XML element.
The http PUT command updates the artifact resource using the complete XML representation, including the new name.
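A minimal sketch of both steps, with samplePriority taken from the submitted sample as described above:

```groovy
// Append the submitted sample's priority to the derived sample name
def nameNode = analyte.name[0]
nameNode.setValue("${nameNode.text()}-${samplePriority}".toString())

// PUT the complete artifact XML back to apply the rename
def returnNode = GLSRestApiUtils.httpPUT(analyte, analyteURI, username, password)
```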
After a successful PUT, the results can be reviewed in a web browser at http://yourIPaddress/api/v2/artifacts/TST110A291AP45.
The following XML resource is returned from the PUT command and is stored in returnNode.
In Clarity LIMS, the Assign Next Steps screen shows the new names for the generated derived samples.
This example shows simple renaming of derived samples based on a submitted sample UDF/global field. However, you can use step names, step UDFs (known as master step fields in Clarity LIMS v5 or later), project information, and so on, to rename derived samples and provide critical information to scientists working in the lab.
UpdateAnalyteName.groovy:
When importing sample data into Clarity LIMS using a spreadsheet, you can specify the reagent labels to be applied during the import process. To do this, you must include the reagent label names in the spreadsheet, in a column named Sample/Reagent Label.
Before you follow the example, make sure that you have the following items:
Reagent types that are configured in Clarity LIMS and are named index 1 through index 6.
Reagents of type index 1 through index 6 that have been added to Clarity LIMS.
A compatible version of the API (v2 r14 to v2 r24).
The following example spreadsheet would import six samples into the system. These samples are Sample-1 through Sample-6 with reagent labels Index 1 through Index 6:

| Sample/Name | Container/Type | Container/Name | Sample/Well Location | Sample/Reagent Label |
| --- | --- | --- | --- | --- |
| Sample-1 | 96 well plate | labeled-samples | A:1 | Index 1 |
| Sample-2 | 96 well plate | labeled-samples | A:2 | Index 2 |
| Sample-3 | 96 well plate | labeled-samples | A:3 | Index 3 |
| Sample-4 | 96 well plate | labeled-samples | A:4 | Index 4 |
| Sample-5 | 96 well plate | labeled-samples | A:5 | Index 5 |
| Sample-6 | 96 well plate | labeled-samples | A:6 | Index 6 |
Although not mandatory, it is recommended that you name reagent labels after reagent types using the Index special type. This allows you to relate the reagent label back to its sequence.
If you examine the REST API representation of the samples imported, you are able to verify the following:
The sample representation shows no indication that reagent labels were applied.
The sample artifact (the analyte artifact linked from the sample representation) indicates the label applied via the <reagent-label> element.
The following example shows how an imported sample artifact (Sample-1), with reagent label name applied (Index 1), appears when verified via the REST API:
Demultiplexing is the last step in an indexed sequencing workflow. While the specifics depend on the sequencing instrument and analysis software used, taking pooled samples through sequencing and analysis produces result files/metrics per lane/identifier tag.
These results will likely be in the form of multiple files that you can import back into Clarity LIMS. To do this, you need to set up a configured process that generates process outputs that apply to inputs per reagent label, usually in the form of ResultFile artifacts.
Before you follow the example, make sure you have the following items:
Configured reagent types named Index 1 through Index 6 in Clarity LIMS.
Reagents of type Index 1 through Index 6 in Clarity LIMS.
A compatible version of the API (v2 r14 to v2 r24).
Configure a process that generates ResultFile with process outputs that apply to inputs per reagent label. It is recommended to name your outputs in a way that clearly identifies the samples to which they correspond (eg, Results for {SubmittedSampleName}-{AppliedReagentLabels}).
Running the demultiplexing process on a labeled pooled input produces a process run in the Operations Interface, similar to the one illustrated below.
Note the following:
There were three reagent labels in the input analyte (sample) artifact. As a result, three outputs were generated (the process was configured to produce one output result file per label per input).
The names of the outputs of the demultiplexing process expose the original sample name and label.
The Operations Interface shows details of the genealogy from the downstream result file all the way back to the original sample.
While reagent labels are not explicitly exposed in the Clarity LIMS client user interface, genealogy views in the Operations Interface are aware of reagent labels and will show the true sample inheritance. As noted above, you can use the {AppliedReagentLabels} output naming variable to show the reagent labels applied to each artifact in the user interface.
Executing a demultiplexing process by issuing a process POST via the REST API is similar to the typical process execution found in Run a Process/Step.
The key difference is that when executing a demultiplexing process through the REST API, outputs per reagent label are automatically generated from the inputs provided. You do not need to explicitly specify them.
For example, when running the demultiplexing process configured against a single (pooled) sample, you could post a process execution representation like this:
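A sketch of such a POST, built with Groovy's StreamingMarkupBuilder; the process name, technician URI, and pooled artifact URI are assumptions:

```groovy
import groovy.xml.StreamingMarkupBuilder

def demuxXML = new StreamingMarkupBuilder().bind {
    mkp.declareNamespace(prx: 'http://genologics.com/ri/processexecution')
    'prx:process' {
        type('Demultiplexing')                 // hypothetical process name
        technician(uri: technicianURI)
        'input-output-map' {
            input(uri: pooledArtifactURI)      // no output element: per-reagent-label
        }                                      // outputs are generated automatically
    }
}
def returnNode = GLSRestApiUtils.httpPOST(demuxXML, processesListURI, username, password)
```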
The input-output-map only refers to inputs, not outputs, because the demultiplexing process is configured to exclusively produce outputs per reagent label.
If your process produces other outputs, such as shared or per-input outputs, you must explicitly specify input-output-maps for them.
Irrespective of whether you use the user interface or the REST API to run the demultiplexing process, the REST API representation for the process looks something like this:
For each input with reagent labels, one output was created per reagent label.
In the example, the process ran on one pooled input, and produced three outputs (the pooled input included three reagent labels). The following example shows one of the demultiplexed result file outputs:
The output contains only one reagent label, and relates only to the sample that was tagged with the same reagent label. Compare this to the case of a pooled artifact, which has several labels and relates to several samples. This level of traceability (from a demultiplexed output back to its specific original sample) is only possible because the artifacts were labeled before they were pooled.
In this example, the output name pattern of the demultiplexing process generated the artifact name "Results for SAM-3 - Index 3". You can use the {SubmittedSampleName} naming variable to show true ancestors, and the {AppliedReagentLabels} variable to show any reagent labels applied to an output.
Workflows, chemistry, hardware, and software are continually changing in the lab. As a result, you may need to determine which samples were processed after a specific change happened.
Using the processes (list) resource you can construct a query that filters the list using both process type and date modified.
Before you follow the example, make sure you have the following items:
Samples that have been added to the system.
Multiple processes of the Cookbook Example type that have been run on different dates.
A compatible version of the API (v2 r21 or later).
In Clarity LIMS, when you search for a specific step type, the search results list shows all steps of that type that have been run, along with detailed information about each one. This information includes the protocol that includes the step, the number of samples in the step, the step LIMS ID, and the date the step was run.
The following screenshot shows the search results for the step type Denature and Anneal RNA (TruSight Tumor 170 v1.0).
The list shows the date run for each step, but not the last modified date. This is because a step can be modified after it was run, without changing the date on which it was run.
To find the steps that meet the two criteria (step type and date modified), do the following:
Request a list of all steps (processes), filtered on process type and date modified.
Once you have the list of processes, you can use a script to print the LIMS ID for each process.
To request a list of all processes of a specific type that were modified after a specified date, use a GET method that uses both the ?type and ?last-modified filter on the processes resource:
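A sketch of the filtered GET, with the same helper and credential assumptions as the other examples:

```groovy
import java.text.SimpleDateFormat

// Build a date one week before now, formatted per ISO 8601
def calendar = Calendar.instance
calendar.add(Calendar.WEEK_OF_YEAR, -1)
def isoDate = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ssXXX").format(calendar.time)

def processListURI = 'http://yourIPaddress/api/v2/processes' +
    '?type=' + URLEncoder.encode('Cookbook Example', 'UTF-8') +
    '&last-modified=' + URLEncoder.encode(isoDate, 'UTF-8')

def processList = GLSRestApiUtils.httpGET(processListURI, username, password)
processList.process.each { println it.@limsid }   // LIMS ID of each match
```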
The GET call returns a list of the first 500 processes that match the filter specified. If more than 500 processes match the filter, only the first 500 are available from the first page.
In the XML returned, each process is an element in the list. Each element contains the URI for the individual process resource, which includes the LIMS ID for the process.
The URI for the list of all processes is http://yourIPaddress/api/v2/processes. In the example code, the list is filtered by appending the type and last-modified query parameters, which restrict the list to processes that are of the Cookbook Example type and were modified after the specified date.
The date must be specified in ISO 8601, including the time. In the example, this is accomplished using an instance of a Calendar object and a SimpleDateFormat object, and encoding the date using UTF-8. The date specified is one week prior to the time the code is executed.
All of the REST list resources are paged. Only the first 500 items are returned when you query for a list of items, such as http://youripaddress/api/v2/artifacts.
If you cannot filter the list, you must iterate through the pages of a list resource to find the items that you are looking for. The URI for the next page of resources is always the last element on the page of a list resource.
After requesting an individual process XML resource, you have access to a large collection of data that lets you modify or view each process. Within the process XML, you can also access the artifacts that were inputs or outputs of the process.
After running the script on the command line, output is generated showing the LIMS ID for each process in the list.
The researcher resource holds the personal details for users and clients in Clarity LIMS.
Suppose that you have a separate system that maintains the contact details for your customers and collaborators. You could use this system to synchronize the details for researchers with the details in Clarity LIMS. This example shows how to update the phone number of a researcher using a PUT to the individual researcher resource.
In the Clarity LIMS user interface, the term Labs has been replaced with Accounts. However, the API resource is still called labs and the Collaborations Interface still refers to Labs rather than Accounts. The term Contact has been replaced with Client. The API resource is still called contact.
The LabLink Collaborations Interface is not supported in Clarity LIMS v5 and later. However, because support for this interface is planned for a future release, the Collaborator user role has not been removed.
Before you follow the example, make sure you have the following items:
A defined client in Clarity LIMS.
A compatible version of the API (v2 r21 or later).
For Clarity LIMS v5 and later, in the web interface, the Users and Clients screen lists all users and clients in the system.
In the API, information for a particular researcher can be retrieved within a script using a GET call:
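A sketch of the call, assuming researcher 103 as in the browser URI shown later in this example:

```groovy
def researcherURI = 'http://yourIPaddress/api/v2/researchers/103'
def researcher = GLSRestApiUtils.httpGET(researcherURI, username, password)
```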
In this case, the URI represents the individual researcher resource for the researcher named Sue Erikson. The GET returns an XML representation of the researcher, which populates the groovy node researcher.
The XML representation of individual REST resources are self-contained entities. Always request the complete XML representation before editing any portion of the XML. If you do not use the complete XML when you update the resource, you may inadvertently change data.
The following example shows the XML returned for the Sue Erikson researcher:
Updating the telephone number requires the following steps:
Changing the telephone value in the XML.
Using a PUT call to update the researcher resource.
The new telephone number for Sue Erikson can be set via the phone element within the Groovy researcher node. The PUT command then updates the researcher resource at the specified URI using the complete XML representation, including the new phone number. A successful PUT returns the new XML in the returnNode. An unsuccessful PUT returns the HTTP response code and error message as XML in the returnNode.
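A sketch of both steps (the new number is, of course, hypothetical):

```groovy
researcher.phone[0].setValue('604-555-0199')   // hypothetical new number
def returnNode = GLSRestApiUtils.httpPUT(researcher, researcherURI, username, password)
```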
For a successful update, the resulting XML can also be reviewed in a web browser via the URI:
http://yourIPaddress/api/v2/researchers/103
In the LIMS, the updated user list should show the new phone number.
UpdateContactInfo.groovy:
As previously shown, you can update the user-defined fields/custom fields of the derived samples (referred to as analytes in the API) generated by a step. This example uses batch operations to improve the performance of that script.
As of Clarity LIMS v5, the term user-defined field (UDF) has been replaced with custom field in the user interface. However, the API resource is still called UDF.
Master step fields—Configured on master steps. Master step fields only apply to the following:
The master step on which the fields are configured.
The steps derived from those master steps.
Global fields—Configured on entities (eg, submitted sample, derived sample, measurement, etc.). Global fields apply to the entire Clarity LIMS system.
Before you follow the example, make sure that you have the following items:
A global custom field named Library Size that is configured on the Derived Sample object.
A configured Library Prep step that applies Library Size to generated derived samples.
A Library Prep step that has been run and has generated derived samples.
A compatible version of the API (v2 r21 or later).
In Clarity LIMS, the Record Details screen displays the information about the derived samples generated by a step. You can view the global fields associated with the derived samples in the Sample table.
The screenshot below shows the Library Size values for the derived samples.
Derived sample information is stored in the API in the analyte resource. Step information is stored in the process resource. Each global field value is stored as a udf.
An analyte resource contains specific derived sample details that are recorded in lab steps. Those details are typically stored in global fields, configured in the LIMS on the Derived Sample object and then associated with the step. When you update the information for a derived sample by updating the analyte API resource, only the global fields that are associated with the step can be updated.
To retrieve the process information, you can perform a GET on the created process URI, as follows:
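For example (processLIMSID is a placeholder, and the helper is assumed as in the other sketches):

```groovy
def processURI = 'http://yourIPaddress/api/v2/processes/' + processLIMSID
def process = GLSRestApiUtils.httpGET(processURI, username, password)
```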
You can now collect all of the output analytes and harvest their URIs. After the URIs have been collected, you can retrieve the analytes with a batchGET() operation. The URIs must be unique for the batch operations to succeed:
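A sketch of both steps; batchGET() is the batch-retrieve helper referred to above, so treat its exact signature as an assumption:

```groovy
def analyteURIs = process.'input-output-map'.findAll {
    it.output[0].@'output-type' == 'Analyte'
}.collect {
    it.output[0].@uri.tokenize('?')[0]   // strip any state query
}.unique()                               // batch operations require unique URIs

def analytes = GLSRestApiUtils.batchGET(analyteURIs, username, password)
```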
You can now iterate through the retrieved list of analytes and set each analyte's Library Size UDF to 25.
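A sketch, assuming the field is configured as a Numeric field:

```groovy
import groovy.xml.QName

analytes.each { analyte ->
    def udf = analyte.'udf:field'.find { it.@name == 'Library Size' }
    if (udf != null) {
        udf.setValue('25')
    } else {
        // Append the UDF element in the userdefined namespace
        def qname = new QName('http://genologics.com/ri/userdefined', 'field', 'udf')
        analyte.appendNode(qname, [type: 'Numeric', name: 'Library Size'], '25')
    }
}
```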
To update the analytes in the system, call batchPUT(). It will attempt to call a PUT for each node in the list. (Note that each node must be unique.)
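For example (again treating the helper's signature as an assumption):

```groovy
def returnNodes = GLSRestApiUtils.batchPUT(analytes, username, password)
```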
In the Record Details screen, the Sample table now shows the updated Library Size.
UsingBatchPut.groovy:
If your samples are already in Clarity LIMS, you can assign reagent labels by running the Add Multiple Reagents process/protocol step from the Clarity LIMS user interface. Adding a reagent implicitly assigns a reagent label to every sample artifact. The reagent label applied is derived from the reagent type used.
Before you follow the example, make sure that you have the following items:
Reagent types that are configured in Clarity LIMS and are named index 1 through index 6.
Reagents of type index 1 through index 6 that have been added to Clarity LIMS.
A compatible version of the API (v2 r14 to v2 r24).
For more information on working with indexes and reagent labels, see the related examples in this documentation.
The following illustrations show the Add Multiple Reagents process, as run from the Operations Interface.
In the Add Multiple Reagents wizard panel, reagents (Indexes 1 to 3) are selected and then assigned to the samples (SAM-1 to 3) in the Sample Workspace, using a click and drag process.
The cells of the Sample Workspace represent the wells of the container used for this process.
When the wizard completes, the Add Multiple Reagents process replaces the input sample artifacts with output analyte artifacts.
In the following illustration, the Name column shows the reagent labels applied to the outputs. These are generated by the default output naming pattern for the Add Multiple Reagents process: {InputItemName}-{AppliedReagentLabels}.
When running the Add Multiple Reagents process, the output analyte artifact names show the reagent label applied, as the output naming pattern in the process configuration uses the {AppliedReagentLabels} variable.
By examining the REST API representation of the Add Multiple Reagents process, you can verify the following information:
The output analyte artifacts show a reagent-label element matching the name of the reagent type used.
The input analyte artifacts are not modified and do not have reagent labels added.
The input analyte artifacts do not have a location element, as they were displaced by the outputs.
You can only determine that reagent labels were applied. You cannot determine which reagent was applied.
The following shows an example of an output from an Add Multiple Reagents process when viewed with the REST API:
Although adding a reagent to a sample automatically assigns a reagent label, reagents and reagent labels are independent concepts in Clarity LIMS. There are ways to add reagent labels that do not involve reagents, and even when using reagents, it is not possible to accurately determine the reagent used based on the reagent label attached to an artifact.
When samples are processed in the lab, they are sometimes re-arrayed in complex ways that are pre-defined.
You can use the REST API and automation functionality to allow a user to initiate a step that:
Uses a file to define a re-array pattern
Executes the step using that re-array pattern. Since the pattern is pre-defined, this will decrease the likelihood of an error in recording the re-array.
To accomplish this automation, you must be able to execute a step using the REST API. This example shows a simple step execution that you can apply to any automated step execution needed in your lab.
For a high-level overview of REST resource structure in Clarity LIMS, including how processes are the key to tracking work, see the REST resource structure overview in this documentation.
Before you follow the example, make sure that you have the following items:
Samples that have been added to the system.
A configured step/process that generates analytes (derived samples) and a shared result file.
Samples that have been run through the configured process/step.
A compatible version of the API (v2 r21 or later).
Information about a step is stored in the process resource in the API.
Information about a derived sample is stored in the analyte resource in the API. This resource is used as the input and output of a step, and also used to record specific details from lab processing.
To run a step/process on a set of samples, you must first identify the set of samples to be used as inputs.
The samples that are inputs to a step/process can often be identified because they are all in the same container, or because they are all outputs of a previous step / process.
In this example, you run the step/process on the samples listed in the following table.

| Submitted Sample Name | Derived Sample Name | Derived Sample LIMS ID | Container LIMS ID | Container Type | Well |
| --- | --- | --- | --- | --- | --- |
| Soleus-1 | Soleus-1 | AFF853A53AP11 | 27-4056 | 96 well plate | A:1 |
| Soleus-2 | Soleus-2 | AFF853A54AP11 | 27-4056 | 96 well plate | A:2 |
| Soleus-3 | Soleus-3 | AFF853A55AP11 | 27-4056 | 96 well plate | A:3 |
After you have identified the samples, use their LIMS IDs to construct the URIs for the respective analyte (derived sample) artifacts. The artifact URIs are used as the inputs in constructing the XML to POST and execute a process.
You can use StreamingMarkupBuilder to construct the XML needed for the POST, as shown in the following example code:
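A sketch of that construction, using the sample LIMS IDs from the table above; the process name, technician URI, and output container URI are assumptions:

```groovy
import groovy.xml.StreamingMarkupBuilder

def inputLIMSIDs = ['AFF853A53AP11', 'AFF853A54AP11', 'AFF853A55AP11']
def inputURIs = inputLIMSIDs.collect { "http://yourIPaddress/api/v2/artifacts/${it}" }
def containerURI = 'http://yourIPaddress/api/v2/containers/27-4100'   // hypothetical empty plate

def processXML = new StreamingMarkupBuilder().bind {
    mkp.declareNamespace(prx: 'http://genologics.com/ri/processexecution')
    'prx:process' {
        type('Cookbook Example')                              // hypothetical process name
        technician(uri: 'http://yourIPaddress/api/v2/researchers/103')
        inputURIs.eachWithIndex { inputURI, i ->
            'input-output-map' {
                input(uri: inputURI)
                output(type: 'Analyte') {
                    location {
                        container(uri: containerURI)
                        value('A:' + (i + 1))                 // well placement
                    }
                }
            }
        }
        'input-output-map' {                                  // the shared result file
            inputURIs.each { input(uri: it) }
            output(type: 'ResultFile')
        }
    }
}
```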
Executing a process uses the processexecution (prx) namespace, declared at the top of the code example above.
The required elements for a successful POST are:
type – the name of the process being run
technician uri – the URI for the technician that will be listed as running the process
input-output-map – one input output map element for each pair of inputs and outputs
input uri – the URI for the input artifact
output type – the type of artifact of the output
In addition, if the outputs of the process are analytes, then the following are also needed:
container uri – the URI for the container the output will be placed in
value – the well placement for the output
The process type, technician, input artifact, and container must all exist in the system before the process can be executed. So, for example, if there is no container with an empty well, you must create a container before running the process.
The XML constructed must match the configuration of the process type. For example, if the process is configured to have both samples and a shared result file as outputs, you must have both of the following:
An input-output-map for each pair of sample inputs and outputs
An additional input-output-map for the shared result file
If the POST is successful, the process XML is returned:
If the POST is not successful, the XML returned will contain the error that occurred when the POST completed:
After the step / process has successfully executed, you can open the Record Details screen and see the step outputs.
RunningAProcess.groovy:
In Clarity LIMS, you often need to process multiple entities. To accomplish this quickly and effectively, you can use batch operations, which allow you to retrieve multiple entities in a single interaction with the API, instead of iterating over a list and retrieving each entity individually.
Batch operations greatly improve the performance of the script. These methods are available for containers and artifacts. In this example, both entities are retrieved using the batchGet() operation. If you would like to update a batch of output analytes (derived samples), you can increase the script execution speed by using batch operations, as described in the related examples.
Before you follow the example, make sure that you have the following items:
Several samples have been added to the LIMS.
A process / step that generates derived samples in containers has been run on the samples.
A compatible version of the API (v2 r21 or later).
When derived samples ('analyte artifacts' in the API) are run through a process / step, their information can be accessed by examining that process / step. In this example, we will retrieve all of the input artifacts and their respective containers.
To do this effectively using batch operations, we must collect all of the entities' URIs. These URIs must be unique, otherwise the batch operation will fail. Then, all of the entities can be retrieved in one action. It is important to note that only one type of entity can be retrieved in a call.
To retrieve the process step information, use the GET method with the process LIMS ID:
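For example (helper assumed as in the other sketches):

```groovy
def processURI = 'http://yourIPaddress/api/v2/processes/' + processLIMSID
def process = GLSRestApiUtils.httpGET(processURI, username, password)
```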
To retrieve the artifact URIs, collect the inputs of the process's input-output-map. A condition of the batchGET operation is that every entity to get must be unique. Therefore, you must call unique on your list.
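A sketch of that collection:

```groovy
def inputURIs = process.'input-output-map'.collect {
    it.input[0].@uri.tokenize('?')[0]   // strip any state query
}.unique()                              // every URI in the batch must be unique
```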
You can now use batchGET to retrieve the unique input analytes:
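For example:

```groovy
def analytes = GLSRestApiUtils.batchGET(inputURIs, username, password)
```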
The same can be done to gather the analytes' containers:
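A sketch that collects each analyte's container URI, batch-retrieves the containers, and prints them:

```groovy
def containerURIs = analytes.collect {
    it.location[0].container[0].@uri
}.unique()

def containers = GLSRestApiUtils.batchGET(containerURIs, username, password)
containers.each { println it.name.text() + ' ' + it.@uri }
```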
You have collected the unique containers in which the artifacts are located. Printing the name and URI of each container produces output similar to the following.
To retrieve the step information, use the GET method with the step LIMS ID:
To retrieve the artifact IDs, collect the inputs of the step's input-output-map. A condition of the batch retrieve operation is that every entity to get must be unique. To do this, you add the LUIDs to a set().
You can now use the getArtifacts() function, which is included in glsapiutils.py, to retrieve the unique input analytes:
UsingBatchGet.groovy:
Batchexample.py:
For a general overview of batch resources, and the operations they support, refer to the REST API documentation.
Before pooling samples in a multiplexed workflow, apply reagent labels using one of the methods described in the preceding examples. After the analyte (derived sample) artifacts are labeled, they can be pooled together without loss of traceability.
Pooling samples is accomplished either by running a pooling step in the user interface, or by using the process resource in the REST API.
For an overview of how REST resources are structured, and to learn how the process resource is used to track workflow in Clarity LIMS, see the overview sections of this documentation.
Before you follow the example, make sure that you have the following items:
Reagent types that are configured in Clarity LIMS and are named index 1 through index 6.
Reagents of type index 1 through index 6 that have been added to Clarity LIMS.
A compatible version of the API (v2 r21 or later).
The following screenshot shows a pooling step run from Clarity LIMS.
In general, automation scripts access information about a step using the processURI, which links to the individual process resource. The input-output-map in the XML returned by the individual process resource gives the script access to the artifacts that were inputs and outputs to the process.
Information about a derived sample is stored in the analyte resource. This is used as the input and output of a step, and also used to record specific details from lab processing. The XML representation for an individual analyte contains a link to the URI of its submitted sample, and to the URI of the process that generated it (parent process).
The following example pools all samples found in a given container into a tube it creates.
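A sketch of the flow, assuming a pooling-style process type and a tube container created beforehand (eg, via a POST to the containers resource); the names and URIs are placeholders:

```groovy
import groovy.xml.StreamingMarkupBuilder

// Collect the artifacts currently placed in the source container
def container = GLSRestApiUtils.httpGET(containerURI, username, password)
def inputURIs = container.placement.collect { it.@uri.tokenize('?')[0] }

def poolXML = new StreamingMarkupBuilder().bind {
    mkp.declareNamespace(prx: 'http://genologics.com/ri/processexecution')
    'prx:process' {
        type('Pool Samples')                       // hypothetical process name
        technician(uri: technicianURI)
        'input-output-map' {                       // one map: all inputs, one shared output
            inputURIs.each { input(uri: it) }
            output(type: 'Analyte') {
                location {
                    container(uri: tubeURI)        // the newly created tube
                    value('1:1')
                }
            }
        }
    }
}
def returnNode = GLSRestApiUtils.httpPOST(poolXML, processesListURI, username, password)
```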
NOTE: No special code is required to handle reagent labels. As processes execute, reagent labels automatically flow from inputs to outputs.
Irrespective of whether you use the user interface or the REST API to pool samples, the pooled sample is available via process GET requests.
The following example shows one pooled output (LIMS ID 2-424) created from three inputs - LIMS IDs RCY1A103PA1, RCY1A104PA1, and RCY1A105PA1:
Besides deriving from the ancestral sample artifacts, the resulting pooled sample artifact inherits the reagent labels from all inputs. The pooled output produced by the pooling step appears as follows. The pooled artifact shows multiple reagent labels, and multiple ancestor samples.
As processes are executed, reagent labels flow from inputs to outputs.
PoolingSamplesWithReagents.groovy:
Steps can have user-defined fields (UDFs)/custom fields that can be used to describe properties of the steps.
For example, while a sample UDF/custom field might describe the gender or species of the sample, a process UDF/custom field might describe the room temperature recorded during the step or the reagent lot identifier. Sometimes information about a step is not known until the work has completed on the instrument, after the step has been run in Clarity LIMS.
In this example, we record the Actual Equipment Start Time as a process UDF/custom field after the step has been run in Clarity LIMS. The ISO 8601 convention is used for recording the date and time.
NOTE: In the API, information about a step is stored in the process resource.
As of Clarity LIMS v5, the term user-defined field (UDF) has been replaced with custom field in the user interface. However, the API resource is still called UDF.
Master step fields—Configured on master steps. Master step fields only apply to the following:
The master step on which the fields are configured.
The steps derived from those master steps.
Global fields—Configured on entities (eg, submitted sample, derived sample, measurement, etc.). Global fields apply to the entire Clarity LIMS system.
Before you follow the example, make sure you have the following items:
Samples added to the system.
A custom field named Actual Equipment Start Time that has been configured on a master step (master step field).
On the master step, you have configured the field to display on the Record Details milestone, in the Master Step Fields section.
You have run samples through a step based on the master step on which the Actual Equipment Start Time field is configured.
Detailed information for each step run in Clarity LIMS, including its name, LIMS ID, and custom fields can be viewed on the Record Details screen.
In the image below, an Actual Equipment Start Time master step field has been configured to display in the Step Details section of the Record Details screen. However, a value for this field has not yet been specified.
Before you can change the value of a process UDF/custom field, you must first request the individual process resource via a GET HTTP call. The XML returned from a GET on the individual process resource contains the information about that process. The following GET method provides the complete XML structure for the process:
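A sketch of the retrieval and of the variables described below; parseInstrumentStartTimes stands in for your own log-parsing logic:

```groovy
def processNode = GLSRestApiUtils.httpGET(processURI, username, password)

// The existing UDF element, if the field already holds a value
def startTimeUDF = processNode.'udf:field'.find { it.@name == 'Actual Equipment Start Time' }

// Hypothetical helper that parses the start time from an instrument log file
def newStartTime = parseInstrumentStartTimes(logFile)
```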
The variable processNode now holds the complete XML structure retrieved from the resource at processURI.
The variable startTimeUDF references the XML element node contained in the structure that relates to the Actual Equipment Start Time UDF/custom field (if one exists).
The variable newStartTime is a string initialized with a value from the method parseInstrumentStartTimes. The details of this method are omitted from this example, but its function is to parse the date and time the instrument started from a log file.
The XML representations of individual REST resources are self-contained entities. Always request the complete XML representation before editing any portion of the XML. If you do not use the complete XML when you update the resource, you can inadvertently change data.
The following code shows the XML structure for the process, as stored in the variable processNode. There are no child UDF/custom field nodes.
After modifying the process stored in the variable processNode, you can use a PUT method to update the process resource.
You can check if the UDF/custom field exists by verifying the value of the startTimeUDF variable. If the value is not null, then the field is defined and you can set a new value in the XML. If the field does not exist, you must append a new node to the process XML resource using the UDF/custom field name and new value.
Before you can append a node to the XML, you must first specify the namespace for the new node. You can use the Groovy built-in QName class to do this. A QName object defines the qualified name of an XML element and specifies its namespace. The node you are specifying is a UDF element, so the namespace is http://genologics.com/ri/userdefined. The local part is field and the prefix is udf for the QName, which specifies the element as a UDF/custom field.
To append a new node to the process, use the appendNode method of the variable processNode, which appends a node with the specified QName, attributes, and value. Specify the following attributes for the UDF/custom field element (see the sketch after this list):
the type
the name
Both of these attributes must match a UDF/custom field that has been specified in the Configuration window for the process type.
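A sketch of both branches, assuming the field is configured as a String field:

```groovy
import groovy.xml.QName

if (startTimeUDF != null) {
    startTimeUDF.setValue(newStartTime)   // field exists: just set the new value
} else {
    // Qualified name placing the element in the userdefined namespace
    def udfQName = new QName('http://genologics.com/ri/userdefined', 'field', 'udf')
    processNode.appendNode(udfQName, [type: 'String', name: 'Actual Equipment Start Time'], newStartTime)
}
```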
The variable processNode now holds the complete XML structure for the process with the updated, or added, UDF named Actual Equipment Start Time.
You can save the changes you have made to the process using a PUT method on the process resource:
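For example:

```groovy
def returnNode = GLSRestApiUtils.httpPUT(processNode, processURI, username, password)
```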
The PUT updates the process resource at the specified URI using the complete XML representation, including the new value for Actual Equipment Start Time.
If the PUT was successful, it returns the XML resource, as shown below. The updated information is also available at the http://yourIPaddress/api/v2/processes/A22-BMJ-100927-24-2188 URI.
If the PUT was unsuccessful, an XML resource is returned with contents that detail why the call was unsuccessful. In the following example error, an incorrect UDF/custom field name was specified. A UDF/custom field named Equipment Start Time was created in the process resource, but no UDF/custom field with that name was configured for the process type/master step.
The Step Details section of the updated Record Details screen now shows the Actual Equipment Start Time value.
The ability to modify process properties allows you to automatically update and store lab activity information as it becomes available. Information from equipment log files or other data sources can be collected in this way.
Updating per-run or per-process/step information is powerful because the information can be used to optimize lab work (eg, by tracking trends over time). The data can be compared by instrument, length of run, lab conditions, and even against quality of molecular results.
UpdateProcessUDFInfo.groovy:
The powerful batch resources included in the Clarity LIMS Rapid Scripting API significantly increase the speed of script execution by allowing batch operations on samples and containers. These resources are useful when working with multiple samples and containers in high throughput labs.
The following simple example uses batch resources to move samples from one workflow queue into another queue.
It is useful to first review the general overview of batch resources.
Use a batch retrieve request to find all the artifacts in an artifact group, and then use a batch update request to move those artifacts into another artifact group.
The following steps are required:
Find all the artifacts that are in a particular artifact group.
Use the artifacts.batch.retrieve (list) resource to retrieve the details for all the artifacts.
Use the artifacts.batch.update (list) resource to update the artifacts and move them into a different artifact group, posting them back as a batch.
NOTE: The only HTTP method for batch resources is POST.
Before you follow the steps, make sure that you have the following items:
Clarity LIMS contains a collection of samples (artifacts) residing in the same workflow queue (artifact group)
A second queue exists into which you can move the collection of samples
A compatible version of the API (v2 r21 and later).
In the REST API, artifacts are grouped with the artifact group resource. In Clarity LIMS, an artifact group is displayed as a workflow. Workflows are configured as queues, allowing lab scientists to quickly locate the samples they need to work with on the bench.
To find the samples (artifacts) in a workflow queue (artifact group), use the following request, editing the server details and artifact group name to match those in your system:
This request returns a list of URI links for all artifacts in the artifact group specified. In our example, the my_queue queue contains three artifacts:
To retrieve the detailed XML for all of the artifacts, use a <links> tag to post the set of URI links to the server using a batch retrieve request:
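A sketch that builds and posts the links payload; the ri namespace URI and the rel value are assumptions based on the batch retrieve convention:

```groovy
import groovy.xml.StreamingMarkupBuilder

// artifactURIs holds the links collected from the previous request
def linksXML = new StreamingMarkupBuilder().bind {
    mkp.declareNamespace(ri: 'http://genologics.com/ri')
    'ri:links' {
        artifactURIs.each { link(uri: it, rel: 'artifacts') }
    }
}
def details = GLSRestApiUtils.httpPOST(linksXML,
    'http://your-server-ip/api/v2/artifacts/batch/retrieve', username, password)
```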
This returns the detailed XML for each of the artifacts in the batch:
The XML returned includes the artifact group name and URI:
<artifact-group name="my_queue" uri="http://your-server-ip/api/v2/artifactgroups/1"/>
To move the artifacts into another queue, update the artifact-group name and URI values on each artifact, and then post the modified XML back to the server using a batch update request, as sketched below.
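A sketch of the update, assuming artifactNodes holds the parsed artifacts from the batch retrieve:

```groovy
artifactNodes.each { artifact ->
    def group = artifact.'artifact-group'[0]
    group.@name = 'my_other_queue'    // hypothetical destination queue
    group.@uri  = 'http://your-server-ip/api/v2/artifactgroups/2'
}
// Wrap the modified artifacts in a details element and POST the batch to
// .../api/v2/artifacts/batch/update (batch resources accept only POST)
```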
Information about a step is stored in the process resource. In general, automation scripts access information about a step using the processURI, which links to the individual process resource. The input-output-map in the XML returned by the individual process resource gives the script access to the artifacts that were inputs and outputs to the process.
Processing a sample in the lab can be complex and is not always linear. This may be because more than one step (referred to as process in the API and in the Operations Interface in Clarity LIMS v4.x and earlier) is run on the same sample, or because a sample has to be modified or restarted because of quality problems.
The following illustration provides a conceptual representation of a Clarity LIMS workflow and its sample/process hierarchy. In this illustration, the terminal processes are circled.
This example finds all terminal artifact (sample)-process pairs. The main steps are as follows:
All the processes run on a sample are listed with a process (list) GET method using the ?inputartifactlimsid filter.
All the process outputs for an input sample are found with a process (single) GET.
Iteration through the input-output maps finds all outputs for the input of interest.
Before you follow the example, make sure you have the following items:
A sample that has been added to the system.
Several steps that have been run, with at least one output used as the input to more than one step.
A compatible version of the API (v2 r21 or later).
To walk down the hierarchy from a particular sample, you must do the following steps:
List all the processes that used the sample as an input.
For each process on that list, find all the output artifacts that used that particular input. These output artifacts represent the next level down the hierarchy.
To find the artifacts for the next level down, repeat steps 1 and 2, starting with each output artifact from the previous round.
To find all artifacts in the hierarchy, repeat this process until there are no more output artifacts. The last processes found are the terminal processes.
This example starts from the original submitted sample.
The first step is to retrieve the sample resource via a GET call and find its analyte artifact (derived sample) URI. The analyte artifact of the sample is the input to the first process in the sample hierarchy.
The following GET method provides the full XML structure for the sample including the analyte artifact URI:
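A sketch (helper assumed as before); note the last line, which is explained below:

```groovy
def sample = GLSRestApiUtils.httpGET(sampleURI, username, password)

// Top level of the hierarchy: the sample's own analyte, with no parent process
def artifactMap = [(sample.artifact[0].@limsid): null]

def processURI = 'http://yourIPaddress/api/v2/processes?inputartifactlimsid='
```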
The sample.artifact.@limsid contains the original analyte LIMS ID of the sample. For each level of the hierarchy, the artifacts are stored in a Groovy Map called artifactMap. The artifactMap uses the process that generated the artifact as the value, and the artifact LIMS ID as the key. At the top sample level, the list is only comprised of the analyte of the original sample. In the map, the process is set to null for this sample analyte.
To find all the processes run on the artifacts, use a GET method on the process (list) resource with the ?inputartifactlimsid filter.
In the last line of the example code above, the processURI string sets up the first part of the URI. The artifact LIMS ID is added (concatenated) for each GET call in the following while loop:
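A sketch of that loop; the logic it implements is explained below, and the attribute names follow the process XML shown in earlier examples:

```groovy
def lastProcMap = [:]
while (artifactMap.size() > 0) {
    def outputArtifactMap = [:]
    artifactMap.each { limsid, parentProcess ->
        // All processes that used this artifact as an input
        def processes = GLSRestApiUtils.httpGET(processURI + limsid, username, password)
        if (processes.process.size() == 0) {
            lastProcMap[limsid] = parentProcess        // end leaf of this branch
        }
        processes.process.each { p ->
            def proc = GLSRestApiUtils.httpGET(p.@uri, username, password)
            proc.'input-output-map'.each { iomap ->
                if (iomap.input[0].@limsid == limsid && iomap.output.size() > 0) {
                    // Outputs of this artifact form the next level of the hierarchy
                    outputArtifactMap[iomap.output[0].@limsid] = proc.@limsid
                }
            }
        }
    }
    artifactMap = outputArtifactMap
}
```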
The while loop evaluates one level of the hierarchy for every iteration. Each artifact at that level is evaluated. If that artifact was not used as an input to a process, an artifact/process key value pair is stored in the lastProcMap. All the Groovy maps in the previous code use this artifact/process pair structure.
The loop continues until there are no artifacts that had outputs generated. For each artifact evaluated, the processes that used the artifact as an input are found and collected in the processes variable. Because a process can be run without producing outputs, a GET call is done for each of the processes to determine if the artifact generated any outputs.
Any outputs found will form the next level of the hierarchy. The outputs are temporarily collected in the outputArtifactMap. If no processes were found for that artifact, then it is an end leaf node of a hierarchy branch. Those artifact/process pairs are collected in the lastProcMap.
You can iterate through each pair of artifact and process LIMS IDs in lastProcMap and print the results to standard output.
Running the script in a console produces the following output:
Pooling samples in the API is accomplished with a process resource. Information about a step is also stored in the process resource. Such a process has many input samples that map to a shared output sample, such that the shared output is a pool of those inputs. This is achieved with a shared output, where a single input-output-map element in the XML defines the shared output and all its related inputs.