This article provides hints and tips to help you get the most out of the Cookbook recipes included in this section.
When reading a recipe, look for file attachments. Almost all examples have an attached Groovy script to download.
To use the scripts with a non-production server, edit the script to include your server network address and credentials.
For illustration purposes, most scripts use populated information. You must add your own sample, process (eg, a master step in Clarity LIMS v5 and later), and other data. The non-production server has a directory set up for this purpose at
Using Full Production Scripts
When using full production scripts, keep the following considerations in mind:
Cookbook scripts are written to explain concepts. They are not deeply engineered code written in a defensive programming style. Always think through the expected and unexpected input of your scripts when incorporating concepts or code from Cookbook recipe examples.
Full production servers can require different configurations for scripting languages other than Groovy, and for the EPP/automation worker node. For example, your script directory must be accessible to the user account running the EPP/automation worker node for user interface (UI) triggers.
Discuss the software deployment plans with your system administrator to coordinate between non-production and production servers. For more information on using production scripts, see REST General Concepts and Automation.
Each recipe was written with a specific API version. For information on how to check the version of the API on your system, see Requesting API Version Information.
Apache Groovy is required for most Cookbook examples. It is open source and is available under an Apache license from groovy-lang.org/download.html. It is installed on non-production servers, but you can also install it to your desktop. The Cookbook examples were developed with Groovy v1.7.
Python is required for some Cookbook examples. It is available from www.python.org/download. The Cookbook examples were developed with Python v2.7.
The automation worker node executing the command uses the first instance of Groovy it finds in the executable search path for the limited shell. This is the $PATH variable.
If you have multiple versions of Groovy (or multiple users using different versions) and experience problems with your command-line calls, declare the full path to Groovy/Java in your command.
To see your executable search path, and other environment variables available to you, run the following command:
Compare this command to the full logon shell, which is
For more information on command-line actions, see Supported Command Line Interpreters.
For details on the programming interface methods and data elements available, refer to the following documentation:
Browsing for, and adjusting, resources in Firefox, Chrome, or other browsers is a great way to get started or to troubleshoot.
The following plug-ins are available with Firefox:
Text Link—Makes any URI in the XML a hyperlink.
Linkificator—Converts text links into selectable links.
RESTClient—Provides a simple interface to call HTTP methods on REST resources. It is useful for troubleshooting, checking error codes, and for getting comfortable with GET, PUT, and POST requests.
The following plug-ins are available with Chrome:
Advanced REST Client—Provides functionality similar to the Firefox RESTClient plug-in.
XML Tree—Displays XML data in a user-friendly way.
Before downloading your first script, do the following actions:
Familiarize yourself with the API Cookbook prerequisites and key concepts found in Development Prerequisites and REST General Concepts.
Use a non-production server for script development.
Familiarize yourself with the coding language.
Use the GLSRestApiUtils file to assist with recipe development.
Review Tips and Troubleshooting.
The example script recipes really come to life when you change them and see what happens. Running the scripts often requires adding new custom fields and master steps to the system. To experiment freely, you need unrestricted access to development and test servers (licensed as non-production servers) with Groovy and an AI node/automation worker installed.
For more information and recommendations for deploying and copying scripts in development, test, and production environments, refer to Useful Tools.
The Cookbook Recipe Examples are written in Groovy. Many of our examples use the following Groovy concepts:
Closures: Groovy closures are essentially blocks of code that can be stored for later use.
The each method: The each method takes a closure as an argument. It then iterates through each element in a collection, performing the closure on the element, which is (by default) stored in the 'it' variable. For example:
Python
The Cookbook also provides a few examples written in Python; these use the minidom module. The following script shows how the minidom module is used:
This same functionality can be obtained using any programming language capable of interacting with the Web API. For more information on the minidom module, refer to Python Minidom.
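As a sketch of the kind of parsing minidom supports, the following snippet pulls a name and LIMS ID out of a simplified, hypothetical sample representation (the XML literal stands in for a real API response):

```python
from xml.dom.minidom import parseString

# Hypothetical, simplified sample XML of the kind the REST API returns
SAMPLE_XML = """<smp:sample xmlns:smp="http://genologics.com/ri/sample" limsid="EXA101A1">
  <name>Colon-1</name>
  <date-received>2015-01-01</date-received>
</smp:sample>"""

dom = parseString(SAMPLE_XML)

# Element values live in child text nodes; attributes hang off the element
name = dom.getElementsByTagName("name")[0].firstChild.data
limsid = dom.documentElement.getAttribute("limsid")

print(name)    # Colon-1
print(limsid)  # EXA101A1
```

Any XML library capable of round-tripping the API's representations would work equally well; minidom is simply what the Cookbook's Python examples standardize on.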
In addition to the Groovy file example attached to each Cookbook recipe page, most recipes require the glsapiutil.py file, which is available on our GitHub repository. The mature glsapiutil.py library is strictly for Python 2. A newer version, glsapiutil3.py, works with Python 3.
For more information on these files, see Obtain and Use the REST API Utility Classes.
The Clarity LIMS Cookbook uses example scripts to help you learn how to work with REST and EPP automation scripts. Cookbook recipes are small, specific how-to articles designed to help you understand REST and automation script concepts. Each recipe includes the following:
Explanations about a concept and how a particular programming interface is used in a script.
A snippet of script code to demonstrate the concept.
The API documentation includes the terms External Program Integration Plug-in (EPP) and EPP node.
As of Clarity LIMS v5.0, these terms are deprecated.
EPP has been replaced with automation.
EPP node is referred to as the Automation Worker or Automation Worker node. These components are used to trigger and run scripts, typically after lab activities are recorded in the LIMS.
The best way to get started is to download the example script and try it out. After you have seen how the script works, you can dissect it and use the pieces to create your own script.
Automations (formerly referred to as EPP triggers or automation actions) allow lab scientists to invoke scripts as part of their workflow. These scripts must successfully complete for the lab scientist to proceed to the next step of the workflow.
EPP automation/support is compatible with API v2 r21 and later.
Automations have various uses, including the following:
Workflow enforcement—Makes sure that samples only enter valid protocol steps.
Business logic enforcement—Validates that samples are approved by accounting before work is done on them. This automation can also make sure that selected samples are worked on together.
Automatic file generation—Automates the creation of driver files, sample sheets, or other files specific to your protocol and instrumentation.
Notification—Notifies external systems of lab progress. For example, you can notify Accounting of completed projects so that they can then bill for services rendered.
You can enable automations on master steps in two configuration areas of Clarity LIMS:
On the Automations tab, when adding/configuring an automation. See the Adding and Configuring Automations article in the Automations section of the Clarity LIMS documentation.
On the Lab Work tab, on the master step configuration form. See the _Adding & Configuring Master Steps and Step_s article in the Steps and Master Steps section of the Clarity LIMS documentation.
After it is enabled on a master step, the automation becomes available for use on all steps derived from that master step.
You can configure the automation trigger on the master step, or on the steps derived from that master step.
If more than one script is triggered by a single user action, the scripts are executed in sequence and their progress is reported as they run. Execution continues until all scripts complete, or one of them fails.
An example scenario would be a step that is configured to execute the following:
One script upon exit of the Placement screen.
A second script upon entry of the Record Details screen.
In this scenario, when the lab scientist advances their protocol step from the Placement screen to the Record Details screen, the scripts are executed in sequence.
The parameter string/automation name configured on the master step is displayed in a progress message. You can use this feature by giving your parameter strings/automations meaningful names that provide you with context about what the script is doing. The following is an example of a progress message.
![In\_Progress.png](https://genologics.zendesk.com/attachments/token/yawon1xdfirt9mm/?name=In+Progress.png)
You cannot proceed until the script completes successfully.
You can request to cancel a script that is not responsive. While canceling abandons the monitoring of script execution, it does not stop the execution of the script.
After canceling a script, follow up with the Clarity LIMS administrator to determine if the AI node/automation worker must be restarted.
The scientific programmers in your facility can provide you with a message upon successful execution of a script. There are two possible non-fatal messages: OK and WARNING. These messages can be set using the step program status REST API endpoint.
Message boxes display the script name, followed by a message that is set by the script using the step program status REST API endpoint. Line breaks are permitted in the custom message. The following is an example of a success message:
After you select OK, you are permitted to proceed in the workflow.
When a script fails, a message box displays. There are two ways to produce fatal messages:
By using the step program status REST API endpoint (reporting FAILURE as the status)
By generating output to the console and returning a non-zero exit code.
For example, when beginning a step, if the script determines that the samples cannot be worked on together in Ice Bucket, the samples are returned to Ice Bucket after you acknowledge the error message. In this case, the step is prevented from being tracked. The following is an example of a failure message:
If you attempt to advance a step from the Pooling screen, but an error is detected, the error state prevents you from continuing. The following is an example of this type of message:
After you select OK, you are prevented from proceeding in the workflow. Instead, you must return to the Pooling screen and address the problem before proceeding.
This page is maintained for posterity, but customers are encouraged to visit the GitHub repository for all subsequent updates to the library (including changelogs). Unless otherwise specified, changes are only made in the Python version of the library.
Dec. 19, 2017:
glsapiutil v3 ALPHA (bleeding-edge library) released on GitHub. GitHub has the most current library.
Links to library removed from this page.
Dec. 15, 2016:
reportScriptStatus() function had a bug that caused it to not work when a <message> node was unavailable. This has been fixed.
deleteObject() functions now available for both v1 and v2 of the library.
getBaseURI() should now return a trailing slash at the end of the URI string.
getFiles() function added to batch retrieve files.
NOTE: The Python glsapiutil.py and glsapiutil3.py classes are now available on GitHub. GitHub has the most current libraries. glsapiutil3.py works with both Python v2 and v3.
The GLSRestApiUtils utility class, written in Groovy and Python for the API Cookbook examples, provides a consistent way to perform common REST operations, such as HTTP method calls and common XML string manipulations. The class is specific to the Cookbook examples. It is not required for using the API with Groovy or Python, as there are many other ways to manipulate HTTP and XML in these languages, and it is not part of REST or EPP/automation. However, it is required if you want to run the Cookbook examples as written.
Almost all Cookbook example files use the HTTP methods from the GLSRestApiUtils class.
The HTTP method calls in Groovy resemble the following example:
In this example, the returnNode and inputNode are Groovy nodes containing XML. The XML in the returnNode contains the XML available from the server after a successful method call. If the method call was unsuccessful, the XML contains error information. The following is an example of the XML manipulation functions in the utility:
As you can see from these examples, the utility class is easy to include in your scripting. The code is contained in the GLSRestApiUtils files attached to this page.
To deploy a Groovy script that uses the utility class, you must include the directory containing GLSRestApiUtils.groovy in the Groovy class path.
Groovy provides several ways to package and distribute source files, including the following methods:
Call Groovy with the -classpath (or -cp) parameter.
Add to the CLASSPATH environment variable.
Create a ~/.groovy/lib directory for jar files for common libraries.
If you would like to experiment with the Cookbook examples, you can also copy the file into the same directory as the example script.
Library functions
The HTTP method calls for the Python version of the library resemble the following:
Unlike with the Groovy library, the rest functions in the Python library require XML (text) as input (not DOM nodes). The return values of the GET, PUT, and POST functions are also XML text.
If a script must work with a running process or step, it is normal to use either the {processURI:v2} or the {stepURI:v2} tokens. The following example has the {stepURI:v2} token:
In Clarity LIMS v4 and above, these tokens sometimes resolve to https://localhost:9080/api/v2/... instead of the expected HOSTNAME. Setting up the API object with a hostname other than https://localhost:9080 can cause Access Denied errors. To avoid this issue, alter the API authentication code slightly as follows.
The changes are highlighted in red. This code takes the resolved {stepURI:v2} token (assumed to be stored in the args object) and resets the HOSTNAME variable to the new value (eg, https://localhost:9080) before authenticating.
These changes are fully backward-compatible with Clarity LIMS v4 or earlier. The EPP/automation URI tokens resolve to the expected hostname, and the setupGlobalsFromURI() function still parses it correctly.
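The gist of the fix can be sketched in Python as follows. The step URI value is illustrative, and the variable names only echo the glsapiutil conventions; this is not the library's actual code:

```python
from urllib.parse import urlparse

# Hypothetical resolved {stepURI:v2} token, as passed to the script
step_uri = "https://localhost:9080/api/v2/steps/24-1234"

# Derive the hostname the API object should authenticate against,
# so it matches whatever the token actually resolved to
parsed = urlparse(step_uri)
HOSTNAME = "%s://%s" % (parsed.scheme, parsed.netloc)

print(HOSTNAME)  # https://localhost:9080
```

Because the hostname is taken from the resolved token rather than hard-coded, the same script works whether the token resolves to localhost or to the server's public hostname.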
NOTE: On GitHub, in addition to the libraries, a basic_complete_recipe.py script contains the skeleton code needed to get started with the Python API. This script also includes the modifications required to work with Clarity LIMS v4 and later. The legacy Groovy library can still be obtained from the attachment.
Attachments
GLSRestApiUtils.groovy:
At the completion of a process (using API v2 r21 or later), EPP can invoke any external program that runs from a command line. In this example, a process with a reference to a declared EPP program is configured and executed entirely via the API.
EPP automation/support is compatible with API v2 r21 and later.
The API documentation includes the terms External Program Integration Plug-in (EPP) and EPP node.
As of Clarity LIMS v5.0, these terms are deprecated.
EPP has been replaced with automation.
EPP node is referred to as the Automation Worker or Automation Worker node. These components are used to trigger and run scripts, typically after lab activities are recorded in the LIMS.
You have defined a process that has:
An input of type analyte.
A single output per input.
A single shared result file.
The process type is associated with an external program that has the following requirements:
At least one process-parameter defined - named TestProcessParam.
A parameter string of:
bash -c "echo HelloWorld > {compoundOutputFileLuid0}.txt"
Samples have been added to the LIMS.
To run a process on a sample, you must first identify the sample to be used as the input to the process.
For this example, run the process on the first highlighted sample.
After you have identified the sample, you can use its LIMS ID as a parameter for the script. The artifact URI is then used as the input in constructing the XML to POST and execute a process.
The following code block outlines this action and obtains the URI of the container for the process execution POST.
NOTE: As shown in other examples, you can use StreamingMarkupBuilder to construct the XML needed for the POST.
You now have all the pieces of data to construct the XML for the process execution. The following is an example of what this XML looks like.
Executing a process uses the processexecution (prx) namespace. The following elements are required for a successful POST:
type - the name of the process being run
technician uri - the URI for the technician that will be listed as running the process
input-output-map - one input-output-map element for each pair of inputs and outputs
input uri - the URI for the input artifact
output type - the type of artifact of the output
If the outputs of the process are analytes, then the following elements are also required:
container uri - the URI for the container the output will be placed in
value - the well placement for the output
To use the configured EPP process, the process-parameter element is required. This element is the name of the configured EPP that is executed when this process is posted.
The following entities must exist in the system before the process can be executed (the EPP parameter is the one that must match the processParamName variable):
Process type
Technician
Input artifact
Container
EPP parameter
With analyte outputs, if there are no containers with empty wells in the system, you must create one before running the process.
The XML constructed must match the configuration of the process type. For example, if the process is configured to have both analytes and a shared result file as outputs, you must have the following:
An input-output-map for each pair of analyte inputs and outputs.
An additional input-output-map for the shared result file.
The parameter name in the process execution XML must match one of the EPP parameter names declared for the process type. This requirement is true for any EPP parameters.
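To make the required structure concrete, the following Python sketch assembles a process execution document with the elements listed above. The Cookbook's own scripts build this XML in Groovy with StreamingMarkupBuilder; all URIs here are placeholders, and attribute spellings should be checked against the prx schema for your API version:

```python
import xml.etree.ElementTree as ET

PRX = "http://genologics.com/ri/processexecution"
ET.register_namespace("prx", PRX)

# Placeholder URIs; a real script would look these up via the API
process = ET.Element("{%s}process" % PRX)
ET.SubElement(process, "type").text = "Test EPP Process"
ET.SubElement(process, "technician", uri="https://example.com/api/v2/researchers/1")

# One input-output-map per pair of analyte inputs and outputs
iomap = ET.SubElement(process, "input-output-map")
ET.SubElement(iomap, "input", uri="https://example.com/api/v2/artifacts/EXA101A1PA1")
output = ET.SubElement(iomap, "output", type="Analyte")
location = ET.SubElement(output, "location")
ET.SubElement(location, "container", uri="https://example.com/api/v2/containers/27-42")
ET.SubElement(location, "value").text = "A:1"  # well placement

# Shared result file: one extra map with output-generation-type PerAllInputs
shared = ET.SubElement(process, "input-output-map")
ET.SubElement(shared, "input", uri="https://example.com/api/v2/artifacts/EXA101A1PA1")
ET.SubElement(shared, "output",
              {"type": "ResultFile", "output-generation-type": "PerAllInputs"})

# The name must match a process-parameter configured on the process type
ET.SubElement(process, "process-parameter", name="TestProcessParam")

xml_text = ET.tostring(process, encoding="unicode")
print(xml_text)
```

POSTing `xml_text` to the processexecution resource would then run the process, provided the referenced entities exist and the maps match the process type's configuration.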
If the POST is successful, the process XML is returned.
In the following example, there are two <input-output-map> elements. The second instance has the output-generation-type of PerAllInputs. This element indicates that the result file is shared and only one is produced, regardless of the number of inputs.
If the POST is not successful, the returned XML contains the error that occurred when the POST completed. The following example shows this error:
Attachments
ExecuteProcessWithEPP.groovy:
autocomplete-process.py:
You can configure the automation trigger and use automation to invoke any external program that runs from a command line. Refer to the following for details:
EPP automation/support is compatible with API v2 r21 and later.
The API documentation includes the terms External Program Integration Plug-in (EPP) and EPP node.
As of Clarity LIMS v5.0, these terms are deprecated.
EPP has been replaced with automation.
EPP node is referred to as the Automation Worker or Automation Worker node. These components are used to trigger and run scripts, typically after lab activities are recorded in the LIMS.
When working with submitted samples, you can do the following:
You can add samples to the system using API (v2 r21 and later). This example assumes that you have sample information in a file that is difficult to convert into a format suitable for importing into Clarity LIMS. The aim is to add the samples, and all associated data, into Clarity LIMS without having to translate the file manually. You can use the REST API to add the samples.
Follow the instructions provided in the following examples:
To add a sample in Clarity LIMS, you must assign it to a project and place it into a container. This example assumes that you are adding a new project and container for the samples being created.
You define a project by using StreamingMarkupBuilder, a built-in Groovy class designed to build XML structures. This creates the XML that is used in a POST to the projects resource:
If the POST to projects is successful, the following XML is returned:
If the POST to containers is successful, the following XML is returned:
Now that you have the project and container, you can use StreamingMarkupBuilder to create the sample. The XML created to add the sample uses the URIs for the project and container that were created in the previous steps.
This POST to the samples resource creates a sample in Clarity LIMS, adding it to the project and container specified in the POST.
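The same sample document can be sketched in Python. The project and container URIs below are placeholders standing in for the URIs returned by the earlier POSTs, and the element names should be checked against the sample schema for your API version:

```python
import xml.etree.ElementTree as ET

SMP = "http://genologics.com/ri/sample"
ET.register_namespace("smp", SMP)

# Placeholder URIs standing in for the project and container created earlier
project_uri = "https://example.com/api/v2/projects/EXA1"
container_uri = "https://example.com/api/v2/containers/27-100"

sample = ET.Element("{%s}samplecreation" % SMP)
ET.SubElement(sample, "name").text = "Sample-1"
ET.SubElement(sample, "project", uri=project_uri)
location = ET.SubElement(sample, "location")
ET.SubElement(location, "container", uri=container_uri)
ET.SubElement(location, "value").text = "A:1"  # well placement

sample_xml = ET.tostring(sample, encoding="unicode")
print(sample_xml)
```

POSTing this document to the samples resource creates the sample, linked to the given project and placed in the given container well.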
In Clarity LIMS Projects and Samples dashboard, open the project to find the new sample in its container.
PostSample.groovy:
You can rename samples in the system using API (v2 r21 and later). The amount of information provided in the sample name is sometimes minimal. After the sample is in the system, you can add additional information to the name. For example, you can help lab scientists understand what they must do with a sample, or where it is in processing.
As of Clarity LIMS v5, the term user-defined field (UDF) has been replaced with custom field in the user interface. However, the API resource is still called UDF.
There are two types of custom fields:
Master step fields—Configured on master steps. Master step fields only apply to the following:
The master step on which the fields are configured.
The steps derived from those master steps.
Global fields—Configured on entities (eg, submitted sample, derived sample, measurement, etc.). Global fields apply to the entire Clarity LIMS system.
Clarity LIMS displays detailed information for each sample, including its name, container, well, and date submitted.
In this example, the sample name is Colon-1. To help keep context when samples are processed, by default the submitted sample name is used for the downstream samples (or derived samples) generated by a step in Clarity LIMS.
Before you rename a sample, you must first request the resource via a GET. The XML representations of individual REST resources are self-contained entities: always request the full XML representation before editing the portion you wish to change. If you do not use the complete XML when you update the resource, you can inadvertently change other data.
The following GET method returns the full XML structure for the sample:
The variable sample now holds the complete XML structure returned from the sampleURI.
The following example shows the XML for the sample, with the name element on the second line. In this particular case, the Clarity LIMS configuration has expanded the sample with 18 custom fields that provide sample information.
Renaming the sample consists of the following:
The name change in the XML
The PUT call to update the sample resource
The name change is executed with the nameNode XML element node, which references the XML element containing the name of the sample.
The PUT method updates the individual sample resource using the complete XML representation, which includes the new name. Such complete updates provide a simple interaction between client and server.
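The GET, edit, PUT cycle can be sketched like this in Python. The XML literal is a simplified stand-in for the representation returned by the GET, and the network calls themselves are omitted:

```python
from xml.dom.minidom import parseString

# Stand-in for the XML returned by the GET on the sample URI
sample_xml = """<smp:sample xmlns:smp="http://genologics.com/ri/sample" limsid="EXA101A1">
  <name>Colon-1</name>
</smp:sample>"""

dom = parseString(sample_xml)

# nameNode references the XML element containing the sample name
name_node = dom.getElementsByTagName("name")[0]
name_node.firstChild.data = "Colon-1-QC-passed"

# The full, updated representation is what gets PUT back to the sample URI
updated_xml = dom.toxml()
print(updated_xml)
```

Because the whole representation is sent back, everything else about the sample is preserved exactly as it was returned by the GET.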
The updated sample view displays the new name. You can also view the results in a web browser via the URI at
http://<YourIPaddress>/api/v2/samples/<SampleLIMSID>
RenamingSample.groovy:
This example is similar to the earlier EPP process execution example. The differences are that this example has minimal input/output and posts a reference to a pre-defined EPP process-parameter.
In addition, this example requires a container in which to store the results of the process execution. An example of how to do this is included in the attached Groovy script.
As shown in the example, you can add a container by using StreamingMarkupBuilder to create the XML for a new container. This creates the XML that is used in a POST to the containers resource:
When working with containers, you can do the following:
In the Clarity LIMS API (v2 r21 or later), the initial submitted sample is referred to as a sample (or root artifact). Any derived sample output from a process/step is referred to as an analyte, or artifact of type analyte. This example demonstrates the relationship between samples and analyte artifacts. You must have a sample in the system and one or more processes/steps done that output analyte (derived sample) artifacts.
As of Clarity LIMS v5, the term user-defined field (UDF) has been replaced with custom field in the user interface. However, the API resource is still called UDF.
There are two types of custom fields:
Master step fields—Configured on master steps. Master step fields only apply to the following:
The master step on which the fields are configured.
The steps derived from those master steps.
Global fields—Configured on entities (eg, submitted sample, derived sample, measurement, etc.). Global fields apply to the entire Clarity LIMS system.
The code example does the following when it is used:
Retrieves the URI of an arbitrary analyte artifact.
Retrieves the corresponding sample of the artifact
Retrieves the original root analyte artifact from the sample, as shown in the following example:
You can generate XML for an arbitrary analyte artifact. The analyte artifact is downstream and has a parent-process element (as shown in line 5). The sample artifact is an original artifact. Downstream artifacts relate to at least one sample, and can relate to more than one, as with pooling or shared result files. The following is an example of XML generated for an analyte artifact:
You can also generate XML for a submitted sample. Every submitted sample has exactly one corresponding original root artifact. A sample representation does not link to downstream artifacts, but you can find them using query parameters in the artifacts list resource. The following is an example of XML generated for a submitted sample:
Lastly, you can generate XML for an original sample artifact called a root artifact. The following is an example of XML generated from an original sample artifact. In this case, both the downstream artifact and the original root artifact point to the same original sample (eg, LIMS ID EXA2241A1).
SampleAndAnalyteRelations.groovy:
Samples in the lab are always in a container (eg, a tube, plate, or flow cell). When a container holds more than one sample, it is often easier to track the container rather than the individual samples. These containers can be found in API (v2 r21 or later).
In Clarity LIMS, containers are identified by LIMS ID or by name. The best way to find a container in the API is with the LIMS ID. However, the API also supports searching for containers by name by using a filter.
LIMS ID—This is a unique ID. The container resource with LIMS ID 27-42 can be found at
Name—Container names can be unique, depending on how the server software was set up. In some labs, container names are reused to show when a container is recycled or when samples are submitted in containers.
The following example shows a container list filtered by name. Your system contains a series of containers named with a specific naming convention; the queried containers are named Smith553 and 001TGZ.
The request for a container with a specific name is structured in the same way as the request for all containers, but also includes a parameter to filter by name:
The name parameter is repeatable, and the results returned match any of the names queried:
The GET method returns the full XML structure for the list of containers matching the query. In this case, the method returns the XML structure for containers with the names Smith553 and 001TGZ.
The XML contains a list of container elements. The .each method goes through each container node in the list and prints the container LIMS ID.
The XML returned is placed in the variable containers:
If the system has no containers named Smith553 or 001TGZ, then containers.container is an empty list. The .each method does nothing, as expected.
When execution completes, the code returns the list of LIMS IDs associated with the container names Smith553 and 001TGZ. The names and LIMS IDs differ in this case (eg, 27-505 and 27-511).
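Building the filtered request URI and walking the returned list might look like this in Python. The hostname is a placeholder and the list XML is a simplified stand-in for the server's response:

```python
from urllib.parse import urlencode
from xml.dom.minidom import parseString

base = "https://example.com/api/v2/containers"
# The name parameter repeats once per queried name
query = urlencode([("name", "Smith553"), ("name", "001TGZ")])
uri = "%s?%s" % (base, query)
print(uri)  # https://example.com/api/v2/containers?name=Smith553&name=001TGZ

# Simplified stand-in for the container list the GET would return
containers_xml = """<con:containers xmlns:con="http://genologics.com/ri/container">
  <container uri="https://example.com/api/v2/containers/27-505" limsid="27-505"/>
  <container uri="https://example.com/api/v2/containers/27-511" limsid="27-511"/>
</con:containers>"""

dom = parseString(containers_xml)
limsids = [c.getAttribute("limsid") for c in dom.getElementsByTagName("container")]
print(limsids)  # ['27-505', '27-511']
```

If no containers match, the container list is simply empty and the loop over it does nothing, mirroring the behavior of the Groovy `.each` call described above.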
GetContainerNameFilter.groovy:
You can use the API (v2 r21 and later) to automate the process of assigning samples to a workflow. This example shows how to create the required XML. The example also provides a brief introduction on how to use the route/artifacts endpoint, which is the endpoint used to perform the sample assignment.
The example takes two samples that exist in Clarity LIMS and assigns each of them to a different workflow.
Define the assignment endpoint URI using the following example. The assignment endpoint allows you to assign the artifacts to the desired workflow.
You can also retrieve the base artifact URIs of the samples using the following example:
Use the following example to gather the workflow URIs:
Next, you can construct the XML that is posted to perform the workflow assignment. You can do this construction by using the StreamingMarkupBuilder and the following example.
Assign the analyte (derived sample) artifact of the sample to a workflow as follows.
Create an assign tag with the URI of the destination workflow as an attribute.
Create an artifact tag inside the assign tag with the URI of the analyte as an attribute.
After the assignment XML is defined, you can POST it to the API. This POST performs the sample assignment.
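The assign/artifact nesting described above can be sketched as follows; the workflow and artifact URIs are placeholders, and the Cookbook's own script builds the equivalent XML with StreamingMarkupBuilder in Groovy:

```python
import xml.etree.ElementTree as ET

RT = "http://genologics.com/ri/routing"
ET.register_namespace("rt", RT)

routing = ET.Element("{%s}routing" % RT)

# One assign element per destination workflow...
assign = ET.SubElement(routing, "assign",
                       {"workflow-uri": "https://example.com/api/v2/configuration/workflows/101"})
# ...containing one artifact element per analyte being routed
ET.SubElement(assign, "artifact",
              uri="https://example.com/api/v2/artifacts/EXA101A1PA1")

routing_xml = ET.tostring(routing, encoding="unicode")
print(routing_xml)
```

To assign samples to two different workflows, the document would simply carry two assign elements, each with its own workflow-uri and artifact children.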
After the script has run, the samples display in the first step of the first protocol in the specified workflows.
AssigningArtifactsToWorkflows.groovy:
In high throughput labs, samples are worked on in batches and some work is executed by a robot. Sometimes, a set of plates must be rearrayed to one larger plate before the robot can begin the lab step.
The example accomplishes this with two scripts. One script is configured as a derived sample automation, and the second is included in a command line configured on a step automation.
Before you follow the example, make sure that you have the following items:
A project containing samples assigned to a workflow in Clarity LIMS.
The workflow name.
The samples are assigned to the same workflow stage.
This example demonstrates the following scripts:
AssignToRearrayWf.groovy—Executed as a derived sample automation, this script assigns selected samples to the rearray step.
AssignToLastRemoved.groovy—Executed after the rearray step, this script assigns the samples to the stage to which they were originally assigned. The script is included in a command line configured on a step automation.
In Clarity LIMS, under Configuration, select the Automation tab.
Select the Derived Sample Automation tab.
Select New Automation and create an automation that prompts the user for the workflow stage name to be used.
In the example, note the following:
The {groovy_bin_location} and {script_location} parameters must be customized to reflect the locations on your computer.
The -w option allows for user input to be passed to the script as a command-line variable.
The AssignToRearrayWf script receives a list of artifact (sample) LIMS IDs on the command line. To begin, the script builds a list of artifact nodes.
The following code example builds a list of artifact URIs using the artifact LIMS ID list and the getArtifactNodes function. The resulting artifact URI list can then be used for a batchGET call to return the artifact nodes.
In this example, you can assume that the workflow name is known by the user and is passed to the script by user input when the automation is initiated.
The workflow can then be queried for using the passed workflow name. The workflow name is first encoded, and from this, you can retrieve the workflow URI.
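These two steps might be sketched as follows. The helper name getArtifactNodes comes from the attached script; the batchGET and httpGET signatures, and the workflows query parameter, are assumptions based on the Cookbook utility file.

```groovy
// Build artifact URIs from the LIMS IDs passed on the command line,
// then batchGET to return the artifact nodes
def getArtifactNodes(artifactLUIDs) {
    def artifactURIs = artifactLUIDs.collect { "${hostname}/api/v2/artifacts/${it}" }
    return GLSRestApiUtils.batchGET(artifactURIs, username, password)
}

// URL-encode the user-supplied workflow name and query for its URI
def encodedName = URLEncoder.encode(workflowName, 'UTF-8')
def workflows = GLSRestApiUtils.httpGET(
    "${hostname}/api/v2/configuration/workflows?name=${encodedName}",
    username, password)
def workflowURI = workflows.workflow[0].@uri.toString()
```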
For the samples to be placed in the same container, they must all belong to the same workflow and be currently queued to the same stage in that workflow.
Using the workflow name passed in by the user, do the following:
Search the workflow stage list of the first artifact and store the URI of the most recent stage that is part of the workflow, if it is queued. Otherwise, the script exits with an error message.
After storing the workflow stage URI of the first artifact, use the checkMatch function to check against the remaining artifacts in the list to verify they are all currently queued to the same stage.
If all artifacts are queued for the stage, they are removed from that stage's queue using the stored lastWfStageURI value.
In this example, all the artifacts are unassigned from the previous workflow stage returned and assigned to the rearray stage using the queuePlacementStep function. The previous methods have verified that the artifacts in the list can be rearrayed together.
The returned XML node is then posted using httpPOST.
In Clarity LIMS, under Configuration, select the Lab Work tab.
Create a master step of Standard step type.
From Configuration, select the Automation tab.
Select the Step Automation tab.
Create an automation for the AssignToLastRemoved.groovy script.
The {groovy_bin_location} and {script_location} parameters must be customized to reflect the locations on your computer.
Enable the automation on the master step you created in step 2.
Configure a new protocol and step as follows.
On the Lab Work tab, create a non-QC protocol.
In the Protocols list, select the new protocol and then add a new step to it. Base the new step on the master step you created in step 2.
On the Step Settings form, in the Automation section, you see the step automation you configured. Configure the automation triggers as follows.
Trigger Location—Step
Trigger Style—Automatic upon exit
On the Placement milestone, add the 96-well plate and 384-well plate container types as the permitted destination container types for the step.
Remove the default Tube container type.
Save the step.
Configure a new workflow as follows:
On the Lab Work tab, create a workflow.
Add the protocol you created to the workflow.
The first step of the AssignToLastRemovedStage script is the same as for the AssignToRearrayWf script: return the artifact node list.
However, in this script, you are not directly given the artifact LIMS IDs. Instead, because you receive the step URI from the process parameter command line, you can collect the artifact URIs from the inputs of the step details input-output map using the getArtifactNodes function.
An example step details URI might be {hostname}/api/v2/steps/{stepLIMSID}/details.
Each artifact in the list was removed from this stage before going through the rearray step.
With this in mind, and because the Clarity LIMS API stores artifact history (including stage history) in chronological order, the stage to which you now want to assign the samples is the second-to-last stage in the workflow-stage list.
The following method finds the stage from which the artifacts were removed using the getLastRemoved function:
You can then check to make sure all artifacts originated in this stage. This helps you avoid the scenario where the AssignToRearrayStage.groovy script was run on two groups of artifacts that were queued in different workflow stages.
Function: assignStage
This returned stage URI is then used to build the assignment XML to assign all the samples back to this stage with the assignStage function.
After posting this XML node, the samples are assigned back to the stage in which they began.
In the Projects Dashboard, select the samples to be rearrayed and run the 'Assign to Rearray' automation.
On automation trigger, the {userinput} phrase will invoke a dialog that prompts for the full name of the workflow.
The samples assigned by the Assign to Rearray automation are now available to assign to a new container.
Add the samples to the Ice Bucket and begin work.
The placement screen opens, allowing you to place the samples into the new container, in your desired placement pattern.
Proceed to the Record Details screen, then on to Next Steps. Do not perform any actions on these screens.
In the next step drop-down list, select Mark Protocol as Complete and select Apply.
Select Next. This initiates the 'Assign to last removed' trigger, which assigns the samples back to the step from which they were removed.
AssignToRearrayWf.groovy:
AssignToLastRemoved.groovy:
As samples are processed in the lab, they are kept in a container. Some of these containers hold multiple samples, and lab scientists often must switch between container tracking and sample tracking.
If you process several containers each day and track them in a list, you need a way to find which samples are in those containers, so that you can record the results of these container-based activities against the correct samples in Clarity LIMS.
The example finds which sample is in a given well of a multi-well container using Clarity LIMS and API (v2 r21 or later).
Before you follow the example, make sure that you have the following items:
Several samples exist in the Clarity LIMS.
A step has been run on the samples.
The outputs of the step have been placed in a 96-well plate.
Clarity LIMS captures detailed information for a container (eg, its name, LIMS ID, and the names of the samples in each of its wells). Information about the container and what it currently contains is available in the individual XML resource for the container.
The individual container resource contains a placement element for each sample placed on the container. Each placement element has a child element named value that describes one position on the container (eg, the placement elements for a 96-well plate include A:1, B:5, E:2).
In the script, the GET request retrieves the container specified by the container LIMS ID provided as input to the {containerLIMSID} parameter. The XML representation returned from the API is stored as the value of the container variable:
The following example shows the XML format returned for a container. The XML includes a placement element for each artifact that is placed in a well location in the container.
When you look for the artifact at the target location, the script searches through the placement elements for one with a value element that matches the target. If a match is found, it is stored as the value of the contents variable.
The uri attribute of the matching placement element is the URI of the artifact that is in the target well location. This is stored as the value of the artifactURI variable, and printed as the output of the script:
Running the script in a console prints the URI of the artifact at the target location.
GetContentsOfWellLocation.groovy:
When a lab processes samples, the samples are always in a container of some sort (eg, a tube, a 96-well plate, or a flow cell). In Clarity LIMS, this processing is modeled by placing all samples into containers. Because the Clarity LIMS interface relies on container placement for the display of many of its screens, adding containers is a critical step when running a process or adding samples through the API (v2 r21 or later).
The following example demonstrates how to add an empty container, of a predefined container type, to Clarity LIMS through the API.
If you would like to add a batch of containers to the system, you can increase the script execution speed by using batch operations. For more information, refer to the related batch operations articles in this section.
Before you can add a container to the system, you must first define the container to be created. You can construct the XML that defines the container using StreamingMarkupBuilder, a built-in Groovy data structure designed to build XML structures.
To construct the XML, you must declare the container namespace because you are building a container. The minimum information required to create a container is the container name and container type.
If you also want to add custom field values to the container you are creating, you must declare the userdefined namespace.
NOTE: As of Clarity LIMS v5, the term user-defined field (UDF) has been replaced with custom field in the user interface. However, the API resource is still called udf.
The POST command posts the XML constructed by StreamingMarkupBuilder to the containers resource of the API. The POST command also adds a link from the containers URI (the list of containers) to the new container.
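A sketch of the construction and POST follows. The container namespace URI, container-type URI, and utility method signatures are assumptions based on the attached PostContainer.groovy script; substitute values from your own system.

```groovy
import groovy.xml.StreamingMarkupBuilder

// Build the minimal container XML: a name and a container type
def containerXML = new StreamingMarkupBuilder().bind {
    mkp.declareNamespace(con: 'http://genologics.com/ri/container')
    'con:container' {
        name('My New Container')
        // The type URI/name below are placeholders for a type in your system
        type(uri: "${hostname}/api/v2/containertypes/1", name: '96 well plate')
    }
}

// POST to the containers list resource to create the container
def node = GLSRestApiUtils.xmlStringToNode(containerXML.toString())
def returnNode = GLSRestApiUtils.httpPOST(
    node, "${hostname}/api/v2/containers", username, password)
```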
The XML for the new container is as follows.
The XML for the list of containers, with the newly added container shown at the end of the list, is as follows.
For Clarity LIMS v5 and above, the Operations Interface Java client has been deprecated, and there is no equivalent Containers view screen in which to view empty containers added via the API. However, if you intend to add samples to Clarity LIMS through the API, this example is still relevant, as you must first add containers in which to place those samples.
PostContainer.groovy:
The most important information about a sample is often recorded in custom fields, which are accessible through the API (v2 r21 and later). These fields often contain information that is critical to the processing of the sample, such as species or sample type.
When samples come into the lab, you can provide lab scientists with information about priority or quality. You can provide this information by changing the value of specific sample custom fields.
This example shows how to change the value of a sample custom field called Priority after you have entered a submitted sample into the system.
As of Clarity LIMS v5, the term user-defined field (UDF) has been replaced with custom field in the user interface. However, the API resource is still called UDF.
There are two types of custom fields:
Master step fields—Configured on master steps. Master step fields only apply to the following:
The master step on which the fields are configured.
The steps derived from those master steps.
Global fields—Configured on entities (eg, submitted sample, derived sample, measurement, etc.). Global fields apply to the entire Clarity LIMS system.
In Clarity LIMS, you can display detailed information for a sample, including the following:
Name
Clarity LIMS ID
Custom fields
In the following figure, you can see that the sample name is DNA Sample-1 and the field named Priority has the value High.
In this example, change the value of the Priority custom field to Critical.
Before you can change the value of the field, you must first request the resource via a GET method.
To change a sample submitted in Clarity LIMS, use the individual sample resource. The XML returned from a GET on the individual sample resource contains the information about the sample.
The following GET method returns the full XML structure for the sample:
The sample variable now holds the complete XML structure returned from the sample GET request.
The XML representations of individual REST resources are self-contained entities. Always request the complete XML representation before editing any portion of the XML. If you do not use the complete XML when you update the resource, you can inadvertently change data.
The following shows XML returned for the sample, with the Priority field shown in red in the second to last line. In this example:
The Clarity LIMS configuration has added three fields to the expanded sample information.
The UDFs are named Sample Type, Phenotypic Information, and Priority.
When updating the Priority field, you need to do the following:
Change the value in the XML.
Use a PUT method to update the sample resource.
You can change the value for Priority to Critical by using the utility file's setUdfValue method.
The subsequent PUT method updates the sample resource at the specified URI using the complete XML representation, which includes the new custom field value.
A successful PUT returns the new XML in the returnNode. The results can also be reviewed in a web browser at <YourIPaddress>/api/v2/samples/<SampleLIMSID> URI.
An unsuccessful PUT returns the HTTP response code and message in the returnNode XML.
NOTE: The values for the other two fields, Sample Type and Phenotypic Information, did not change. These values did not change because they were included in the XML used in the PUT (ie, they were held in the sample variable as part of the complete XML structure).
If those custom fields had not been included in the XML, they would have been updated to have no value.
The following XML from our example shows the expected output:
In Clarity LIMS, the updated sample details now show the new Priority value.
UpdateSampleUDF.groovy:
In Clarity LIMS, derived sample automations are automations that users can run on derived samples directly from the Projects Dashboard.
The following example uses an automation to initiate a script that removes multiple derived samples from workflows. The example also describes the main functions included in the script, and shows how to configure the automation in Clarity LIMS and run it from the Projects Dashboard.
Before removing samples from the workflows, make sure you have the following items:
A project containing at least one sample assigned to a workflow.
A step has been run on a sample, resulting in a derived sample.
The derived sample is associated with one or more workflows.
The attached UnassignSamplesFromWorkflows.groovy script uses the derived sample automations feature to remove selected derived samples from their associated workflows. The following actions must be done when removing samples from these workflows.
The getSampleNodes function is passed a list of derived sample LIMS IDs (as a command-line argument) to build a list containing the XML representations of the samples. A for-each loop on the derived sample list makes a GET call for each sample and creates the sample node list. The following command-line example shows how the getSampleNodes function works:
This list is used to retrieve the sample URIs and the workflow-stage URIs. These URIs are required to build the unassignment XML.
The for loop does the following actions:
Makes a GET call for each workflow-stage to which the passed sample is assigned.
Retrieves the associated workflow URIs.
Returns a list containing all URIs for the workflows with which the sample is associated.
Now that the functions used to retrieve both the derived sample URIs and the workflow URIs have been built, you can use StreamingMarkupBuilder to create the XML and then POST to the unassignment URI. This process can be done with the unassignSamplesFromWorkflows and unassignSamplesXML functions.
To unassign the derived samples, you can POST to the artifacts URI at ${hostname}/api/v2/route/artifacts. Nested loops create the declaration for each sample and their associated workflows. The following example shows the declaration built in the format of the workflow URI, with the unassign flag followed by the URI of the sample being unassigned.
Now that the XML is built, convert the XML to a node and post it as follows.
Use GLSRestApiUtils to convert the XML to a node
POST the node using the following command:
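Taken together, the build-convert-POST sequence might look like this sketch. The routing namespace, the unassign element structure, and the utility method signatures are assumptions based on the attached UnassignSamplesFromWorkflows.groovy script.

```groovy
import groovy.xml.StreamingMarkupBuilder

// Nested loops: one unassign block per workflow, each listing the
// sample artifacts to remove from that workflow
def unassignXML = new StreamingMarkupBuilder().bind {
    mkp.declareNamespace(rt: 'http://genologics.com/ri/routing')
    'rt:routing' {
        workflowURIs.each { wfURI ->
            unassign('workflow-uri': wfURI) {
                sampleArtifactURIs.each { artURI ->
                    artifact(uri: artURI)
                }
            }
        }
    }
}

// Convert the markup to a node and POST it to the routing endpoint
def node = GLSRestApiUtils.xmlStringToNode(unassignXML.toString())
GLSRestApiUtils.httpPOST(node, "${hostname}/api/v2/route/artifacts", username, password)
```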
Automations can be configured and run in Clarity LIMS as follows.
In Clarity LIMS, under Configuration, select the Automation tab.
Select the Derived Sample Automation tab.
Select New Automation and enter the following information:
Automation Name—This is the name that displays to the user running the automation from the Projects Dashboard. Choose a descriptive name that reflects the functionality/purpose (eg, Remove from Workflows).
Channel Name—Enter the channel name.
Command Line—Enter the command line required to invoke the script.
Select Save.
Run the automation as follows.
Open the Projects Dashboard.
Select a project containing in-progress samples. Select In-progress samples.
In the sample list, you see the submitted and derived samples that are currently in progress for this project.
Select one or more derived samples.
Selecting samples activates the Action button and drop-down list.
In the Action drop-down list, select the Remove From Workflows automation created in the previous step.
The XML representation of the selected samples now shows an additional workflow stage with a status of REMOVED.
UnassignSamplesFromWorkflows.groovy:
Derived sample automations are automations that users can run on derived samples directly from the Projects Dashboard in Clarity LIMS.
The following example uses an automation to initiate a script that requeues samples to an earlier step in the workflow. The example also describes the main functions included in the script and demonstrates the configuration options that prompt the user for input. These options allow for greater flexibility during script runs. Before you follow the example, make sure that you have the following items:
A project containing samples assigned to a multi-stage workflow.
Samples that must be requeued. These samples must have completed at least one step in the workflow and must be available for requeue.
The purpose of the attached RequeueSamples.groovy script is to requeue selected derived samples to a previous step in the workflow with the derived sample automations feature.
The getSampleNodes function is passed a list of derived sample LIMS IDs (as a command-line argument) to build a list containing the XML representations of the samples. The resulting sample URI list can then be used with a batchGET to return the sample nodes:
To retrieve the workflow name, you can URL encode the workflow name and use the result to query and retrieve the workflow URI:
The stage names are guaranteed to be unique for each workflow. However, they may not be unique in the Clarity LIMS system. As a result, the stage URI cannot be queried for in the same way as the workflow URI.
Instead, you can navigate through the workflow node to find the stage that matches the stage name specified using the getStageURI function. If a match is found, return the stage URI.
Next, you must make sure that each sample meets the criteria to be requeued using the canRequeue function. The following method checks all workflow stages for the samples:
If a match is found between a workflow stage URI and the stage URI specified, the sample node is added to a list of samples that can be requeued using the requeueList function.
If all the samples have this match and a status that allows for requeue, the list is returned. Otherwise, the script exits with an error message that states the first sample to cause failure.
In this example, both unassignment from and assignment to a workflow stage must occur to complete the requeue. As the samples are requeuing to a previous stage in the workflow and can currently be queued for another stage, you must remove them from these queues.
The getCurrentStageURI and lastStageRun functions check the sample node for its most recent workflow stage. If the node is in a queued status, it returns that stage URI to be unassigned.
Using the previous methods and their results, the following code uses StreamingMarkupBuilder and the assignmentXML function to build the XML to be posted:
The returned XML node is then posted using httpPOST.
Add and configure the automation
In Clarity LIMS, under Configuration, select the Automation tab.
Select the Derived Sample Automation tab.
Select New Automation and enter the following information:
Automation Name—This is the name that displays to the user running the automation from the Projects Dashboard. Choose a descriptive name that reflects the functionality/purpose (eg, Requeue Samples).
Channel Name—Enter the channel name.
Command Line—Enter the command line required to invoke the script.
Select Save.
Run the automation as follows.
Open the Projects Dashboard.
Select a project containing in-progress samples. Select In-progress samples.
In the sample list, you will see all of the submitted and derived samples that are currently in progress for this project.
Select one or more derived samples. Selecting samples activates the Action button and drop-down list.
In the Action drop-down list, select the Requeue Samples automation.
In this example, the -w and -t {userinput} options invoke a dialog box on automation trigger. The user is required to enter two parameters: the full name of the stage and the workflow for which selected samples are to be requeued. The names must be enclosed in quotation marks.
If the requeue is successful, each requeued sample is marked with a complete tag. Hovering over a sample shows a more detailed message.
RequeueSamples.groovy:
In Clarity LIMS, under Lab View, select the protocol you created earlier.
A sample can be associated with one or many workflows, and each derived sample has a list of the workflow stages to which it is assigned. Making a GET call on each workflow-stage URI retrieves its XML representation, from which the workflow URI can be acquired and added to a list. The getWorkflowURIs function does this for each sample node included in the list, using sampleURIList, username, and password.
As samples are processed in the lab, substances are moved from one container to another. Because container locations are sometimes used to reference the sample in data files, tracking the location of these substances within containers is one of the key values that Clarity LIMS provides to the lab.
Within the REST API (v2 r21 or later), analytes represent the substances on which processes/steps are run. These analytes are the substances that are chemically altered and transferred between containers as samples are processed in the lab.
In Clarity LIMS, steps are not run on the original submitted samples, but are instead run on (and can also generate) derived samples. In the API, derived samples are known as analytes. Each sample resource (the original submitted sample) has a corresponding analyte artifact that describes its container location and is used for running processes/steps.
For more information on analyte artifacts and other REST resources, see Structure of REST Resources.
For all Clarity LIMS users, make sure you have done the following actions:
Added a sample to Clarity LIMS.
Run a process/step on the sample, with the same process/step generating a derived sample output.
Added the generated derived sample to a multi-well container (eg, a 96-well plate).
The container location information for an individual derived sample/analyte is located within the XML for the individual artifact resource. Because artifacts are generated by running steps in the LIMS, this is a logical place to keep track of the location.
Within a script, you can use a GET method to request the artifact. The resulting XML structure contains all the information related to the artifact, including its container and well location.
In this example, a derived sample named Brain-600 is placed in well A:1 of a container with LIMS ID 27-1259. This information is found in the location element.
The location element has two child elements:
One linking to the container URI, which specifies which container the analyte is in.
One for the well location, which has the name 'value' in the XML structure.
Valid values for a well location can be either numeric or alphabetic, and are determined by the configuration of the container in Clarity LIMS.
Well locations are always represented in the row:column format. For example, a 96-well plate can have locations A:1 and C:12, and a tube can have a single well called 1:1.
Use the following XML example to retrieve the artifact:
Because the container position is structured in the row:column format, you can store the row and column in separate variables by splitting the container position on the colon character. You can access the string value of the location value node using the text() method, as shown in the following code:
Running the script in a console produces the following output:
GetContainerAnalyteLocation.groovy:
The large capacity of current Next Generation Sequencing (NGS) instruments means that labs are able to perform multiplexed experiments with multiple samples pooled into a single lane or region of the container. Before being pooled, samples are assigned a unique tag or index. After sequencing and initial analysis are complete, the sequencing results must be demultiplexed to separate data and relate the results back to each individual sample.
Clarity LIMS allows you to track a multiplexing workflow by adding reagents and reagent labels to artifacts, and then using the reagent labels to demultiplex the resulting files.
There are several ways to apply reagent labels. However, all methods involve creating placeholders that link the final sequences back to the original submitted samples. Either the lab scientist or an automated process must determine which file actually belongs with which placeholder. For more information on applying reagent labels, refer to Work with Multiplexing.
This example walks through assigning user-defined field (UDF)/custom field values to the demultiplexed output files based on upstream derived sample (analyte) UDF/custom field values. This includes upwards traversal of a sample history / genealogy, based on assigned reagent labels. This differs from upstream traversal based strictly upon process input-output mappings.
As of Clarity LIMS v5, the term user-defined field (UDF) has been replaced with custom field in the user interface. However, the API resource is still called UDF.
There are two types of custom fields:
Master step fields—Configured on master steps. Master step fields only apply to the following:
The master step on which the fields are configured.
The steps derived from those master steps.
Global fields—Configured on entities (eg, submitted sample, derived sample, measurement, etc.). Global fields apply to the entire Clarity LIMS system.
If you are using Clarity LIMS v5 or later, make sure you have completed the following actions:
Created a project and have added multiple samples to it.
Run the samples through a sequence of steps that perform the following:
Reagent addition / reagent label assignment
Pooling
Demultiplexing (to produce a set of per-reagent-label result file outputs).
Set a Numeric custom field value on each derived sample input to the reagent addition process.
A Numeric custom field with no assigned value exists on each of the per-reagent-label result file outputs. The value of this field will be computed from the set of upstream derived sample custom field values corresponding to the reagent label of the result file.
You also must make sure that API v2 r21 or later is installed.
Due to the complexity of NGS workflows, beginning at the top level submitted sample resource and working down to the result file is not the most efficient way to traverse the sample history/genealogy. It is easier to start with the result file artifact, and then trace upward to find the process with the UDFs/custom fields that you are looking for.
Starting from the per-reagent-label result file, you can traverse upward in the sample history using the parent process URI in the XML returned for each artifact. At each level of the sample history, the number of artifacts returned may increase due to processes that pooled individual artifacts.
In this example:
The upstreamArtifactLUIDs list represents the current set of relevant artifacts.
The foundUpstreamArtifactNodes list stores the target upstream artifact nodes found.
The sample history traversal stops at the inputs to the process that performed the reagent addition/reagent label assignment.
The traversal is executed using a while loop over the contents of the upstreamArtifactLUIDs list.
The list serves as a stack of artifacts. With each iteration of the loop, an artifact is removed from the end of the list and the relevant input artifacts to its parent process are pushed back onto the end of the list.
After the loop has executed, the foundUpstreamArtifactNodes list will contain all of the artifacts that are assigned the reagent label of interest upon execution of the next process in the sample history.
The final step in the script assigns a value to a Numeric UDF / custom field on the per-reagent-label output result file, Mean DNA Prep 260:280 Ratio, by computing the mean value of a Numeric UDF / custom field on each of the foundUpstreamArtifactNodes, DNA prep 260:280 ratio.
First, compute the mean using the following example:
Then, set the UDF/custom field on the per-reagent-label output result file using the following example:
TraversingPooledDemuxGenealogy.groovy:
When samples are processed in the lab, they generally produce child samples that are altered in some way. Eventually, the samples are analyzed on an instrument, with the result being a data file. Often these data are analyzed further, which produces additional data files.
The sample processing that occurs in the lab is modeled as steps in the Clarity LIMS web interface. In the REST API (v2 r21 or later), this processing is modeled as processes, and the samples and files that are processed are represented as artifacts. Understanding the representation of inputs and outputs within the XML for an individual process is critical to being able to use the REST API effectively.
If you are using Clarity LIMS v5 or later, make sure that you have done the following actions:
Added samples to the LIMS.
Configured a step that generates derived samples in the Lab Work tab.
Configured a file placeholder for a sample measurement file to be generated and attached by an automation script at run time. This configuration is done in the Master Step Settings of the step on the Record Details milestone.
Configured an automation that generates the sample measurement file and have enabled it on the step. This configuration is done in the Automation tab.
Configured the automation triggers. This configuration is done in the Step Settings screen, under the Record Details milestone.
Run the step on some samples.
As of Clarity LIMS v5, the Operations Interface Java client has been deprecated. In LIMS v5 and later, there is no equivalent screen to the Input/Output Explorer where you can select step inputs/outputs and generated files and view their corresponding inputs/outputs and files.
However, the following API code example is still relevant and will produce the same results.
The first step in this example is to request the individual process resource through a GET method. The full XML representation returned includes the input-output-map.
To illustrate the relationships between the inputs and outputs, you can save them using a Groovy Map data structure. This maps the output LIMS IDs to a list of input LIMS IDs associated with each output, as shown in the following example:
The process variable now holds the complete XML structure returned from the process URI.
In the following example XML snippet, elements of the input-output-map are labeled with <input-output-map>:
All of the input and output URIs include a ?state= parameter followed by a number. State allows Clarity LIMS to track historical values for QC, volume, and concentration, so you can compare the state of an analyte before and after a process was run. However, when you make changes to an artifact, always work with the most current state.
To make sure that you are getting the current state when you do a GET request, simply remove the state from the artifact URI.
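For example, a small helper closure can strip the state parameter (a sketch; the URI is illustrative):

```groovy
// Strip the ?state= parameter so a GET returns the artifact's current state
def stripState = { String uri -> uri.replaceAll(/\?state=\d+/, '') }

def staleUri   = 'http://yourIPaddress/api/v2/artifacts/TST110A291AP45?state=1234'
def currentUri = stripState(staleUri)
```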
You can examine each input-output-map to find details about the relationship represented between inputs and outputs. The following code puts the output and input LIMS IDs into an array named outputToInputMap.
As the output type is also important for further processing, outputToInputMap is formatted as follows:
If the output is shared for all inputs (eg, the sample measurement file with LIMS ID 92-13007), the inputs to the process are listed. If the output relates to an individual input, only the LIMS ID for that particular input will be listed.
An output is listed in multiple input-output-map elements when it is generated from multiple inputs. The first time any particular output LIMS ID is seen, the output type and input LIMS ID from the input-output-map are added to the list stored in outputToInputMap.
If the output LIMS ID already has a list in outputToInputMap, then the code adds input LIMS ID to the list.
One way to access the information is to print it out. You can run through each key-value pair and print the information it contains, as shown in the following example:
Running the script on the command line generates output similar to the following, in which the inputs used to generate each output are listed.
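The mapping logic can be sketched as follows. The process XML is inlined and trimmed here (in the attached script it comes from the GET on the process URI), and the LIMS IDs are illustrative:

```groovy
def processXml = '''
<prc:process xmlns:prc="http://genologics.com/ri/process">
  <input-output-map>
    <input limsid="TST110A291AP45" uri="http://yourIPaddress/api/v2/artifacts/TST110A291AP45"/>
    <output limsid="92-13007" output-type="ResultFile" uri="http://yourIPaddress/api/v2/artifacts/92-13007"/>
  </input-output-map>
  <input-output-map>
    <input limsid="TST110A292AP45" uri="http://yourIPaddress/api/v2/artifacts/TST110A292AP45"/>
    <output limsid="92-13007" output-type="ResultFile" uri="http://yourIPaddress/api/v2/artifacts/92-13007"/>
  </input-output-map>
  <input-output-map>
    <input limsid="TST110A291AP45" uri="http://yourIPaddress/api/v2/artifacts/TST110A291AP45"/>
    <output limsid="TST110A293AP45" output-type="Analyte" uri="http://yourIPaddress/api/v2/artifacts/TST110A293AP45"/>
  </input-output-map>
</prc:process>'''

def process = new XmlParser(false, false).parseText(processXml)
def outputToInputMap = [:]
process.'input-output-map'.each { iomap ->
    def outputId   = iomap.output[0].'@limsid'
    def outputType = iomap.output[0].'@output-type'
    def inputId    = iomap.input[0].'@limsid'
    if (outputToInputMap.containsKey(outputId)) {
        // Output already seen: just add this input to its list
        outputToInputMap[outputId] << inputId
    } else {
        // First time seen: store the output type followed by the first input
        outputToInputMap[outputId] = [outputType, inputId]
    }
}

// Print each output with the inputs that generated it
outputToInputMap.each { outputId, info ->
    println "Output ${outputId} (${info[0]}) was generated from inputs: ${info.tail().join(', ')}"
}
```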
GetProcessInputOutput.groovy:
As processing occurs in the lab, associated processes and steps are run in Clarity LIMS. Often, key data must be recorded for the derived samples (referred to as analytes in the API) generated by these steps.
The following example explains how to change the value of an analyte UDF/global custom field.
If you would like to update a batch of output derived samples (analytes), you can increase the script execution speed by using batch operations. For more information, see Working with Batch Resources.
In Clarity LIMS v5 or later, the key data fields are configured as global custom fields on derived samples. If you are using Clarity LIMS v5 or later, make sure you have the following items:
A defined global custom field named Library Size on the Derived Sample object.
A configured Library Prep step to apply Library Size to generated derived samples.
A Library Prep process that has been run and has generated derived samples.
As of Clarity LIMS v5, the term user-defined field (UDF) has been replaced with custom field in the user interface. However, the API resource is still called UDF.
There are two types of custom fields:
Master step fields—Configured on master steps. Master step fields only apply to the following:
The master step on which the fields are configured.
The steps derived from those master steps.
Global fields—Configured on entities (eg, submitted sample, derived sample, measurement, etc.). Global fields apply to the entire Clarity LIMS system.
In Clarity LIMS v5 and later, the Record Details screen displays the information about the derived samples generated by a step. You can view the global fields associated with the derived samples in the Sample Table.
The following screenshot shows the Library Size values for the derived samples.
Derived sample information is stored in the API in the analyte resource. Step information is stored in the process resource. Each global field value is stored as a udf element.
An analyte resource contains specific derived sample details that are recorded in lab steps. Those details are typically stored in global custom fields (configured in Clarity LIMS on the Derived Sample object) and then associated with the step.
When you update the information for a derived sample by updating the analyte API resource, only the global fields that are associated with the step can be updated.
To update the derived samples generated by a step, you must first request the process resource through a GET method.
The following GET method provides the full XML structure for the step:
The process variable now holds the complete XML structure returned from the GET request.
The XML returned from a GET on the process resource contains the URIs of the process output artifacts (the derived samples generated by the step). You can use these URIs to query for each individual artifact resource.
The process resource contains many input-output-map elements, where each element represents an artifact. The following snippet of the XML shows the process:
Because processes with multiple inputs and outputs tend to be large, many of the input-output-map nodes have been omitted from this example.
After you have retrieved each individual artifact resource, you can use this information to update the UDFs/custom fields for each output analyte after you request its resource.
Request the analyte output resource and update the UDF/custom field as follows.
If the output-type is analyte, then run through each input-output-map and request the output artifact resource.
Use a GET to return the XML for each artifact and store it in a variable.
When you have the analytes stored, change the analyte UDF/custom field in two steps:
The UDF/custom field change in the XML.
The http PUT call to update the artifact resource.
The UDF/custom field change can be achieved with the Library Size UDF/custom field XML element defined in the following code. In this example, the Library Size value is updated to 25.
The PUT method updates the artifact resource at the specified URI using the complete XML representation, including the UDF/custom field. The setUdfValue method of the util library is used to perform this in a safe manner.
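The field manipulation itself can be sketched as follows, assuming the analyte XML has already been retrieved (it is inlined here). The util library's setUdfValue wraps this find-or-create pattern; its exact signature is not shown in this sketch.

```groovy
def analyteXml = '''
<art:artifact xmlns:art="http://genologics.com/ri/artifact"
              xmlns:udf="http://genologics.com/ri/userdefined">
  <name>Heart-1</name>
  <udf:field type="Numeric" name="Library Size">20</udf:field>
</art:artifact>'''

def analyte = new XmlParser(false, false).parseText(analyteXml)

// Find the Library Size field and update it, or create it if the analyte lacks it
def sizeField = analyte.'udf:field'.find { it.'@name' == 'Library Size' }
if (sizeField) {
    sizeField.setValue('25')
} else {
    analyte.appendNode('udf:field', [type: 'Numeric', name: 'Library Size'], '25')
}
// A PUT of the complete XML back to the artifact URI (state removed) completes the update.
```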
The output-type attribute is the user-defined name for each of the output types generated by a process/step. This is not equivalent to the type element of an artifact whose value is one of several hard-coded artifact types.
If you must filter inputs or outputs from the input-output-map based on the artifact type, you must GET each artifact in question to discover its type.
It is important that you remove the state from each of the analyteURIs before you GET them, to make sure that you are working with the most recent state.
Otherwise, when you PUT the analyteURI back with your UDF/custom field changes, you can inadvertently revert information, such as QC, volume, and concentration, to previous values.
The results can be reviewed in a web browser through the following URI:
In Clarity LIMS v5 or later, in the Record Details screen, the Sample table now shows the updated Library Size.
UpdateProcessUDFInfo.groovy:
UpdateUDFAnalyteOutput.groovy:
A common requirement in applications involving indexed sequencing is to determine the sequence corresponding to a reagent label. This example shows how to configure index reagent types, which you can then use to find the sequence for a reagent label. Before you follow the example, make sure that you have a compatible version of API (v2 r14 to v2 r24).
Reagents and reagent labels are independent concepts in the API. However, the recommended practice is to name reagent labels after reagent types. This allows you to use the label name to look up the sequence information on the reagent type resource. This practice is consistent with the Operations Interface process wizards. When a reagent is applied to a sample in the user interface, a reagent label with the same name as the reagent type is added to the analyte resource.
The following actions are also recommended:
Configure an index reagent type with the correct sequence for each type of index or tag you plan to use.
Use the names of the index reagent types as reagent labels.
Following these practices allows you to find the sequence for a reagent label by looking up the sequence in the corresponding reagent type.
For each index or tag you plan to use in indexed sequencing, configure a corresponding index reagent type as follows.
As administrator, click Configuration > Consumables > Labels.
Add a new label group.
Then, to add labels to the group:
Download a template label list (Microsoft® Excel® file) from the Labels configuration screen.
Add reagent type details to the downloaded template.
Upload the completed label list.
After you have configured reagent types for each indexing sequence you intend to use, and have used those reagent type names as reagent label names, you can easily retrieve the corresponding sequence using the REST API.
The following code snippet shows how to retrieve the index sequences (when available):
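A minimal sketch of the lookup follows. The reagent-type XML is inlined and the sequence value is illustrative; a real script would GET the reagenttypes list resource filtered by the label name and follow the returned URI.

```groovy
// Reagent-type XML as returned by the API for an index reagent type (inlined here)
def reagentTypeXml = '''
<rtp:reagent-type xmlns:rtp="http://genologics.com/ri/reagenttype" name="Index 1">
  <special-type name="Index">
    <attribute name="Sequence" value="ACGTACGT"/>
  </special-type>
</rtp:reagent-type>'''

def reagentType = new XmlParser(false, false).parseText(reagentTypeXml)

// Only Index special types carry a Sequence attribute
def sequence = null
if (reagentType.'special-type'[0]?.'@name' == 'Index') {
    sequence = reagentType.'special-type'.attribute.find { it.'@name' == 'Sequence' }?.'@value'
}
println "Artifact was labeled with ${reagentType.'@name'}, index sequence: ${sequence}"
```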
For an artifact labeled with Index 1, this would produce the following information:
RetrievingReagentLabelIndex.groovy:
Imagine that you use projects in Clarity LIMS to track a collection of sample work that represents a subset of work from a larger translational research study. The translational research study consists of several projects within the LIMS and the information about each of the projects that make up the research study is predefined in another system.
Before the work starts in the lab, you can use the information in the other system to automatically create projects. This reduces errors and means that lab scientists do not have to spend time manually entering data a second time.
This example shows how to automate the creation of a project using a script and the projects resource POST method.
As of Clarity LIMS v5, the term user-defined field (UDF) has been replaced with custom field in the user interface. However, the API resource is still called UDF.
There are two types of custom fields:
Master step fields—Configured on master steps. Master step fields only apply to the following:
The master step on which the fields are configured.
The steps derived from those master steps.
Global fields—Configured on entities (eg, submitted sample, derived sample, measurement, etc.). Global fields apply to the entire Clarity LIMS system.
Before you follow the example, make sure you have the following items:
A user-defined field (UDF) / custom field named Objective is defined for projects.
A project name that is unique and does not exist in the system.
A compatible version of API (v2 r21 or later).
Before you can add a project to the system via the API, you must construct the XML representation for the project you want to create. You can then POST the new project resource.
You can define the project XML using StreamingMarkupBuilder, a built-in Groovy data structure designed to build XML structures.
Declare the project namespace because you are building a project.
If you wish to include values for project UDFs as part of the project XML you are constructing, then you must also declare the userdefined namespace.
In the following example, the project name, open date, researcher, and a UDF / custom field named Objective are included in the XML constructed for the project.
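A sketch of the construction follows, with placeholder values for the project name, date, and researcher URI:

```groovy
import groovy.xml.StreamingMarkupBuilder

// Build the project XML; both the project and userdefined namespaces are declared
def projectXml = new StreamingMarkupBuilder().bind {
    mkp.declareNamespace(prj: 'http://genologics.com/ri/project',
                         udf: 'http://genologics.com/ri/userdefined')
    'prj:project' {
        name('Cookbook Example Project')                    // placeholder name (must be unique)
        'open-date'('2024-01-15')                           // placeholder date
        'udf:field'(type: 'String', name: 'Objective', 'Sequence example samples')
        researcher(uri: 'http://yourIPaddress/api/v2/researchers/103')  // placeholder URI
    }
}.toString()
```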
For Clarity LIMS v5 or later, UDTs are only supported in the API.
For the POST to the projects resource to be successful, only project name and researcher URI are required. Adding more details is a good practice for keeping your system organized and understanding what must be accomplished for each project.
The following POST command adds a new project resource using the XML constructed by StreamingMarkupBuilder:
The XML returned after a successful POST of the XML built by StreamingMarkupBuilder is the same as the XML representation of the project:
PostProject.groovy:
The following example shows you how to remove information from a project using Clarity LIMS and API (compatible with v2 r21 and later).
As of Clarity LIMS v5, the term user-defined field (UDF) has been replaced with custom field in the user interface. However, the API resource is still called UDF.
There are two types of custom fields:
Master step fields—Configured on master steps. Master step fields only apply to the following:
The master step on which the fields are configured.
The steps derived from those master steps.
Global fields—Configured on entities (eg, submitted sample, derived sample, measurement, etc.). Global fields apply to the entire Clarity LIMS system.
Before you follow the example, make sure that you have the following items:
A user-defined field (UDF) / custom field named Objective is defined for projects.
A project name that is unique and does not exist in the system.
This example does the following actions:
POST a new project to the LIMS, with a UDF / custom field value for Objective.
Remove a child XML node from the parent XML representing the project resource.
Update the project resource.
First, set up the information required to perform a successful project POST. The project name must be unique.
The projectNode should contain the response XML from the POST and resemble the following output:
The following code removes the child XML node <udf:field> from the parent XML node <prj:project>:
If multiple nodes of the same type exist, [0] is the first item in this list of same typed nodes (eg, 0 contains 1st item, 1 contains 2nd item, 2 contains 3rd item, and so on).
To remove the 14th udf:field, you would use projectNode?.children()?.remove(projectNode.'udf:field'[13])
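The removal can be sketched as follows, with the project XML inlined rather than taken from a POST response:

```groovy
def projectXml = '''
<prj:project xmlns:prj="http://genologics.com/ri/project"
             xmlns:udf="http://genologics.com/ri/userdefined">
  <name>Remove Child Node Example</name>
  <udf:field type="String" name="Objective">Demonstrate node removal</udf:field>
</prj:project>'''

def projectNode = new XmlParser(false, false).parseText(projectXml)
assert projectNode.'udf:field'.size() == 1

// [0] is the first udf:field node; [13] would be the 14th, and so on
projectNode?.children()?.remove(projectNode.'udf:field'[0])
// A PUT of projectNode back to the project URI completes the update.
```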
RemoveChildNode.groovy:
Lab scientists must understand the priority of the samples they are working with. To help them prioritize their work, you can rename the derived samples generated by a step so that they include the priority assigned to the original submitted sample.
If you would like to rename a batch of derived samples, you can increase the script execution speed by using batch operations. You can also use a script to rename a derived sample after a step completes.
If you are using Clarity LIMS v5 and later, make sure that you have done the following actions:
Added samples to the system.
Defined a global custom field named Priority on the Submitted Sample object. The field should have default values sp1, sp2, and sp3, and it should be enabled on a step.
Run samples through the step with the Priority of each sample set to sp1, sp2, or sp3.
In this example, six samples have been added to a project in Clarity LIMS. The submitted sample names are Heart-1 through Heart-6. The samples are run through a step that generates derived samples, and the priority of each sample is set.
By default, the name of the derived samples generated by the step would follow the name of the original submitted samples as shown in the Assign Next Steps screen of the step.
This example appends the priority of the submitted sample to the name of the derived sample output. The priority is defined by the Priority sample UDF (in Clarity LIMS v4.2 or earlier) or the Priority submitted sample custom field (in Clarity LIMS v5 or later).
Renaming the derived sample consists of the following steps:
Request the step information (process resource) for the step that generated the derived sample (analyte resource).
Request the individual analyte resource for the derived sample to be renamed.
Request the sample resource linked from the analyte resource to get the submitted sample UDF/custom field value to use for the update.
Update the individual analyte output resource with the new name.
When using the REST API, you will often start with the LIMS ID for the step that generated a derived sample. The key API concepts are as follows.
Information about a step is stored in the process resource.
In general, automation scripts access information about a step using the processURI, which links to the individual process resource. The input-output-map in the XML returned by the individual process resource gives the script access to the artifacts that were inputs and outputs to the process.
Information about a derived sample is stored in the analyte resource. This is used as the input and output of a step.
Analytes are also used to record specific details from lab processing.
The XML representation for an individual analyte contains a link to the URI of its submitted sample, and to the URI of the process that generated it (parent process).
The following GET method returns the full XML structure for the step.
The process variable now holds the complete XML structure returned from the process GET request, as shown in the following example. The URI for each analyte generated is given in the output node in each input-output-map element. For more information on the input-output-map, see View the Inputs and Outputs of a Process/Step.
Each output node has an output-type attribute that is the user-defined type name of the output. You can iterate through each input-output-map and request the output artifact resource for each output of a particular output-type.
In the following code example, the input-output-map elements are filtered on output-type = Analyte.
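The filtering can be sketched as follows. The process XML is inlined and trimmed (in the attached script it comes from the GET on the process URI), and the state is stripped from each URI so a later GET returns the current state:

```groovy
def processXml = '''
<prc:process xmlns:prc="http://genologics.com/ri/process">
  <input-output-map>
    <input limsid="TST110A291AP44" uri="http://yourIPaddress/api/v2/artifacts/TST110A291AP44?state=100"/>
    <output limsid="TST110A291AP45" output-type="Analyte" uri="http://yourIPaddress/api/v2/artifacts/TST110A291AP45?state=101"/>
  </input-output-map>
  <input-output-map>
    <input limsid="TST110A291AP44" uri="http://yourIPaddress/api/v2/artifacts/TST110A291AP44?state=100"/>
    <output limsid="92-13007" output-type="ResultFile" uri="http://yourIPaddress/api/v2/artifacts/92-13007?state=102"/>
  </input-output-map>
</prc:process>'''

def process = new XmlParser(false, false).parseText(processXml)

// Keep only Analyte outputs, and strip ?state= so a GET returns the current state
def analyteURIs = process.'input-output-map'.findAll {
    it.output[0].'@output-type' == 'Analyte'
}.collect {
    it.output[0].'@uri'.replaceAll(/\?state=\d+/, '')
}
// Each URI in analyteURIs can now be GET to retrieve the analyte XML.
```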
The output-type attribute is the user-defined name for each of the output types generated by a process. This is not equivalent to the type element of an artifact whose value is one of several hard-coded artifact types.
If you must filter inputs or outputs from the input-output-map based on the artifact type, you need to GET each artifact in question to discover its type.
It is important that you remove the state from each of the analyteURIs before you GET them to make sure that you are working with the most recent state. Otherwise, when you PUT the analyteURI back with your UDF changes, you can inadvertently revert information (eg, QC, volume, and concentration) to their previous values.
From the analyte XML, you can use the submitted sample URI to return the sample that maps to that analyte.
Updating Sample Information shows how to set a sample UDF/global field. To get the value of a sample UDF/global field, use the same method to find the field, and then use the .text() method to get the field value.
The value of the UDF is stored in the variable samplePriority so that it is then available for the renaming step described below.
The variable analyte holds the complete XML structure returned from a GET on the URI in the output node. The variable nameNode references the XML element in that structure that contains the artifact's name. The following example shows the XML for the analyte named Heart-1.
Renaming the derived sample consists of two steps:
The name change in the XML.
The PUT call to update the analyte resource.
The name change is made by assigning a new value to the nameNode XML element, as shown in the following example.
The http PUT command updates the artifact resource using the complete XML representation, including the new name.
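Both steps can be sketched as follows, with the analyte XML and priority value inlined (the real script GETs them from the API; the separator in the new name is illustrative):

```groovy
def analyteXml = '''
<art:artifact xmlns:art="http://genologics.com/ri/artifact">
  <name>Heart-1</name>
</art:artifact>'''

def samplePriority = 'sp1'   // value read from the submitted sample's Priority field
def analyte = new XmlParser(false, false).parseText(analyteXml)

// name() is a method on groovy.util.Node, so look the <name> element up explicitly
def nameNode = analyte.children().find { it instanceof Node && it.name() == 'name' }
nameNode.setValue(nameNode.text() + '-' + samplePriority)
// A PUT of the complete analyte XML back to its URI completes the rename.
```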
After a successful PUT, the results can be reviewed in a web browser at http://yourIPaddress/api/v2/artifacts/TST110A291AP45.
The following XML resource is returned from the PUT command and is stored in returnNode.
In Clarity LIMS, the Assign Next Steps screen shows the new names for the generated derived samples.
This example shows simple renaming of derived samples based on a submitted sample UDF/global field. However, you can use step names, step UDFs (known as master step fields in Clarity LIMS v5 or later), project information, and so on, to rename derived samples and provide critical information to scientists working in the lab.
UpdateAnalyteName.groovy:
The researcher resource holds the personal details for users and clients in Clarity LIMS.
Suppose that you have a separate system that maintains the contact details for your customers and collaborators. You could use this system to synchronize the details for researchers with the details in Clarity LIMS. This example shows how to update the phone number of a researcher using a PUT to the individual researcher resource.
In the Clarity LIMS user interface, the term Labs has been replaced with Accounts. However, the API resource is still called labs and the Collaborations Interface still refers to Labs rather than Accounts. The term Contact has been replaced with Client. The API resource is still called contact.
The LabLink Collaborations Interface is not supported in Clarity LIMS v5 and later. However, because support for this interface is planned for a future release, the Collaborator user role has not been removed.
Before you follow the example, make sure you have the following items:
A defined client in Clarity LIMS.
A compatible version of API (v2 r21 or later).
For Clarity LIMS v5 and later, in the web interface, the User and Clients screen lists all users and clients in the system.
In the API, information for a particular researcher can be retrieved within a script using a GET call:
In this case, the URI represents the individual researcher resource for the researcher named Sue Erikson. The GET returns an XML representation of the researcher, which populates the Groovy node researcher.
The XML representation of individual REST resources are self-contained entities. Always request the complete XML representation before editing any portion of the XML. If you do not use the complete XML when you update the resource, you may inadvertently change data.
The following example shows the XML returned for the Sue Erikson researcher:
Updating the telephone number requires the following steps:
Changing the telephone value in the XML.
Using a PUT call to update the researcher resource.
The new telephone number for Sue Erikson can be set with the phone value within the Groovy researcher node:
The PUT command updates the researcher resource at the specified URI using the complete XML representation, including the new phone number. A successful PUT returns the new XML in the returnNode. An unsuccessful PUT returns the http response code and error message in XML in the returnNode.
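The phone-number change can be sketched as follows. The researcher XML is inlined with placeholder numbers; the attached script GETs it from the individual researcher URI and PUTs the complete, modified representation back.

```groovy
def researcherXml = '''
<res:researcher xmlns:res="http://genologics.com/ri/researcher">
  <first-name>Sue</first-name>
  <last-name>Erikson</last-name>
  <phone>555-0100</phone>
</res:researcher>'''

def researcher = new XmlParser(false, false).parseText(researcherXml)

// Set the new phone number on the phone element (placeholder value)
researcher.phone[0].setValue('555-0199')
// A PUT of the complete researcher XML back to its URI completes the update.
```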
For a successful update, the resulting XML can also be reviewed in a web browser via the URI:
http://yourIPaddress/api/v2/researchers/103
In the LIMS, the updated user list should show the new phone number.
UpdateContactInfo.groovy:
Projects contain a collection of samples submitted to the lab for a specific goal or purpose. Often, a script needs information recorded at the project level to do its task. In this simple example, an HTTP GET against a project is shown to obtain information on the project in XML.
Before you follow the example, make sure you have the following items:
A project exists with name "HTTP Get Project Name with GLS Utils".
The LIMS ID of the project above, referred to as <project limsid>.
A compatible version of API (v2 r21 or later).
The easiest way to find a project in the system is with its LIMS ID.
If the project was created in the script (with an HTTP POST) then the LIMS ID is returned as part of the 201 response in the XML.
If the LIMS ID is not available, but other information uniquely identifies it, you can use the project (list) resource to GET the projects and select the right LIMS ID from the collection.
Working with list resources generally requires the same script logic, so if you need the list of projects to find a specific project, review the example that demonstrates listing and finding resources for labs. The same logic applies to projects.
The first step is to determine the URI of the project:
Next, use the project LIMS ID to perform an HTTP GET on the resource, and store the response XML in the variable named projectNode:
The projectNode variable can now be used to access XML elements and/or attributes.
To obtain the project's name, ask the projectNode for the text representation of the name element:
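The extraction can be sketched as follows, with the response XML inlined (projectNode would normally hold the parsed HTTP GET response):

```groovy
def projectXml = '''
<prj:project xmlns:prj="http://genologics.com/ri/project" limsid="EXA1">
  <name>HTTP Get Project Name with GLS Utils</name>
</prj:project>'''

// Non-namespace-aware slurping keeps the prj: prefix in element names
def projectNode = new XmlSlurper(false, false).parseText(projectXml)
def projectName = projectNode.name.text()
```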
GetProjectName.groovy:
When importing sample data into Clarity LIMS using a spreadsheet, you can specify the reagent labels to be applied during the import process. To do this, you must include the reagent label names in the spreadsheet, in a column named Sample/Reagent Label.
Before you follow the example, make sure that you have the following items:
Reagent types that are configured in Clarity LIMS and are named index 1 through index 6.
Reagents of type index 1 through index 6 that have been added to Clarity LIMS.
A compatible version of API (v2 r14 to v2 r24).
The following example spreadsheet would import six samples into the system. These samples are Sample-1 through Sample-6 with reagent labels Index 1 through Index 6:
Although not mandatory, it is recommended that you name reagent labels after reagent types using the Index special type. This allows you to relate the reagent label back to its sequence.
If you examine the REST API representation of the samples imported, you are able to verify the following:
The sample representation shows no indication that reagent labels were applied.
The sample artifact (the analyte artifact linked from the sample representation) will indicate the label applied via the <reagent-label> element.
The following example shows how an imported sample artifact (Sample-1), with reagent label name applied (Index 1), appears when verified via the REST API:
Imagine that each month the new external accounts with which your facility works are contacted with a Welcome package. In this scenario, it would be helpful to obtain a list of accounts that have been modified in the past month.
NOTE: In Clarity LIMS v2.1 and later, the term Labs was replaced with Accounts. However, the API resource is still called labs.
Before you follow the example, make sure you have the following items:
Several accounts exist in the system.
At least one of the accounts was modified after a specific date.
A compatible version of API (v2 r21 or later).
In LIMS v6.2 and later, in the Configuration > User Management page, the Accounts view lists the account resources available.
To obtain a list of all accounts modified after a specific date, you can use a GET request on the accounts list resource and include the ?last-modified filter.
To specify the last month, a Calendar object is instantiated. This Calendar object is initially set to the date and time of the call, rolled back one month, and then passed as a query parameter to the GET call.
The first GET call returns a list of the first 500 labs that meet the date modified criterion specified. The script iterates through each lab element to look at individual lab details. For each lab, a second GET method populates a lab resource XML node with address information.
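The date handling can be sketched as follows. The timestamp format here is an assumption; check the API documentation for the exact format the last-modified filter expects.

```groovy
// Roll a Calendar back one month from the current date and time
def cal = Calendar.instance
cal.add(Calendar.MONTH, -1)

// Format as an ISO 8601 timestamp (assumed format) and build the filtered URI
def lastModified = cal.time.format("yyyy-MM-dd'T'HH:mm:ssXXX")
def labsListUri = "http://yourIPaddress/api/v2/labs?last-modified=${URLEncoder.encode(lastModified, 'UTF-8')}"
```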
The REST list resources are paged. Only the first 500 items are returned when you query for a list of items (eg, http://youripaddress/api/v2/artifacts).
If you cannot filter the list, it is likely that you must iterate through the pages of a list resource to find the items that you are looking for. The URI for the next page of resources is always the last element on the page of a list resource.
In the following example, the XML returned lists three out of the four labs, excluding one due to the date filter:
One of the labs has 'WA' recorded as the state, adding a second printed line to the output:
GetLab.groovy:
Demultiplexing is the last step in an indexed sequencing workflow. While the specifics depend on the sequencing instrument and analysis software used, taking pooled samples through sequencing and analysis produces result files/metrics per lane/identifier tag.
These results will likely be in the form of multiple files that you can import back into Clarity LIMS. To do this, you need to set up a configured process that generates process outputs that apply to inputs per reagent label, usually in the form of ResultFile artifacts.
Before you follow the example, make sure you have the following items:
Configured reagent types named Index 1 through Index 6 in Clarity LIMS.
Reagents of type Index 1 through Index 6 in Clarity LIMS.
A compatible version of API (v2 r14 to v2 r24).
Configure a process that generates ResultFile with process outputs that apply to inputs per reagent label. It is recommended to name your outputs in a way that clearly identifies the samples to which they correspond (eg, Results for {SubmittedSampleName}-{AppliedReagentLabels}).
Running the demultiplexing process on a labeled pooled input produces a process run in the Operations Interface, similar to the one illustrated below.
Note the following:
There were three reagent labels in the input analyte (sample) artifact. As a result, three outputs were generated (the process was configured to produce one output result file per label per input).
The names of the outputs of the demultiplexing process expose the original sample name and label.
The Operations Interface shows details of the genealogy from the downstream result file all the way back to the original sample.
While reagent labels are not explicitly exposed in the Clarity LIMS client user interface, genealogy views in the Operations Interface are aware of reagent labels and will show the true sample inheritance. As noted above, you can use the {AppliedReagentLabels} output naming variable to show the reagent labels applied to each artifact in the user interface.
The key difference is that when executing a demultiplexing process through the REST API, outputs per reagent label are automatically generated from the inputs provided. You do not need to explicitly specify them.
For example, when running the demultiplexing process configured against a single (pooled) sample, you could post a process execution representation like this:
The input-output-map only refers to inputs, not outputs, because the demultiplexing process is configured to exclusively produce outputs per reagent label.
If your process produces other outputs, such as shared or per-input outputs, you must explicitly specify input-output-maps for them.
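The POSTed representation has roughly the following shape. This is a sketch using the processexecution (prx) namespace; element names, required fields (eg, technician), and URIs should be checked against the process execution schema for your API version.

```xml
<prx:process xmlns:prx="http://genologics.com/ri/processexecution">
  <type>Demultiplexing</type>
  <technician uri="http://yourIPaddress/api/v2/researchers/103"/>
  <!-- Only the pooled input is listed; per-reagent-label outputs are generated automatically -->
  <input-output-map>
    <input uri="http://yourIPaddress/api/v2/artifacts/TST110A291AP45"/>
  </input-output-map>
</prx:process>
```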
Irrespective of whether you use the user interface or the REST API to run the demultiplexing process, the REST API representation for the process looks something like this:
For each input with reagent labels, one output was created per reagent label.
In the example, the process ran on one pooled input, and produced three outputs (the pooled input included three reagent labels). The following example shows one of the demultiplexed result file outputs:
The output contains only one reagent label, and relates only to the sample that was tagged with the same reagent label. Compare this to the case of a pooled artifact, which has several labels and relates to several samples. This level of traceability (from a demultiplexed output back to its specific original sample) is only possible because the artifacts were labeled before they were pooled.
The artifact name generated by the demultiplexing process output naming pattern is "Results for SAM-3 - Index 3". You can use the {SubmittedSampleName} naming variable to show true ancestors, and {AppliedReagentLabels} to show any reagent labels applied to an output.
UDFs / custom fields must be configured in Clarity LIMS before they can be set or updated using the API. You can find a list of the fields defined for projects in your system by querying the UDF configuration resource and looking for those with an attach-to-name of 'project'.
Executing a demultiplexing process by issuing a process POST via the REST API is similar to typical process execution.
Sample/Name | Container/Type | Container/Name | Sample/Well Location | Sample/Reagent Label
Sample-1 | 96 well plate | labeled-samples | A:1 | Index 1
Sample-2 | 96 well plate | labeled-samples | A:2 | Index 2
Sample-3 | 96 well plate | labeled-samples | A:3 | Index 3
Sample-4 | 96 well plate | labeled-samples | A:4 | Index 4
Sample-5 | 96 well plate | labeled-samples | A:5 | Index 5
Sample-6 | 96 well plate | labeled-samples | A:6 | Index 6
Reagent labels are artifact resource elements and can be applied using a PUT. To apply a reagent label to an artifact using REST, the following steps are required:
GET the artifact representation.
Insert a reagent-label element with the intended label name.
PUT the modified artifact representation back.
You can apply the reagent label to the original analyte (sample) artifact or to a downstream sample or result file.
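The GET, modify, PUT sequence above can be sketched offline. The following is an illustrative Python sketch (the attached Cookbook scripts are Groovy), using a simplified, hypothetical artifact representation; in a real script the XML would come from a GET on the artifact URI and be PUT back afterwards:

```python
import xml.etree.ElementTree as ET

ART_NS = "http://genologics.com/ri/artifact"
ET.register_namespace("art", ART_NS)

# Simplified artifact representation, as returned by a GET (hypothetical).
artifact_xml = """<art:artifact xmlns:art="http://genologics.com/ri/artifact"
    uri="http://your-server-ip/api/v2/artifacts/SAM1A1">
  <name>Sample-1</name>
</art:artifact>"""

def add_reagent_label(xml_text, label_name):
    """Insert a reagent-label element with the intended label name."""
    root = ET.fromstring(xml_text)
    ET.SubElement(root, "reagent-label", {"name": label_name})
    return ET.tostring(root, encoding="unicode")

# The returned XML is what you would PUT back to the artifact URI.
updated = add_reagent_label(artifact_xml, "Index 1")
print(updated)
```

The only change between the representation you GET and the one you PUT back is the added reagent-label element; everything else is preserved.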
Before you follow the example, make sure that you have the following items:
Reagent types that are configured in Clarity LIMS and are named index 1 through index 6.
Reagents of type index 1 through index 6 that have been added to Clarity LIMS.
A compatible version of API (v2 r14 to v2 r24).
In this example, you can adjust the following code:
By inserting the reagent-label element, you end up with the following code.
Although it is not mandatory, it is recommended that you name reagent labels after reagent types using the Index special type. This allows you to relate the reagent label back to its sequence.
In the BaseSpace Clarity LIMS web interface, in the Custom Fields configuration screen, administrators can add user-defined information by adding custom fields (global fields or master step fields). At this time, user-defined types (UDTs) are only supported in the API.
Use these custom fields to configure storage locations for data that annotates project, submitted sample, step, derived sample, measurement, and file information recorded in a workflow.
All XML element attributes and values are text. Before using the value in a script, you may want to convert to a strongly-typed variable such as a number or date type.
For details on the formats used in XML, see Working with User-Defined Fields (UDF) and Types (UDT).
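For example, a numeric UDF value and an ISO 8601 date both arrive as plain strings. A short Python sketch (the field values shown are illustrative):

```python
from datetime import date

# UDF values read from XML attributes or elements are plain strings.
conc_text = "27.5"          # e.g. a numeric Concentration field
date_text = "2014-03-21"    # e.g. a date-type field (ISO 8601)

# Convert to strongly-typed values before calculating or comparing.
concentration = float(conc_text)
run_date = date.fromisoformat(date_text)
```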
When updating multiple process outputs or containers, you can increase the script execution speed by using batch operations.
When working with multiplexing, you can do the following:
The powerful batch resources included in the Clarity LIMS Rapid Scripting API significantly increase the speed of script execution by allowing batch operations on samples and containers. These resources are useful when working with multiple samples and containers in high throughput labs.
The following simple example uses batch resources to move samples from one workflow queue into another queue.
It is useful to review the Work with Batch Resources section first.
Use a batch retrieve request to find all the artifacts in an artifact group, and then use a batch update request to move those artifacts into another artifact group.
The following steps are required:
Find all the artifacts that are in a particular artifact group.
Use the artifacts.batch.retrieve (list) resource to retrieve the details for all the artifacts.
Use the artifacts.batch.update (list) resource to update the artifacts and move them into a different artifact group, posting them back as a batch.
NOTE: The only HTTP method for batch resources is POST.
Before you follow the steps, make sure that you have the following items:
Clarity LIMS contains a collection of samples (artifacts) residing in the same workflow queue (artifact group).
A second queue exists into which you can move the collection of samples.
A compatible version of API (v2 r21 and later).
In the REST API, artifacts are grouped with the artifact group resource. In Clarity LIMS, an artifact group is displayed as a workflow. Workflows are configured as queues, allowing lab scientists to locate samples to work with on the bench quickly.
To find the samples (artifacts) in a workflow queue (artifact group), use the following request, editing the server details and artifact group name to match those in your system:
This request returns a list of URI links for all artifacts in the artifact group specified. In our example, the my_queue queue contains three artifacts:
To retrieve the detailed XML for all of the artifacts, use a <links> tag to post the set of URI links to the server using a batch retrieve request:
This returns the detailed XML for each of the artifacts in the batch:
The XML returned includes the artifact group name and URI:
<artifact-group name="my_queue" uri="http://your-server-ip/api/v2/artifactgroups/1"/>
To move the artifacts into another queue, simply update the artifact-group name and URI values:
Finally, post the XML back to the server using a batch update request:
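The batch retrieve payload and the artifact-group update can be sketched offline in Python (the attached scripts are Groovy, and the URIs and queue names here are illustrative):

```python
import xml.etree.ElementTree as ET

RI_NS = "http://genologics.com/ri"
ET.register_namespace("ri", RI_NS)

# URIs as returned by the artifact group query (illustrative).
artifact_uris = [
    "http://your-server-ip/api/v2/artifacts/A1",
    "http://your-server-ip/api/v2/artifacts/A2",
]

# <links> payload to POST to the artifacts batch retrieve resource.
links = ET.Element(f"{{{RI_NS}}}links")
for uri in artifact_uris:
    ET.SubElement(links, "link", {"uri": uri, "rel": "artifacts"})
payload = ET.tostring(links, encoding="unicode")

# On each retrieved artifact, point the artifact-group at the new queue
# before posting the whole batch back with a batch update request.
artifact = ET.fromstring(
    '<artifact><artifact-group name="my_queue" '
    'uri="http://your-server-ip/api/v2/artifactgroups/1"/></artifact>'
)
group = artifact.find("artifact-group")
group.set("name", "next_queue")
group.set("uri", "http://your-server-ip/api/v2/artifactgroups/2")
```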
For a general overview of batch resources, refer to Introduction to Batch Resources.
When working with batch resources, you can do the following:
Compatibility: API version 2 revision 21 and later
Important measurements and values are often calculated from other values. Instead of performing these calculations by hand, and then manually entering them into the LIMS (thereby increasing the probability of error), you can develop scripts to perform these calculations and update the data accordingly.
This example demonstrates the use of scripts and user-defined fields (UDFs) / custom fields for information retrieval and recording of calculation results in the LIMS.
NOTE:
Information about a step is stored in the process resource in the API.
Information about a derived sample is stored in the analyte resource in the API. This resource is used as the input and output of a step, and also used to record specific details from lab processing.
As of BaseSpace Clarity LIMS v5.0, the term user-defined field (UDF) has been replaced with custom field in the user interface. However, the API resource is still called udf.
Clarity LIMS v5 and later:
You have defined the following custom global fields on the Derived Sample object:
Concentration
Size (bp)
Conc. nM
You have set the three fields configured in step 1 to display in the Sample table of the Record Details screen.
You have configured a Calc. Prep step to apply the Concentration and Size (bp) fields to generated derived samples.
You have run the Calc. Prep step and it has generated derived samples.
You have input values for the Concentration and Size (bp) fields.
You have configured a Calculation step to apply Conc. nM to generated derived samples.
You have run the Calculation step - with the derived samples generated by the Calc. Prep step as inputs, and it has generated derived samples.
First, the Concentration and Size (bp) UDFs/custom fields (the values to be used in the calculation) are applied to the samples by running the Calc. Prep preparation step. You can then enter the values for these fields into the LIMS as follows:
Clarity LIMS v5 and later:
In the Record Details screen, in the Sample table.
After the script has completed successfully, the Conc. nM results display:
(LIMS v5 & later) In the Record Details screen, in the Sample table.
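As a sketch of the arithmetic the attached Groovy script performs, assuming Concentration is recorded in ng/µL and using the conventional average mass of 660 g/mol per base pair of double-stranded DNA:

```python
# Illustrative Python sketch of the Conc. nM calculation.
# Assumption: Concentration is in ng/uL; 660 g/mol per bp is the
# conventional average mass for double-stranded DNA.
def conc_nm(concentration_ng_ul, size_bp):
    return concentration_ng_ul / (660.0 * size_bp) * 1e6

result = round(conc_nm(25.0, 200), 2)
print(result)  # 189.39
```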
UsingAnalyteUDFForCalculations.groovy:
Steps can have user-defined fields (UDFs)/custom fields that can be used to describe properties of the steps.
For example, while a sample UDF/custom field might describe the gender or species of the sample, a process UDF/custom field might describe the room temperature recorded during the step or the reagent lot identifier. Sometimes, information about a step is not known until the work has completed on the instrument, after the step has already been run in Clarity LIMS.
In this example, we will record the Actual Equipment Start Time as a process UDF/custom field after the step has been run in Clarity LIMS. The ISO 8601 convention is used for recording the date and time. For more information, see Filter Processes by Date and Type.
NOTE: In the API, information about a step is stored in the process resource.
As of Clarity LIMS v5, the term user-defined field (UDF) has been replaced with custom field in the user interface. However, the API resource is still called UDF.
Master step fields—Configured on master steps. Master step fields only apply to the following:
The master step on which the fields are configured.
The steps derived from those master steps.
Global fields—Configured on entities (eg, submitted sample, derived sample, measurement, etc.). Global fields apply to the entire Clarity LIMS system.
Before you follow the example, make sure you have the following items:
Samples added to the system.
A custom field named Actual Equipment Start Time that has been configured on a master step (master step field).
On the master step, you have configured the field to display on the Record Details milestone, in the Master Step Fields section.
You have run samples through a step based on the master step on which the Actual Equipment Start Time field is configured.
Detailed information for each step run in Clarity LIMS, including its name, LIMS ID, and custom fields, can be viewed on the Record Details screen.
In the image below, an Actual Equipment Start Time master step field has been configured to display in the Step Details section of the Record Details screen. However, a value for this field has not yet been specified.
Before you can change the value of a process UDF/custom field, you must first request the individual process resource via a GET HTTP call. The XML returned from a GET on the individual process resource contains the information about that process. The following GET method provides the complete XML structure for the process:
The variable processNode now holds the complete XML structure retrieved from the resource at processURI.
The variable startTimeUDF references the XML element node contained in the structure that relates to the Actual Equipment Start Time UDF/custom field (if one exists).
The variable newStartTime is a string initialized with a value from the method parseInstrumentStartTimes. The details of this method are omitted from this example, but its function is to parse the date and time the instrument started from a log file.
The XML representations of individual REST resources are self-contained entities. Always request the complete XML representation before editing any portion of the XML. If you do not use the complete XML when you update the resource, you can inadvertently change data.
The following code shows the XML structure for the process, as stored in the variable processNode. There are no child UDF/custom field nodes.
After modifying the process stored in the variable processNode, you can use a PUT method to update the process resource.
You can check if the UDF/custom field exists by verifying the value of the startTimeUDF variable. If the value is not null, the field is defined and you can set a new value in the XML. If the field does not exist, you must append a new node to the process XML resource using the UDF/custom field name and new value.
Before you can append a node to the XML, you must first specify the namespace for the new node. You can use the Groovy built-in QName class to do this. A QName object defines the qualified name of an XML element and specifies its namespace. The node you are specifying is a UDF element, so the namespace is http://genologics.com/ri/userdefined. The local part is field and the prefix is udf for the QName, which specifies the element as a UDF/custom field.
To append a new node to the process, use the appendNode method of the variable processNode, which appends a node with the specified QName, attributes, and value. Specify the following attributes for the UDF/custom field element:
the type
the name
Both of these elements must match a UDF/custom field that has been specified in the Configuration window for the process type.
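In Python's ElementTree the same idea looks like the following sketch; the {namespace}local syntax plays the role of Groovy's QName, and the field type and value shown are assumptions for illustration:

```python
import xml.etree.ElementTree as ET

# The udf prefix maps to the userdefined namespace, marking the
# element as a UDF/custom field.
UDF_NS = "http://genologics.com/ri/userdefined"
ET.register_namespace("udf", UDF_NS)

# Minimal stand-in for the process XML (a real one has more children).
process_node = ET.fromstring("<process/>")

# Append a udf:field node with the required type and name attributes.
field = ET.SubElement(
    process_node,
    f"{{{UDF_NS}}}field",
    {"type": "String", "name": "Actual Equipment Start Time"},
)
field.text = "2012-05-18T12:42:15-07:00"  # illustrative ISO 8601 value

xml_out = ET.tostring(process_node, encoding="unicode")
print(xml_out)
```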
The variable processNode now holds the complete XML structure for the process with the updated, or added, UDF named Actual Equipment Start Time.
You can save the changes you have made to the process using a PUT method on the process resource:
The PUT updates the process resource at the specified URI using the complete XML representation, including the new value for Actual Equipment Start Time.
If the PUT was successful, it returns the XML resource, as shown below. The updated information is also available at the http://yourIPaddress/api/v2/processes/A22-BMJ-100927-24-2188 URI.
If the PUT was unsuccessful, an XML resource is returned with contents that detail why the call was unsuccessful. In the following example error, an incorrect UDF/custom field name was specified. A UDF/custom field named Equipment Start Time was created in the process resource, but no UDF/custom field with that name was configured for the process type/master step.
The Step Details section of the updated Record Details screen now shows the Actual Equipment Start Time value.
The ability to modify process properties allows you to automatically update and store lab activity information as it becomes available. Information from equipment log files or other data sources can be collected in this way.
Updating per-run or per-process/step information is powerful because the information can be used to optimize lab work (eg, by tracking trends over time). The data can be compared by instrument, length of run, lab conditions, and even against quality of molecular results.
UpdateProcessUDFInfo.groovy:
Pooling steps require that each input analyte artifact (derived sample) in the step be inserted into a pool. You can automate this task by using the API steps pooling endpoint. Automation of pooling allows you to reduce error and user interaction with Clarity LIMS.
In this example, a script pools samples based on the value of the pool id user-defined field (UDF)/custom field of the artifact.
As of Clarity LIMS v5, the term user-defined field (UDF) has been replaced with custom field in the user interface. However, the API resource is still called UDF.
Master step fields—Configured on master steps. Master step fields only apply to the following:
The master step on which the fields are configured.
The steps derived from those master steps.
Global fields—Configured on entities (eg, submitted sample, derived sample, measurement, etc.). Global fields apply to the entire Clarity LIMS system.
To keep this example simple, the script does not handle samples with reagent labels.
In the API, an artifact is an item generated by an earlier step. There are two types of artifacts: analyte (derived sample) and resultfile (measurement). In the Clarity LIMS web interface, the terms artifact, analyte, and resultfile have been replaced with derived sample or measurement.
Before you follow the example, make sure that you have the following items:
A configured analyte UDF/derived-sample custom field named pool id in Clarity LIMS.
Groovy installed on the server and accessible at /opt/groovy/bin/groovy.
The GLSRestApiUtils.groovy file stored in /opt/groovy/lib/.
The WorkingWithStepsPoolingEndpoint.groovy script stored in /opt/gls/clarity/customextensions/.
A compatible version of API (v2 r21 or later).
In Clarity LIMS, under Configuration, select the Lab Work tab.
Select an existing Pooling master step or add a new one.
On the master step configuration form, select the Pooling milestone.
On the Pooling Settings form, set the Label Uniqueness toggle switch to Off.
Select Save.
Add a new protocol.
With the protocol selected, add a new Library Pooling step based on the master step you configured.
In Clarity LIMS, under Configuration, select the Automation tab.
Add a new step automation. Associate the automation with the WorkingWithStepsPoolingEndpoint.groovy script. The command line used in this example is as follows.
bash -c "/opt/groovy/bin/groovy -cp /opt/groovy/lib /opt/gls/clarity/customextensions/WorkingWithStepsPoolingEndpoint.groovy -u {username} -p {password} -s {stepURI:v2:http}"
Enable the automation on the configured pooling master step. Select Save.
You can now configure the automation trigger on the step or the master step. If you configure the trigger on the master step, the settings will be locked on all steps derived from the master step.
On the Lab Work tab, select the library pooling step or master step.
On the Step Settings or Master Step Settings form, in the Automation section, configure the automation trigger so that the script is automatically initiated at the beginning of the step:
Trigger Location—Step
Trigger Style—Automatic upon entry
In Clarity LIMS, under Configuration, select the Lab Work tab.
Select the pooling protocol containing the Library Pooling step.
Add the Add Pool ID step that sets the pool id custom field of the samples. Move this step to the top of the Steps list.
Select the Add Pool ID step.
On the Record Details milestone, add the pool id custom field to the Sample Details table.
In Clarity LIMS, under Configuration, select the Lab Work tab.
Create a workflow containing the configured pooling protocol. Activate the workflow.
On the Projects and Samples screen, create a project and add samples to it. Assign the samples to your pooling workflow.
Begin working on the samples. In the first step, enter values into the pool id custom field.
Continue to the Library Pooling step and add samples to the Ice Bucket. Select Begin Work to execute the script.
The script is passed the URI of the pooling step. Then, using the URI, the pool node of the step is retrieved. This node contains an available-inputs node that lists the URIs of the available input artifacts.
The script retrieves all available input artifacts, and then iterates through the list of retrieved artifacts. For each artifact, the script looks for the pool id custom field. If the field is not found, the script moves on to the next artifact. If the field is found, its value is stored in the poolID variable.
When the script encounters a new pool ID, it creates a new pool with a name equal to that ID. Input artifacts are sorted into pools based on the values of their pool id fields, and as they are inserted into pools they are removed from the list of available inputs.
After all of the available inputs are iterated through, the updated pool node is sent back to Clarity LIMS:
Artifacts with the same Pool ID UDF / custom field will be automatically added to the same pool.
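The grouping logic can be sketched in Python, with plain dicts standing in for the retrieved artifacts (the attached script is Groovy, and the URIs and field values here are illustrative):

```python
# Sketch of the pooling logic: group available inputs by the value of
# their "pool id" field. Artifacts without the field are skipped.
def build_pools(available_inputs):
    pools = {}  # pool name -> list of artifact URIs
    for artifact in available_inputs:
        pool_id = artifact.get("pool id")
        if pool_id is None:
            continue  # no pool id: left among the available inputs
        pools.setdefault(pool_id, []).append(artifact["uri"])
    return pools

inputs = [
    {"uri": "art/1", "pool id": "Pool A"},
    {"uri": "art/2", "pool id": "Pool B"},
    {"uri": "art/3", "pool id": "Pool A"},
    {"uri": "art/4"},  # no pool id field set
]
print(build_pools(inputs))  # {'Pool A': ['art/1', 'art/3'], 'Pool B': ['art/2']}
```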
WorkingWithStepsPoolingEndpoint.groovy:
GLSRestApiUtils.groovy:
Before pooling samples in a multiplexed workflow, apply reagent labels using one of the methods described in Work with Multiplexing. After the analyte (derived sample) artifacts are labeled, they can be pooled together without loss of traceability.
Pooling samples is accomplished either by running a pooling step in the user interface, or by using the process resource in the REST API.
For an overview of how REST resources are structured, and to learn how the process resource is used to track workflow in Clarity LIMS, see REST General Concepts and Structure of REST Resources.
Before you follow the example, make sure that you have the following items:
Reagent types that are configured in Clarity LIMS and are named index 1 through index 6.
Reagents of type index 1 through index 6 that have been added to Clarity LIMS.
A compatible version of API (v2 r21 or later).
The following screenshot shows a pooling step run from Clarity LIMS.
Pooling samples in the API is accomplished with a process resource. Information about a step is also stored in the process resource. Such a process has many input samples that map to a shared output sample, such that the shared output is a pool of those inputs. This is achieved with Run a Process/Step, where a single input-output-map element in the XML defines the shared output and all its related inputs.
In general, automation scripts access information about a step using the processURI, which links to the individual process resource. The input-output-map in the XML returned by the individual process resource gives the script access to the artifacts that were inputs and outputs to the process.
Information about a derived sample is stored in the analyte resource. This is used as the input and output of a step, and also used to record specific details from lab processing. The XML representation for an individual analyte contains a link to the URI of its submitted sample, and to the URI of the process that generated it (parent process).
The following example pools all samples found in a given container into a tube it creates.
NOTE: No special code is required to handle reagent labels. As processes execute, reagent labels automatically flow from inputs to outputs.
Irrespective of whether you use the user interface or the REST API to pool samples, the pooled sample is available via process GET requests.
The following example shows one pooled output (LIMS ID 2-424) created from three inputs - LIMS IDs RCY1A103PA1, RCY1A104PA1, and RCY1A105PA1:
Besides deriving from the ancestral sample artifacts, the resulting pooled sample artifact inherits the reagent labels from all inputs. The pooled output produced by the pooling step appears as follows. The pooled artifact shows multiple reagent labels, and multiple ancestor samples.
As processes are executed, reagent labels flow from inputs to outputs.
PoolingSamplesWithReagents.groovy:
Workflows, chemistry, hardware, and software are continually changing in the lab. As a result, you may need to determine which samples were processed after a specific change was made.
Using the processes (list) resource you can construct a query that filters the list using both process type and date modified.
Before you follow the example, make sure you have the following items:
Samples that have been added to the system.
Multiple processes of the Cookbook Example type that have been run on different dates.
A compatible version of API (v2 r21 or later).
In Clarity LIMS, when you search for a specific step type, the search results list shows all steps of that type that have been run, along with detailed information about each one. This information includes the protocol that includes the step, the number of samples in the step, the step LIMS ID, and the date the step was run.
The following screenshot shows the search results for the step type Denature and Anneal RNA (TruSight Tumor 170 v1.0).
The list shows the date run for each step, but not the last modified date. This is because a step can be modified after it was run, without changing the date on which it was run.
To find the steps that meet the two criteria (step type and date modified), do the following:
Request a list of all steps (processes), filtered on process type and date modified.
Once you have the list of processes, you can use a script to print the LIMS ID for each process.
To request a list of all processes of a specific type that were modified after a specified date, use a GET method that uses both the ?type and ?last-modified filter on the processes resource:
The GET call returns a list of the first 500 processes that match the filter specified. If more than 500 processes match the filter, only the first 500 are available from the first page.
In the XML returned, each process is an element in the list. Each element contains the URI for the individual process resource, which includes the LIMS ID for the process.
The URI for the list of all processes is http://yourIPaddress/api/v2/processes. In the example code, the list was filtered by appending the following:
This filters the list to show only processes that are of the Cookbook Example type and were modified after the specified date.
The date must be specified in ISO 8601, including the time. In the example, this is accomplished using an instance of a Calendar object and a SimpleDateFormat object, and encoding the date using UTF-8. The date specified is one week prior to the time the code is executed.
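The query construction can be sketched in Python (a fixed reference date is used here so the output is deterministic; a real script would start from the current time, as the attached Groovy code does):

```python
# Sketch: build the filtered processes URI with an ISO 8601 timestamp
# one week before a reference time. Server address and process type
# name are illustrative.
from datetime import datetime, timedelta
from urllib.parse import urlencode

base = "http://yourIPaddress/api/v2/processes"
reference = datetime(2014, 3, 21, 9, 30)  # stand-in for datetime.now()
one_week_ago = reference - timedelta(weeks=1)

query = urlencode({
    "type": "Cookbook Example",
    "last-modified": one_week_ago.strftime("%Y-%m-%dT%H:%M:%S"),
})
uri = f"{base}?{query}"
print(uri)
```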
All of the REST list resources are paged. Only the first 500 items are returned when you query for a list of items, such as http://youripaddress/api/v2/artifacts.
If you cannot filter the list, you must iterate through the pages of a list resource to find the items that you are looking for. The URI for the next page of resources is always the last element on the page of a list resource.
After requesting an individual process XML resource, you have access to a large collection of data that lets you modify or view each process. Within the process XML, you can also access the artifacts that were inputs or outputs of the process.
After running the script on the command line, output is generated showing the LIMS ID for each process in the list.
Information about a step is stored in the process resource. In general, automation scripts access information about a step using the processURI, which links to the individual process resource. The input-output-map in the XML returned by the individual process resource gives the script access to the artifacts that were inputs and outputs to the process.
Processing a sample in the lab can be complex and is not always linear. This may be because more than one step (referred to as process in the API and in the Operations Interface in Clarity LIMS v4.x and earlier) is run on the same sample, or because a sample has to be modified or restarted because of quality problems.
The following illustration provides a conceptual representation of a Clarity LIMS workflow and its sample/process hierarchy. In this illustration, the terminal processes are circled.
This example finds all terminal artifact (sample)-process pairs. The main steps are as follows:
All the processes run on a sample are listed with a process (list) GET method using the ?inputartifactlimsid filter.
All the process outputs for an input sample are found with a process (single) GET.
Iteration through the input-output maps finds all outputs for the input of interest.
Before you follow the example, make sure you have the following items:
A sample added to the system.
Several steps that have been run, with several steps run on a single output at least once.
A compatible version of API (v2 r21 or later).
To walk down the hierarchy from a particular sample, do the following:
List all the processes that used the sample as an input.
For each process on that list, find all the output artifacts that used that particular input. These output artifacts represent the next level down the hierarchy.
To find the artifacts for the next level down, repeat steps 1 and 2, starting with each output artifact from the previous round.
To find all artifacts in the hierarchy, repeat this process until there are no more output artifacts. The last processes found are the terminal processes.
This example starts from the original submitted sample.
The first step is to retrieve the sample resource via a GET call and find its analyte artifact (derived sample) URI. The analyte artifact of the sample is the input to the first process in the sample hierarchy.
The following GET method provides the full XML structure for the sample including the analyte artifact URI:
The sample.artifact.@limsid contains the original analyte LIMS ID of the sample. For each level of the hierarchy, the artifacts are stored in a Groovy Map called artifactMap. The artifactMap uses the process that generated the artifact as the value, and the artifact LIMS ID as the key. At the top sample level, the list is only comprised of the analyte of the original sample. In the map, the process is set to null for this sample analyte.
To find all the processes run on the artifacts, use a GET method on the process (list) resource with the ?inputartifactlimsid filter.
In the last line of the example code, the processURI string sets up the first part of the URI. The artifact LIMSID is added (concatenated) for each GET call in the following while loop:
The while loop evaluates one level of the hierarchy for every iteration. Each artifact at that level is evaluated. If that artifact was not used as an input to a process, an artifact/process key value pair is stored in the lastProcMap. All the Groovy maps in the previous code use this artifact/process pair structure.
The loop continues until there are no artifacts that had outputs generated. For each artifact evaluated, the processes that used the artifact as an input are found and collected in the processes variable. Because a process can be run without producing outputs, a GET call is done for each of the processes to determine if the artifact generated any outputs.
Any outputs found form the next level of the hierarchy. The outputs are temporarily collected in the outputArtifactMap. If no processes were found for that artifact, it is an end leaf node of a hierarchy branch. Those artifact/process pairs are collected in the lastProcMap.
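The traversal described above can be condensed into a Python sketch, with a plain dict standing in for the REST lookups (the artifact and process LIMS IDs are illustrative; the attached script is Groovy):

```python
# Walk down the hierarchy level by level and collect terminal
# artifact/process pairs, mirroring the artifactMap/lastProcMap idea.
def find_terminal_pairs(root_artifact, processes_by_input):
    """Return {artifact LIMS ID: generating process} for end leaf nodes."""
    current = {root_artifact: None}  # artifact -> process that generated it
    last_proc_map = {}
    while current:
        next_level = {}
        for artifact, proc in current.items():
            outputs = processes_by_input.get(artifact, {})
            if not outputs:
                last_proc_map[artifact] = proc  # end leaf of this branch
            next_level.update(outputs)          # next level of hierarchy
        current = next_level
    return last_proc_map

# input artifact -> {output artifact: process that produced it}
processes_by_input = {
    "SAM1A1": {"ART2": "P-1", "ART3": "P-1"},
    "ART2": {"ART4": "P-2"},
}
print(find_terminal_pairs("SAM1A1", processes_by_input))
# {'ART3': 'P-1', 'ART4': 'P-2'}
```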
You can iterate through each pair of artifact and process LIMS IDs in outputArtifactMap and print the results to standard output.
Running the script in a console produces the following output:
If your samples are already in Clarity LIMS, you can assign reagent labels by running the Add Multiple Reagents process/protocol step from the Clarity LIMS user interface. Adding a reagent implicitly assigns a reagent label to every sample artifact. The reagent label applied is derived from the reagent type used.
Before you follow the example, make sure that you have the following items:
Reagent types that are configured in Clarity LIMS and are named index 1 through index 6.
Reagents of type index 1 through index 6 that have been added to Clarity LIMS.
A compatible version of API (v2 r14 to v2 r24).
For more information on indexes with reagent labels, see Find the Index Sequence for a Reagent Label.
The following illustrations show the Add Multiple Reagents process, as run from the Operations Interface.
In the Add Multiple Reagents wizard panel, reagents (Indexes 1 to 3) are selected and then assigned to the samples (SAM-1 to 3) in the Sample Workspace, using a click and drag process.
The cells of the Sample Workspace represent the wells of the container used for this process.
When the wizard completes, the Add Multiple Reagents process replaces the input sample artifacts with output analyte artifacts.
In the following illustration, the Name column shows the reagent labels applied to the outputs. These are generated by the default output naming pattern for the Add Multiple Reagents process: {InputItemName}-{AppliedReagentLabels}.
When running the Add Multiple Reagents process, the output analyte artifact names show the reagent label applied, as the output naming pattern in the process configuration uses the {AppliedReagentLabels} variable.
By examining the REST API representation of the Add Multiple Reagents process, you can verify the following information:
The output analyte artifacts show a reagent-label element matching the name of the reagent type used.
The input analyte artifacts are not modified and do not have reagent labels added.
The input analyte artifacts do not have a location element, as they were displaced by the outputs.
You can only determine that reagent labels were applied. You cannot determine which reagent was applied.
The following shows an example of an output from an Add Multiple Reagents process when viewed with the REST API:
Although adding a reagent to a sample automatically assigns a reagent label, reagents and reagent labels are independent concepts in Clarity LIMS. There are ways to add reagent labels that do not involve reagents, and even when reagents are used, it is not possible to accurately determine the reagent used based on the reagent label attached to an artifact.
As previously shown in Update UDF/Custom Field Values for a Derived Sample Output, you can update the user-defined fields/custom fields of the derived samples (referred to as analytes in the API) generated by a step. This example uses batch operations to improve the performance of that script.
As of Clarity LIMS v5, the term user-defined field (UDF) has been replaced with custom field in the user interface. However, the API resource is still called UDF.
Master step fields—Configured on master steps. Master step fields only apply to the following:
The master step on which the fields are configured.
The steps derived from those master steps.
Global fields—Configured on entities (eg, submitted sample, derived sample, measurement, etc.). Global fields apply to the entire Clarity LIMS system.
Before you follow the example, make sure that you have the following items:
A global custom field named Library Size configured on the Derived Sample object.
A configured Library Prep step that applies Library Size to generated derived samples.
A Library Prep step that has been run and has generated derived samples.
A compatible version of API (v2 r21 or later).
In Clarity LIMS, the Record Details screen displays the information about the derived samples generated by a step. You can view the global fields associated with the derived samples in the Sample table.
The screenshot below shows the Library Size values for the derived samples.
Derived sample information is stored in the API in the analyte resource. Step information is stored in the process resource. Each global field value is stored as a udf.
An analyte resource contains specific derived sample details that are recorded in lab steps. Those details are typically stored in global fields, configured in the LIMS on the Derived Sample object and then associated with the step. When you update the information for a derived sample by updating the analyte API resource, only the global fields that are associated with the step can be updated.
To retrieve the process information, you can perform a GET on the created process URI, as follows:
You can now collect all of the output analytes and harvest their URIs:
After you have collected the output analyte URIs, you can retrieve the analytes with a batchGET() operation. The URIs must be unique for the batch operations to succeed.
You can now iterate through the retrieved list of analytes and set each analyte's 'Library Size' UDF to 25.
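The per-analyte update can be sketched in Python with the standard library's ElementTree, standing in for the Groovy helpers used by the attached script. The udf namespace URI is the standard Genologics one; the sample XML is a simplified placeholder, not a full analyte resource.

```python
import xml.etree.ElementTree as ET

UDF_NS = "http://genologics.com/ri/userdefined"  # assumed Genologics udf namespace

def set_udf_value(analyte_xml, name, value):
    """Set (or create) a udf:field element on an analyte node."""
    root = ET.fromstring(analyte_xml)
    for field in root.findall(f"{{{UDF_NS}}}field"):
        if field.get("name") == name:
            field.text = str(value)
            break
    else:
        # Field not present yet: create it
        field = ET.SubElement(root, f"{{{UDF_NS}}}field", name=name)
        field.text = str(value)
    return ET.tostring(root, encoding="unicode")

# Simplified analyte XML for illustration only
analyte = ('<artifact><udf:field xmlns:udf="http://genologics.com/ri/userdefined" '
           'name="Library Size">10</udf:field></artifact>')
updated = set_udf_value(analyte, "Library Size", 25)
```

In the real script, each updated node would then be sent back with batchPUT() rather than serialized locally.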
To update the analytes in the system, call batchPUT(). It will attempt to call a PUT for each node in the list. (Note that each node must be unique.)
In the Record Details screen, the Sample table now shows the updated Library Size.
UsingBatchPut.groovy:
Many lab processes create small files to summarize results. This example attaches a file to the LIMS server file storage repository, rather than linking to an existing file on the network. Linking to an existing file is covered in . Both examples are useful in practice, depending on your network, storage architecture, and the size of the file.
The file attachment method used in this example is equivalent to a lab scientist manually importing a file and attaching it to a file placeholder in the LIMS user interface.
Before POSTing to the files resource, you must make sure that the file exists in the location referenced by the content-location element. If the file does not exist in this location, the POST fails.
You can also use EPP/automation to handle attaching files. Though this method is not as flexible, it does attach a file automatically. With this method, the attached files are copied to the Clarity LIMS server and attached based on a LIMS ID in the created file name. Automation scripts are powerful, but they are only called when the process/step is created, whereas the code in this example can run at any time.
Before you follow the example, make sure that you have the following items:
Samples that have been added to the system.
A process with analytes (derived samples) as inputs and result files as outputs that has been run on some samples.
The file to be attached (example uses results.csv) that exists in the same directory from which the script is executed.
The JSch library that has been imported onto the classpath. JSch is used by some Groovy closures (refer to fileSFTPHelper) in the attached script for SFTP logic. For more information, refer to
After you run a process with a result file output, a file placeholder icon displays next to the result file.
Attaching a file to a process output requires the use of glsstorage and files resources.
glsstorage assigns a file location, much like a file placeholder in the user interface. The glsstorage POST is a request to create a unique file name (name and directory) for a future disk file. The files resource is used to associate the physical disk location to the ResultFile artifact. In combination, these resources allow flexible management of files, and the integration of external file manipulation and transfer tools.
You can attach a file as follows.
Create a storage location for the file, using a POST to glsstorage. This returns the XML needed to POST to the files resource. The XML includes a content-location.
Copy the file to the new storage location.
POST the file XML to attach the file to the result file placeholder.
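The file XML for step 1 can be sketched in Python with ElementTree instead of Groovy's StreamingMarkupBuilder. The namespace URI, artifact URI, and file path below are placeholders; only the attached-to and original-location elements are required for the glsstorage POST.

```python
import xml.etree.ElementTree as ET

FILE_NS = "http://genologics.com/ri/file"  # assumed file namespace URI

def build_glsstorage_payload(artifact_uri, original_location):
    """Build the minimal file XML required for a POST to glsstorage."""
    ET.register_namespace("file", FILE_NS)
    root = ET.Element(f"{{{FILE_NS}}}file")
    ET.SubElement(root, "attached-to").text = artifact_uri
    ET.SubElement(root, "original-location").text = original_location
    return ET.tostring(root, encoding="unicode")

xml_payload = build_glsstorage_payload(
    "http://yourserver/api/v2/artifacts/92-1001",  # hypothetical ResultFile artifact
    "/home/glsai/results.csv",                     # current location of the file
)
```

The XML returned by the POST adds a content-location element to this payload, which is then used in steps 2 and 3.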
The first step in attaching a file to the placeholder is to create a location for the file on the LIMS server, using a POST to the glsstorage resource.
A POST to glsstorage requires the attached-to and original-location child elements to be defined within the file XML content. (These are the only two elements required to post to glsstorage.)
The file XML content is created using Groovy's StreamingMarkupBuilder.
Our example code begins by defining the current location of the file in the variable fileOriginalLocation.
In addition, the artifactURI variable is defined as the URI with which to associate the file. In this case, that resource is a ResultFile artifact.
Before the POST to glsstorage, the previously created XML appears as follows:
The following code shows the POST to glsstorage. The XML returned by the POST to glsstorage is stored in the variable resolvedFileNode.
The XML returned by the POST method includes a new child element – content-location. The content-location is the new directory and file name on the LIMS server to which the file should be copied.
If the POST to glsstorage was unsuccessful, an XML document explaining the error is returned. For example, if the artifact specified by artifactURI does not exist, then the POST will fail, and resolvedFileNode will hold the content shown below:
The next step is to copy the file from current to new location. SFTP is used through the fileSFTPHelper closure.
If there has never been a file attached to the ResultFile placeholder, the directory specified by the content-location element will not exist.
The fileSFTPHelper Groovy closure takes care of creating the directory required, using the destRemoteFileURI parameter. It then copies the file to the new file location using SFTP.
To make sure the copy was successful, you can check that the sftpSuccess variable is true.
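The copy step can be sketched with local file operations standing in for SFTP (the attached script uses JSch over SFTP). The key behavior shown is creating the destination directory when no file has previously been attached to the placeholder.

```python
import os
import shutil
import tempfile

def copy_to_content_location(src, dest):
    """Create the destination directory if needed, then copy the file."""
    os.makedirs(os.path.dirname(dest), exist_ok=True)  # dir may not exist yet
    shutil.copyfile(src, dest)
    return os.path.exists(dest)

# Demonstration with a temporary directory as a stand-in for the file store
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "results.csv")
with open(src, "w") as f:
    f.write("sample,value\n")
sftp_success = copy_to_content_location(
    src, os.path.join(tmp, "files", "92-1001", "results.csv"))
```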
The glsstorage methods do not create, move, or delete disk files. Use file operations that fit within your scripting or system automation practices.
After you SFTP the file to the content-location, you can POST resolvedFileNode to the files (list) REST resource.
POSTs to this resource require file XML content with the attached-to, original-location, and content-location child elements defined.
After the POST to the files resource, the file information is attached to the resource specified in the attached-to child element:
The POST to the files (list) REST resource associates a physical file with the ResultFile artifact. In the user interface, this POST changes the icon from a placeholder to an attached file.
If the POST was successful, the updated XML is in the returnNode variable, and contains a LIMS ID and a URI, as shown in the following example:
The artifact resource for the result file now contains a file element with the file LIMS ID and URI:
When the script completes, an attached file icon will display in the LIMS client (Operations Interface shown here). The lab scientist can download the file from this view.
cookbookExamples.properties:
PostFileToProcess.groovy:
The Clarity LIMS API has batch retrieve endpoints for samples, artifacts, containers, and files. This article talks generically about links for any of those four entities.
When using the batch endpoints, you may need to process hundreds or thousands of links. Intuitively, a single API call containing all the links may seem the fastest way to retrieve the data. However, analysis of API performance shows that once the number of links exceeds a threshold, the time per object increases.
To retrieve the data most efficiently, issue multiple POSTs, each containing a batch of the optimal size. For a single sample (or other entity), a batch call takes longer than a GET to that entity's own endpoint. However, as soon as more than one or two entities are needed, the batch endpoint is more efficient.
Before you follow the example, be aware of the following information about optimal batch size:
The optimal size is dependent on your specific server and the amount of UDFs / custom fields or other data attached to the object being retrieved.
The optimal batch size may be different for artifacts, samples, files, and containers. For example, if the optimal size for samples is 500, 10 batches of 500 samples will retrieve the data faster than one batch of 5000.
You must also have a compatible version of API (v2 r21 or later).
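The batching strategy described above can be sketched as splitting a large list of links into optimal-sized POSTs. The optimal size of 500 and the URIs below are hypothetical; measure the right value for your server with the attached script.

```python
def chunk(links, batch_size):
    """Split a list of links into batches of at most batch_size."""
    return [links[i:i + batch_size] for i in range(0, len(links), batch_size)]

# Hypothetical sample links; in practice these come from the API
links = [f"http://yourserver/api/v2/samples/S-{n}" for n in range(5000)]
batches = chunk(links, 500)  # 10 batch POSTs instead of one oversized call
```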
Attached below is a simple Python script that times how long batch retrieves take for an array of batch sizes. Efficiency is measured as the duration of the call divided by the number of links posted.
The attached script has hard-coded parameters that define the range and increments of batch sizes to test, and the number of replicates for each size. These parameters are found on line 110 and may not require modification, as they are set to the following defaults:
For example, the above parameters will test the following sizes: 100, 125, 150, 175, 200, 225, 250, 275.
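The relationship between the range parameters and the tested sizes can be checked directly; the variable names here are illustrative stand-ins for the script's hard-coded values.

```python
# Assumed defaults: test sizes from 100 up to (but not including) 300, in steps of 25
start, stop, step = 100, 300, 25
sizes = list(range(start, stop, step))
# sizes covers 100 through 275 in increments of 25
```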
The parameters that are specific to your server are entered at the command line.
An example of the full syntax to invoke the script is as follows:
The script tracks how long each batch call takes to complete, and outputs a .txt file with the raw timing data and the batch size that gives the minimum time per link (ie, the most efficient size).
Viewing this data in a scatterplot, you can see that the optimal batch size for the artifacts/batch/retrieve endpoint is about 200 to 300 artifacts. This result is valid for artifacts only; each entity type (eg, sample, file, or container) should be evaluated separately.
The shortest time per artifact is the most efficient batch size, as shown in the following example:
By default, the LIMS send and receive timeouts are 60 seconds. Very large batch calls will not complete if they take longer than the configured timeout. This configuration is located at
BatchOptimalSizeTest.py:
When samples are processed in the lab, they are sometimes re-arrayed in complex ways that are pre-defined.
You can use the REST API and automation functionality to allow a user to initiate a step that:
Uses a file to define a re-array pattern
Executes the step using that re-array pattern. Because the pattern is pre-defined, this decreases the likelihood of errors in recording the re-array.
To accomplish this automation, you must be able to execute a step using the REST API. This example shows a simple step execution that you can apply to any automated step execution needed in your lab.
For a high-level overview of REST resource structure in Clarity LIMS, including how processes are the key to tracking work, see .
Before you follow the example, make sure that you have the following items:
Samples that have been added to the system.
A configured step/process that generates analytes (derived samples) and a shared result file.
Samples that have been run through the configured process/step.
A compatible version of API (v2 r21 or later).
Information about a step is stored in the process resource in the API.
Information about a derived sample is stored in the analyte resource in the API. This resource is used as the input and output of a step, and also used to record specific details from lab processing.
To run a step/process on a set of samples, you must first identify the set of samples to be used as inputs.
The samples that are inputs to a step/process can often be identified because they are all in the same container, or because they are all outputs of a previous step / process.
In this example, you run the step/process on the samples listed in the following table.
After you have identified the samples, use their LIMS IDs to construct the URIs for the respective analyte (derived sample) artifacts. The artifact URIs are used as the inputs in constructing the XML to POST and execute a process.
You can use StreamingMarkupBuilder to construct the XML needed for the POST, as shown in the following example code:
Executing a process uses the processexecution (prx) namespace (shown in bold in the code example above).
The required elements for a successful POST are:
type – the name of the process being run
technician uri – the URI for the technician that will be listed as running the process
input-output-map – one input output map element for each pair of inputs and outputs
input uri – the URI for the input artifact
output type – the type of artifact of the output
In addition, if the outputs of the process are analytes, then the following are also needed:
container uri – the URI for the container the output will be placed in
value – the well placement for the output
The process type, technician, input artifact, and container must all exist in the system before the process can be executed. So, for example, if there is no container with an empty well, you must create a container before running the process.
The XML constructed must match the configuration of the process type. For example, if the process is configured to have both samples and a shared result file as outputs, you must have both of the following:
An input-output-map for each pair of sample inputs and outputs
An additional input-output-map for the shared result file
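The processexecution payload described above can be sketched in Python with ElementTree. All URIs, the process type name, and the container are hypothetical placeholders, and the element ordering is a simplified sketch of the schema, not a definitive implementation.

```python
import xml.etree.ElementTree as ET

PRX_NS = "http://genologics.com/ri/processexecution"  # assumed prx namespace URI

def build_process_xml(process_type, technician_uri, inputs, container_uri):
    """Build a minimal processexecution document for one analyte output per input."""
    ET.register_namespace("prx", PRX_NS)
    root = ET.Element(f"{{{PRX_NS}}}process")
    ET.SubElement(root, "type").text = process_type
    ET.SubElement(root, "technician", uri=technician_uri)
    for well, input_uri in inputs:
        iomap = ET.SubElement(root, "input-output-map")
        ET.SubElement(iomap, "input", uri=input_uri)
        output = ET.SubElement(iomap, "output", type="Analyte")
        location = ET.SubElement(output, "location")
        ET.SubElement(location, "container", uri=container_uri)
        ET.SubElement(location, "value").text = well  # well placement, eg A:1
    return ET.tostring(root, encoding="unicode")

xml_payload = build_process_xml(
    "Cookbook Process",                                        # process type name
    "http://yourserver/api/v2/researchers/1",                  # technician URI
    [("A:1", "http://yourserver/api/v2/artifacts/AFF853A53AP11")],
    "http://yourserver/api/v2/containers/27-4057",             # existing empty container
)
```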
If the POST is successful, the process XML is returned:
If the POST is not successful, the XML returned will contain the error that occurred when the POST completed:
After the step / process has successfully executed, you can open the Record Details screen and see the step outputs.
RunningAProcess.groovy:
When working with files, you can do the following:
In Clarity LIMS, you often need to process multiple entities. To do this quickly and effectively, use batch operations, which retrieve multiple entities in a single interaction with the API instead of iterating over a list and retrieving each entity individually.
Batch operations greatly improve the performance of the script. These methods are available for containers and artifacts. In this example, both entities are retrieved using the batchGet() operation. If you would like to update a batch of output analytes (derived samples), you can increase the script execution speed by using batch operations. For more information, refer to and .
Before you follow the example, make sure that you have the following items:
Several samples have been added to the LIMS.
A process / step that generates derived samples in containers has been run on the samples.
A compatible version of API (v2 r21 or later).
When derived samples ('analyte artifacts' in the API) are run through a process / step, their information can be accessed by examining that process / step. In this example, we will retrieve all of the input artifacts and their respective containers.
To do this effectively using batch operations, we must collect all of the entities' URIs. These URIs must be unique, otherwise the batch operation will fail. Then, all of the entities can be retrieved in one action. It is important to note that only one type of entity can be retrieved in a call.
To retrieve the process step information, use the GET method with the process LIMS ID:
To retrieve the artifact URIs, collect the inputs of the process's input-output-map. A condition of the batchGET operation is that every entity to get must be unique. Therefore, you must call unique on your list.
You can now use batchGET to retrieve the unique input analytes:
The same can be done to gather the analytes' containers:
You have collected the unique containers in which the artifacts are located. By printing the name and URI of each container, an output similar to the following is obtained.
To retrieve the step information, use the GET method with the step LIMS ID:
To retrieve the artifact IDs, collect the inputs of the step's input-output-map. A condition of the batch retrieve operation is that every entity to get must be unique. To do this, you add the LUIDs to a set().
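The deduplication requirement can be sketched in plain Python. A set() discards duplicates but not order; dict.fromkeys preserves the original order, which can be convenient when matching results back to inputs. The URIs are placeholders.

```python
# Hypothetical input links; a pooling step can list the same input twice
input_uris = [
    "http://yourserver/api/v2/artifacts/2-1001",
    "http://yourserver/api/v2/artifacts/2-1002",
    "http://yourserver/api/v2/artifacts/2-1001",  # duplicate input
]

unique_uris = list(dict.fromkeys(input_uris))  # order-preserving deduplication
```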
You can now use the function getArtifacts(), which is included in the glsapiutils.py to retrieve the unique input analytes:
UsingBatchGet.groovy:
Batchexample.py:
When working with controls, you can automate their removal from a workflow. For more information, see .
This example attaches a file to the Clarity LIMS server file storage repository instead of linking to an existing file on the network. Linking to an existing file is covered in .
Both examples are useful in practice, depending on your network, storage architecture, and the size of the file. The file attachment method used in this example is equivalent to a lab scientist manually importing a file and attaching it to a file placeholder in the Clarity LIMS user interface.
Before you follow the example, make sure you have the following items:
Files can be attached to projects, steps or result file artifacts.
This script uses the Python package Requests and relies upon glsapiutil.py.
A compatible version of API (v2).
You can also use EPP/automation to handle attaching files. Though this method is not as flexible, it does attach a file automatically. With this method, the attached files are copied to the Clarity LIMS server and attached based on a LIMS ID in the created file name. Automation scripts are powerful, but they are only called when the process/step is created, whereas the code in this example can run at any time. For more information, see .
Attaching a file requires the use of glsstorage and files resources.
The glsstorage resource assigns a file location, much like a file placeholder in the user interface. The files resource is used to associate the physical disk location to the ResultFile artifact. In combination, these resources allow flexible management of files, and the integration of external file manipulation and transfer tools.
Attaching a file is done in three main steps:
Create a storage location for the file with a POST to glsstorage. This returns the XML needed to POST to the files resource. The XML includes a content-location.
Link the file to the placeholder (creating a unique file LIMS ID) by POSTing the glsstorage response to the /api/v2/files endpoint.
Using the /api/v2/files/{limsid}/upload endpoint, upload the file.
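The three-step sequence can be sketched as the endpoints and methods involved; no network calls are made here, and the server address and file LIMS ID are placeholders. In the real script, each POST is issued with Requests and the glsstorage response body is forwarded to the files endpoint.

```python
BASE = "http://yourserver/api/v2"  # placeholder server address

def upload_endpoint(file_limsid):
    """Step 3 target: the file content is POSTed here as multipart form data."""
    return f"{BASE}/files/{file_limsid}/upload"

steps = [
    ("POST", f"{BASE}/glsstorage"),       # 1. request a content-location
    ("POST", f"{BASE}/files"),            # 2. link placeholder; response carries LIMS ID
    ("POST", upload_endpoint("40-123")),  # 3. upload the bytes (hypothetical file ID)
]
```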
replace file APPLICATION EXAMPLE.py:
At times, control samples in the lab need only be used in a portion of a workflow. For example, E. coli genomic DNA is often prepared alongside samples in Library Preparation protocols and then validated during a Library Validation QC protocol to confirm that nothing went wrong during Library Preparation.
Because the utility of such a control sample is short-lived, there is no need to spend sequencing effort on it. In this scenario, it is advantageous to prevent control samples from advancing in the workflow automatically.
This can be accomplished through the API by implementing an EPP/automation script that removes the control at the end of a step. This example shows how to automate the removal of control samples at the end of a step and remove them from workflows. You can use this method on any step configured to generate individual derived sample (analyte) or ResultFile (measurement) outputs. The step can have any number of per-all-input result file (shared file) outputs, as they will be ignored by the script.
In the API, an artifact is an item generated by an earlier step. There are two types of artifacts: analyte (derived sample) and resultfile (measurement). In the Clarity LIMS web interface, the terms artifact, analyte, and resultfile have been replaced with derived sample or measurement.
Before you follow the example, make sure that you have done the following actions:
GLSRestApiUtils.groovy is located in your Groovy lib folder
You have downloaded the ControlSampleRemoval.groovy file and have placed it in /opt/gls/clarity/customextensions/
You have completed the required configuration steps, as described below
Before you can use this example script, you will need to complete the following configuration steps.
Complete the following steps in the LIMS Configuration area.
On the Lab Work tab, create a new master step named Cookbook Control Removal.
The master step may be of any type that generates derived samples or measurements.
The master step may generate any number of derived sample or measurement outputs.
On the Lab Work tab, create a new protocol named Cookbook Control Removal Protocol.
Add a new Cookbook Control Removal step to the protocol, basing it on the Cookbook Control Removal master step.
On the Automation tab, configure a step automation and name the automation Remove Controls.
In the Command Line text box, enter the following code example, modifying the file paths to match the Groovy installation on your server.
Enable the automation on the Cookbook Control Removal master step.
You can now configure the automation trigger on the step or the master step. If you configure the trigger on the master step, the settings are locked on all steps derived from the master step.
On the Lab Work tab, select the master step or step.
On the Master Step Settings or Step Settings form, in the Automation section, configure the automation trigger so that the script is initiated automatically at the end of the step.
Trigger Location: Step
Trigger Style: Automatic upon exit
From Consumables, select the Control Samples tab.
In the Control Samples tab, enable one or more control samples on the Cookbook Control Removal step.
On the Lab Work tab, create a workflow named Cookbook Control Removal Workflow.
This workflow should contain the Cookbook Control Removal Protocol. You can add the script to any step in any workflow, and you do not need to create a separate step to run it.
For this example, it can also be beneficial to add a second step (any type) after this removal step to make sure that the controls were removed.
The following table defines the three parameters/tokens used by the script. As of Clarity LIMS v5.0, the term command-line parameter has been replaced with token.
After the script has processed the parameters / tokens and ensured that all the required information is available, it can begin to process the samples to determine if they should be removed.
To begin, retrieve the list of permitted actions for the step from the API. This contains a list of the URIs for the input analytes and their next steps.
You can use this information to get the URI of the current step, which allows you to obtain the information for the step itself.
Next, look at the possible next steps that can be used in this step. In doing so, you are able to collect the next-step-uri values that are associated with the 'next step' names within the NEXT_STEPS map.
In this case you are looking for the URI associated with the Remove from workflow option.
After you have retrieved the URIs for the desired next steps, you can iterate through the actions list checking artifacts to see if they are controls. If the artifact associated with any next action is a control, change the next action of the artifact to be the removal next step URI you retrieved previously.
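The control-check loop can be sketched as pure logic: actions is a list of (artifact URI, next-action URI) pairs, and the control URIs stand in for the API lookup that the Groovy script performs on each artifact. All URIs are placeholders.

```python
REMOVE_URI = "uri-for-remove-from-workflow"  # placeholder next-step-uri

def reroute_controls(actions, control_uris, remove_uri):
    """Point every control artifact at the removal action; leave others unchanged."""
    return [
        (artifact, remove_uri if artifact in control_uris else next_step)
        for artifact, next_step in actions
    ]

actions = [("art/1", "next/A"), ("art/ctrl", "next/A")]
updated = reroute_controls(actions, {"art/ctrl"}, REMOVE_URI)
```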
Finally, send the change to the next step information to the desired endpoint, and then define the success message displayed to the user. The message lets you inform the user of the results of the script.
Assuming samples and a control sample have been placed in the Cookbook Control Removal Workflow Ice Bucket, you can proceed as normal through the step.
On the Assign Next Steps screen, provide a variety of Next Step values, if desired.
Proceed with the completion of the step. A message box will display, alerting you to the execution of a custom script.
When the script completes, a success message displays and the controls will have been removed from the workflow.
ControlSampleRemoval.groovy:
In Next Generation Sequencing (NGS), a large amount of data is generated in a single run. Because the volume of data is so large, it makes sense to link to data files as they exist on the network, rather than copying files to the Clarity LIMS file server.
This example shows how you can use the files resource to associate a process output result file with a file on the network using the HTTP protocol.
Before you follow the example, make sure you have the following items:
Samples that have been added to the system.
A NGS process that takes analyte (derived sample) inputs and creates a single result file output for each input.
A flow cell container with 8 rows and 1 column. Both rows and columns are numbered, starting at 1.
Samples that have been added to a flow cell, with the flow cell run through your NGS process.
An HTTP file store that has been set up in the Clarity LIMS configuration file.
The appropriate directory structure and files to be linked to your HTTP file store. For this example, you need a directory with the same name as the flow cell on which you run your process.
A subdirectory for each lane, named by lane number (eg, 1), containing the file you want to link.
The name of the file must be easily programmatically determined (eg, s_1_export.txt for lane number 1).
A compatible version of API v2.
This example assumes you are using the standard non-production scripting sandbox server, which uses Apache to serve files with HTTP. For more information on this server, see .
If you plan to use an alternative file storage configuration, contact the IT administrator of the Clarity LIMS server.
The administrator uses the omxprops-ConfigTool.jar configuration tool. For more information on this configuration tool, refer to Non-Production Scripting Sandbox Server - IT Admin Guide.
After running a NGS process on a full flow cell, in the Operations Interface, the Input/Output Explorer for the process run shows the relationship between the inputs in the flow cell and the result file output placeholders.
Associating a file that resides on a server that is accessible via the HTTP protocol requires the following steps:
Matching up the file with the correct file placeholder.
Constructing the XML to POST to the files resource.
A POST to the files resource, which associates the file and creates the link between the artifact and the file.
The following code snippet shows how, by starting with only the name of the flow cell, we can obtain the API resource for the flow cell container and the NGS process run.
Because the location of all the files we want to link is known, the combination of the flow cell and NGS process contains all the additional information required to link each file to the correct file placeholder, so that the results are associated with the correct sample.
Get a containers list filtering by container name, where the name is equal to the name of the directory. Your container name must be unique in the system as only one result is expected by this script.
Get the full XML for the container. The container list only contains the URI and LIMS ID of the container and we require all the placements as well.
Using the LIMS ID of one of the placements in the container, use the API to find the processes that have used it as an input. For this example, the expectation is that only one process has been run on the flow cell. If that is not the case, the first one returned is used.
Complete the XML with the input-output-map and make another call to the API to retrieve the complete process. As for the container case, the list resource just gives us the URI and LIMS ID of the process.
The code in this example includes some helper closures and a persistent single HTTP connection. The persistent connection helps improve the performance of the script.
The location of the file to be linked is constructed for each POST, based on the directory structure, and is stored in the FileURI variable for each input-output-map element in the process. The value provided here is used in both the <content-location> and <original-location> elements of the file node that will be posted.
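The directory convention described above can be sketched as a small helper that builds the content-location for each lane. The HTTP root and flow cell name are assumptions for illustration.

```python
HTTP_ROOT = "http://fileserver.example.com"  # HTTP file store root (assumed)

def lane_file_uri(flowcell, lane):
    """Build the lane file URI from the flow cell directory and lane number."""
    return f"{HTTP_ROOT}/{flowcell}/{lane}/s_{lane}_export.txt"

file_uri = lane_file_uri("FC1234", 1)  # FC1234 is a hypothetical flow cell name
```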
The location of the result file artifact is stored in the variable outputURI. It is obtained from the <output> element in the input-output-map:
The first requirement is to create an XML resource for the file you want to submit to the system.
When you construct the individual file XML, you must specify the content-location, attached-to, and original-location child nodes.
In this example, you are posting a file link to a file in an HTTP file storage location, so the content-location and original-location are the same.
The POST requires an XML node as input. Therefore, you must convert fileDoc from a writable closure to an XML node using GLSGeneusRestApiUtils.xmlStringToNode.
You can then submit the new file resource to the API via a POST.
The following example shows the XML for FileNode, which is used in the POST:
The POST command does the following:
POSTs the XML constructed by StreamingMarkupBuilder to the files resource.
Adds a link from the files list resource to the new file.
After the POST executes, a link to the new file resource is added to the ResultFile artifact resource – because it was specified in the attached-to field of the new file resource.
The following code is the XML for the ResultFile artifact after the POST, which contains a link to the new file resource on the last line:
The file resource available from the API is slightly altered from the XML submitted to the POST. The file is assigned a LIMS ID and a URI.
The following code is the XML resource for the file that is available from the API:
The POST to the files resource associates a physical file to the ResultFile artifact. In the user interface, this POST changes the file icon.
Before POSTing to the files resource, you must make sure that the file exists in the location referenced by the content-location element. If the file does not exist in this location, the POST fails.
The value returned from the POST is stored in the returnNode variable.
If the command was successful, the above XML is returned. If the command was unsuccessful, an XML document explaining the error is returned. For example, if a file was already attached to the result file at artifactURI, the following document is stored in returnNode, as returned from the POST. In this case, the file that is already attached to the artifact must first be removed via the file resource DELETE method.
After a successful POST HTTP command, in the Operations Interface, the process summary view's Input/Output Explorer shows all the attached files.
AttachingFileNotOnClarity.groovy:
For more information, refer to and .
The omxprops-ConfigTool.jar tool is at /opt/gls/clarity/tools/propertytool/. For more information, see .
As in the example from , you create the resource using StreamingMarkupBuilder, which you use in a POST to the files resource.

| Argument | Description |
|---|---|
| -u | username |
| -p | password |
| -s | hostname, including "/api/v2" |
| -t | entity (either: artifact, sample, file, container) |
| Submitted Sample Name | Derived Sample Name | Derived Sample LIMS ID | Container LIMS ID | Container Type | Well |
|---|---|---|---|---|---|
| Soleus-1 | Soleus-1 | AFF853A53AP11 | 27-4056 | 96 well plate | A:1 |
| Soleus-2 | Soleus-2 | AFF853A54AP11 | 27-4056 | 96 well plate | A:2 |
| Soleus-3 | Soleus-3 | AFF853A55AP11 | 27-4056 | 96 well plate | A:3 |
| Token | Description |
|---|---|
| -s {stepURI} | The protocol step URI, in the following form: http://<YourIP>/api/v2/steps/<ProtocolStepLimsid> |
| -u {username} | The LIMS username (Required) |
| -p {password} | The LIMS password (Required) |