This topic forms a natural partner to the https://github.com/illumina-swi/clarity-int-docs/blob/main/docs/api-docs/application-examples/page-15/starting-a-protocol-step-via-the-api.md application example. When protocol steps are being initiated programmatically, we must know how to advance the step through the various states to completion.
Advancing a step is a simple task: it requires little more than the steps/advance API endpoint.
Let us consider a partially completed step with ID 24-1234. To advance the step to the next state, the following is required:
Perform a GET to the resource .../api/v2/steps/24-1234, saving the XML response.
POST the XML from step 1 to .../api/v2/steps/24-1234/advance, and monitor the returned XML for success.
If successful, the protocol step advances to its next state, just as if the lab scientist had advanced it via the Clarity LIMS interface.
Advancing a protocol step that is in its final state completes the step.
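The two-call sequence can be sketched with the Python standard library. This is an illustrative sketch, not the attached script; the server URL and credentials are placeholders:

```python
import base64
import urllib.request

BASE = "https://clarity.example.com/api/v2"    # placeholder server
AUTH = "Basic " + base64.b64encode(b"apiuser:password").decode()  # placeholder credentials

def advance_uri(base, step_id):
    # The advance resource lives directly under the step resource.
    return "{}/steps/{}/advance".format(base, step_id)

def advance_step(step_id):
    """GET the step XML, then POST it back unchanged to steps/<id>/advance."""
    step_uri = "{}/steps/{}".format(BASE, step_id)
    req = urllib.request.Request(step_uri, headers={"Authorization": AUTH})
    step_xml = urllib.request.urlopen(req).read()
    post = urllib.request.Request(
        advance_uri(BASE, step_id),
        data=step_xml,
        headers={"Authorization": AUTH, "Content-Type": "application/xml"},
        method="POST",
    )
    # The returned XML should be inspected for an exception message.
    return urllib.request.urlopen(post).read().decode()
```

Calling advance_step("24-1234") on a step in its final state completes it, mirroring the behavior described above.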
The Python advanceStep(STEP_URI) method shown below advances a step through its various states. The URI of the step to be advanced/completed is passed to the method.
The glsapiutil.py file is placed on the Clarity LIMS server, in the /opt/gls/clarity/customextensions folder. You will find the latest glsapiutil (and glsapiutil3) Python libraries on our GitHub page.
The example code is provided for illustrative purposes only. It does not contain sufficient exception handling for use 'as is' in a production environment.
Illumina sequencing protocols include a BCL Conversion and Demultiplexing step. This stage allows you to select the command options for running bcl2fastq2. bcl2fastq2 must be initiated through a command-line call on the BCL server.
This example allows you to initiate the bcl2fastq2 conversion software by clicking a button in BaseSpace Clarity LIMS.
Step Configuration
The "out of the box" step is configured to include the following UDFs / custom fields. You can select these options on the Record Details screen. You can also configure additional custom options.
The main method in the script is convertData(). This method performs several operations:
The script determines the run folder. The link to the run folder is attached as a result file to the sequencing step.
The script searches for the appropriate sequencing step and downloads the result file containing the link.
The script changes directories into the run folder.
The script gathers all the step level UDFs / custom fields from the BCL Conversion and Demultiplexing step.
Using the information gathered, the script builds the command that is executed on the BCL server. The command consists of two parts: cd (changing directory) into the run folder, and executing the bcl2fastq2 command with the selected options.
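The command assembly can be sketched along these lines. The bcl2fastq install path and the option names in the comments are illustrative placeholders, not values taken from the attached script:

```python
import shlex

BCL2FASTQ = "/usr/local/bin/bcl2fastq"  # placeholder install path (see line 17 of the script)

def build_command(run_folder, options):
    """Compose the two-part shell command: cd into the run folder, then run
    bcl2fastq with the options gathered from the step-level custom fields."""
    parts = [BCL2FASTQ]
    for flag, value in sorted(options.items()):
        if value is True:                      # checkbox-style fields become bare flags
            parts.append(flag)
        elif value not in (None, False, ""):   # skip unset fields
            parts.extend([flag, str(value)])
    return "cd {} && {}".format(shlex.quote(run_folder), " ".join(parts))
```

For example, build_command("/data/runs/210131_RUN", {"--no-lane-splitting": True}) yields a single "cd ... && ..." string suitable for a shell call on the BCL server.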
This script must be copied to the BCL server, because it is executed by the remote Automated Informatics (AI) / Automation Worker (AW) node on the BCL server.
By default, the remote AI / AW node does not come with a custom extensions folder. Therefore, if this script is the first script on the server you can create a customextensions folder in /opt/gls/.
It is not recommended to place the customextensions folder inside the remoteai folder, as the remoteai folder can be overwritten.
When uploading the script, ensure the following:
The path to the bcl2fastq application is correct (line 17)
The sequencing process type matches exactly the name of the process type / master step the artifact went through (the -d parameter)
The customextensions folder contains both glsapiutil.py and glsfileutil.py modules. See #assumptions-and-notes.
Parameters
The script accepts the following parameters:
An example of the full syntax to invoke the script is as follows:
You are running a version of Python supported by Clarity LIMS, as documented in the Clarity LIMS Technical Requirements.
The attached files are placed on the LIMS server, in the /opt/gls/clarity/customextensions folder.
The Python API Library (glsapiutil.py) is placed on the Clarity LIMS server, in the /opt/gls/clarity/customextensions folder. You can download the latest glsapiutil library from the GitHub page.
The example code is provided for illustrative purposes only. It does not contain sufficient exception handling for use 'as is' in a production environment.
kickoff_bcl2fastq2.py:
glsfileutil.py:
Stakeholders are interested in the progress of samples as they move through a workflow. E-mail alerts of events can provide them with real-time notifications.
Some possible uses of notifications include the following:
Completion of a workflow for billing department
Manager review requests
Notice of new files added via the LabLink Collaborations Interface
Updates on samples that are not following a standard path through a workflow
Clarity LIMS provides a simple way of accomplishing this using a combination of the Clarity LIMS API, EPP / automation triggers, and Simple Mail Transfer Protocol (SMTP).
The send_email() method uses the Python smtplib module to create a Simple Mail Transfer Protocol (SMTP) object to build and send the email. The attached script does the following:
Gathers relevant data from the Clarity LIMS API endpoints.
Generates an email body according to a template.
Calls the send_email() function.
Connect to Clarity SMTP with:
host='localhost', port=25
Because of server restrictions, the script can send emails from only:
noreply.clarity@illumina.com
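A minimal sketch of such a send_email() helper using smtplib is shown below. This is illustrative, not the attached script; the subject, body, and recipient addresses are whatever your template generates:

```python
import smtplib
from email.mime.text import MIMEText

SENDER = "noreply.clarity@illumina.com"   # the only sender the server permits

def build_message(subject, body, recipients):
    """Build a plain-text message from the generated email body."""
    msg = MIMEText(body)
    msg["Subject"] = subject
    msg["From"] = SENDER
    msg["To"] = ", ".join(recipients)
    return msg

def send_email(subject, body, recipients):
    """Send via the local SMTP relay on the Clarity LIMS server."""
    msg = build_message(subject, body, recipients)
    with smtplib.SMTP(host="localhost", port=25) as smtp:
        smtp.sendmail(SENDER, recipients, msg.as_string())
```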
The automation / EPP command is configured to pass the following parameters:
Example command line:
The script can be executed using a Clarity LIMS automation / EPP command, and triggered by one of the following methods:
Manually, via a button on the Record Details screen.
Automatically, at a step milestone (on entry to or exit from a screen).
The script can also be triggered outside of a Clarity LIMS workflow, using a time-based job scheduler such as cron.
You are running a version of Python that is supported by Clarity LIMS, as documented in the Clarity LIMS Technical Requirements.
The example code is provided for illustrative purposes only. It does not contain sufficient exception handling for use 'as is' in a production environment.
emails_from_clarity.py:
emails_attachmentPOC.py:
Many facilities have linear, high-throughput workflows. Finishing the current step, finding the samples queued for the next step, putting those samples into the virtual ice bucket, and then actually starting the next step can all be seen as unnecessary overhead.
This solution provides a methodology that allows a script to finish the current step, and start the next.
We have already illustrated how a step can be run in its entirety via the API. Finishing one step and starting the next would seem a straightforward exercise, but, as most groups have discovered, there is a catch.
Consider the step to be autocompleted as Step A, with Step B being the next step to be autostarted.
The script to complete Step A and start Step B will be triggered from Step A (in the solution defined below, it will be triggered manually by clicking a button on the Record Details screen).
As Step A invokes a script:
The step itself cannot be completed, because the script has not completed successfully; and
The script cannot successfully complete unless the step has been completed.
To break this circle, the script initiated from Step A must invoke a second script. The first script then completes successfully, and the second script is responsible for finishing Step A and starting Step B.
The runLater.py script launches the subsequent script (finishStep.py) by handing it off to the Linux 'at' command (see #h_3d84355e-7aa7-45b6-b8e8-54029669870d). This process effectively 'disconnects' the execution of the finishStep.py script from runLater.py, allowing runLater.py to complete successfully, and then Step A to be completed.
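The handoff to 'at' can be sketched as follows. This is an illustrative subprocess-based version, not the attached runLater.py; the command line passed in is whatever finishStep.py invocation the EPP string supplies:

```python
import subprocess

def at_payload(command_line):
    # 'at' reads the job's commands from stdin, one command per line.
    return (command_line + "\n").encode()

def run_later(command_line):
    """Hand the command off to the Linux 'at' queue. 'at now' runs it
    immediately, but in a process owned by atd, disconnected from us."""
    proc = subprocess.Popen(["at", "now"], stdin=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    proc.communicate(input=at_payload(command_line))
    return proc.returncode
```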
Parameters
The script accepts the following parameter:
Although this script accepts just a single parameter, the value of this parameter is quite complex, since it is the full command line and parameters that need to be passed to the finishStep.py script.
Following is an example of a complete EPP / automation command-line string that will invoke runLater.py, and in turn finishStep.py:
The finishStep.py script completes and starts Steps A and B respectively.
Parameters
The script accepts the following parameters:
The first thing this script does is sleep for five seconds before the main method is called. This delay allows the application to detect that the invoking script has completed successfully, which in turn allows this script to work. The duration of the delay can be adjusted to allow for your server load, and so on.
The central method (finishStep()) assumes that the current step (Step A) is on the Record Details screen. Next, the method calls advanceStep() to move the step onto the screen where the next step can be selected. By default, the routeAnalytes() method selects the first available next step in the protocol / workflow for each analyte. Then the advanceStep() method is called, which completes Step A.
If the script has been passed a value of 'START' for the -a parameter, the next step (Step B) will now be started. The startNextStep() method handles this process, carrying out the following high-level actions:
It determines the default output container type for Step B.
It invokes an instance of Step B on the analytes routed to it via the routeAnalytes() method.
If the Step B instance was invoked successfully, it gathers the LUID of the step.
At this point, it is possible that the apiuser user has started the step. This situation is not ideal: we would like the step to show up in the Work in Progress section of the GUI for the user who initiated Step A.
This script makes a note of the current user and then updates the new instance of Step B with the current user. This update can fail if Step B has mandatory fields that must be populated on the Record Details screen (see #h_3d84355e-7aa7-45b6-b8e8-54029669870d). It can also fail if the step has mandatory reagent kits that must be populated on the Record Details screen.
If the update does fail, the step has been started, but the apiuser user still owns it. It can be continued as normal.
The attached files are placed on the Clarity LIMS server, in the following location: /opt/gls/clarity/customextensions
The attached files are readable by the glsai user.
You must update the HOSTNAME global variable so that it points to your Clarity LIMS server.
The example code is provided for illustrative purposes only. It does not contain sufficient exception handling for use 'as is' in a production environment.
The scripts use rudimentary logging. After the scripts are installed and validated, these logs are of limited value, and their creation can likely be removed from the scripts.
The Linux 'at' command might not be installed on your server. For installation instructions, refer to the Linux documentation.
For a solution to this issue, see Validating Process/Step Level UDFs.
runLater.py:
clarityHelpers.py:
finishStep.py:
Workflows do not always have a linear configuration. There are situations when samples progressing through a workflow must branch off and be directed to different stages, or even different workflows. You can add samples to a queue using the /api/{version}/route/artifacts endpoint.
In the lab, decisions are made dynamically. At the initiation of this example workflow, it is not known whether the sample is destined to be sequenced on a HiSeq or MiSeq instrument. As a result, the derived samples must be routed to a different workflow stage mid-workflow.
This programmatic approach to queuing samples can be used many times throughout a workflow. This approach eliminates the need to write multiple complex scripts - each of which must be maintained over time.
This example describes a Python script that takes instruction from a .csv template file. Removing any hard-coded reference to specific UDFs / custom fields or workflow stages from the script allows for easy configuration and support. All business logic can be implemented solely through generation of a template and EPP / automation command.
The template contains UDF / custom field names and values. If samples in the active step have fields with matching values, they are queued for the specified workflow stage.
The step is configured to display the two checkbox analyte UDFs / derived sample custom fields. These fields are used to select the destination workflow stages for each derived sample / analyte. You can queue the sample for HiSeq, MiSeq, or both.
In the preceding example of the Sample Details screen, the user selected:
Two samples to be queued for HiSeq
Two samples for MiSeq
Two samples where routing is not selected
Two samples for both HiSeq and MiSeq
In the preceding example of the Step Details screen, the user selected:
A step level UDF / custom field to assign all samples to a given step. All output samples are routed to the associated step. Samples can be duplicated in the first step of the following protocol if:
Samples are routed at the last step of a protocol, and;
The action of the next step is Mark Protocol as Complete.
The duplication is due to the Next Steps action queuing the artifact for its default destination in addition to the script routing the same artifact.
The solution here is to set the default Next Steps action to Remove from Workflow instead. This action can be automated using the Lab Logic Toolkit or the API.
On the protocol configuration screen, make sure that Start Next Step is set to Automatic.
Each UDF name and value pair correspond to a workflow and stage combination as the destination to route the artifact.
The UDF name in the template could be configured as a step or analyte level UDF. If a step level UDF has a value that is specified in the template, all analytes in the step are routed.
The UDFs can be of any type (Numeric, Text, Checkbox). If a checkbox UDF is used, the available values are true or false.
UDF values in the template are not case-sensitive.
The template requires four columns:
UDF_NAME
UDF_VALUE
WORKFLOW_NAME
STAGE_NAME
For this example, the template values would be:
UDF_NAME, UDF_VALUE, WORKFLOW_NAME, STAGE_NAME
Route all samples to: Step A, Workflow A, Stage A
Route all samples to: Step B, Workflow B, Stage B
Route A, True, Workflow A, Stage A
Go to HiSeq, True, TruSeq Nano DNA for HiSeq 5.0, Library Normalization (Illumina SBS) 5.0
Go to MiSeq, True, TruSeq DNA PCR-Free for MiSeq 5.0, Sort MiSeq Samples (MiSeq) 5.0
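A sketch of how such a template can be parsed and matched against an artifact's UDF values follows. This is illustrative, not the attached route_by_template.py; the matching is deliberately case-insensitive, per the note above:

```python
import csv
import io

def parse_template(text):
    """Parse the four-column routing template into a list of rules."""
    rules = []
    reader = csv.reader(io.StringIO(text))
    next(reader)                                  # skip the header row
    for row in reader:
        if len(row) != 4:
            continue                              # ignore malformed lines
        udf_name, udf_value, workflow, stage = (cell.strip() for cell in row)
        rules.append({"udf": udf_name, "value": udf_value.lower(),
                      "workflow": workflow, "stage": stage})
    return rules

def matching_stages(rules, udfs):
    """Return the (workflow, stage) pairs whose rule matches the artifact's
    UDF values; comparison is case-insensitive, as the template requires."""
    return [(r["workflow"], r["stage"]) for r in rules
            if str(udfs.get(r["udf"], "")).lower() == r["value"]]
```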
This script might be used numerous times in different EPP / automation scripts, with each referencing a different template.
Due to restrictions on file server access, this script accepts routing template instructions using a string EPP parameter with lines separated by a newline character ('\n'). The following example shows how this parameter string would be added to represent the previous template:
The EPP/automation that calls the script must contain the following parameters:
*Either --template or --template_string is required. If both are provided, --template_string is used.
When the --input parameter is used, the script routes input artifacts instead of the default output artifacts. UDF values of the input artifacts (instead of outputs) are checked against the template file.
An example of the full syntax to invoke the script is as follows:
Or, if you wish to route the inputs instead of outputs:
When the Record Details screen is entered, the UDF / custom field checkboxes or drop-down options specify to which workflow/stage combination each derived sample is sent.
The first important piece of information required is the URI of the destination stage. There is a unique stage URI assigned to each workflow and stage combination.
A stage URI can change across LIMS instances (such as switching from Dev to Prod). Therefore, the script gathers the stage URI from the workflow and stage name. This process occurs even when the workflows are identical.
The main method in the script is routeAnalytes() and it carries out several operations:
Gathers the information for the process that triggered the script, including output (or input) analytes.
For each analyte, evaluates which UDFs have been set, and adds the analyte to a list of analytes to route.
Creates the XML message for each stage.
Does a POST to the REST API to add the analytes to the queue in Clarity LIMS.
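The XML message built in the last two operations can be sketched as follows. The namespace URI and element layout reflect our understanding of the routing schema and should be verified against your server:

```python
from xml.etree import ElementTree as ET

RT_NS = "http://genologics.com/ri/routing"   # assumed namespace of the route endpoint
ET.register_namespace("rt", RT_NS)

def build_routing_xml(assignments):
    """assignments maps a destination stage URI to the list of artifact URIs
    to queue there; returns the document to POST to .../route/artifacts."""
    root = ET.Element("{%s}routing" % RT_NS)
    for stage_uri, artifact_uris in sorted(assignments.items()):
        assign = ET.SubElement(root, "assign", {"stage-uri": stage_uri})
        for uri in artifact_uris:
            ET.SubElement(assign, "artifact", {"uri": uri})
    return ET.tostring(root, encoding="unicode")
```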
This example, while in itself useful, also serves to demonstrate a more general concept. Routing artifacts is valuable in any situation where a sample must be queued for a stage outside of the usual order of a workflow. This routing is applicable even when routing newly submitted samples to the first stage in a workflow.
For more information, see the artifact/route REST API documentation.
You are running a version of Python supported by Clarity LIMS, as documented in the Clarity LIMS Technical Requirements.
The attached files are placed on the LIMS server, in the /opt/gls/clarity/customextensions folder.
The Python API Library (glsapiutil.py) is placed on the Clarity LIMS server, in the /opt/gls/clarity/customextensions folder. You can download the latest glsapiutil library from the GitHub page.
The example code is provided for illustrative purposes only. It does not contain sufficient exception handling for use 'as is' in a production environment.
routing_template.csv:
route_by_template.py:
When processing samples, there can be circumstances in which you must add downstream samples to additional workflows. This sample addition is not easy to achieve using the Clarity LIMS interface, but is easy to do via the API.
This example provides a Python script that can be used to add samples to an additional workflow step. The example also includes information on the key API interactions involved.
It is the outputs, not the inputs, of the process that are added to the workflow step. (If you would like to add the inputs, changing this step is simple.)
This example is an add function, not a move. If you would like to remove the samples from the current workflow, you can arrange to do so by building an <assign> element.
The process is configured to produce analyte outputs, and not result files.
The script accepts the following parameters:
The step URI (-s) parameter is used to report a meaningful message and status back to the user. These reports depend upon the outcome of the script.
Once the parameters have been gathered, and the helper object defined in glsapiutil.py has been instantiated and initialized, its methods can be called to take care of the RESTful GET/PUT/POST functionality, leaving the script to call the following functions:
getStageURI()
routeAnalytes().
The getStageURI() function converts the workflowname and stagename parameters into a URI that is used as the assign element, for example:
The routeAnalytes() function gathers the outputs of the process, and harvests their URIs to populate in the <artifact> elements. This function also uses the reporting mechanism based upon the step URI parameter.
The crucial resource in this example is the route/artifacts resource. This API endpoint can only be POSTed to, and accepts XML of the following form:
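The original XML sample is not reproduced here; based on the routing schema as we understand it, the payload has roughly the following shape (the namespace, server host, and URIs are placeholders to verify against your instance):

```xml
<rt:routing xmlns:rt="http://genologics.com/ri/routing">
  <assign stage-uri="https://yourserver/api/v2/configuration/workflows/101/stages/500">
    <artifact uri="https://yourserver/api/v2/artifacts/2-111"/>
    <artifact uri="https://yourserver/api/v2/artifacts/2-222"/>
  </assign>
</rt:routing>
```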
For more information, refer to the route/artifacts REST API documentation. Also useful are the configuration/workflow resources, both single and list.
The attached file is placed on the Clarity LIMS server, in the /opt/gls/clarity/customextensions folder.
The Python API Library (glsapiutil.py) is placed on the Clarity LIMS server, in the /opt/gls/clarity/customextensions folder. You can download the latest glsapiutil library from our GitHub page.
You must update the HOSTNAME global variable such that it points to your Clarity LIMS server.
The example code is provided for illustrative purposes only. It does not contain sufficient exception handling for use 'as is' in a production environment.
addToStep.py:
The BaseSpace Clarity LIMS interface offers tremendous flexibility when placing the outputs of a step into new containers. If your protocol step always places samples using the plate map of the input plate, it makes sense to automate sample placement.
This example provides a script that allows sample 'autoplacement' to occur, and describes how the script can be triggered.
This script is similar in concept to the Automatic Placement of Samples Based on Input Plate Map example. The difference is that this updated example can work with multiple input plates, which can be of different types.
In this example, samples are placed according to the following logic:
The process type / master step is configured to produce just one output analyte (derived sample) for every input analyte.
The output analytes are placed on the same type of container as the corresponding source plate.
Each occupied well on the source plate populates the corresponding well on the destination plate.
The destination plate is named such that it has the text 'DEST-' prepended to the name of its corresponding source plate.
In this example, the step is configured to invoke the script on entry to the Sample Placement screen.
The EPP / automation command is configured to pass the following parameters:
An example of the full syntax to invoke the script is as follows:
The main method in the script is autoPlace(). This method executes several operations:
The creation of the destination plates is effected by calls to createContainer().
It harvests just enough information so that the objects required by the subsequent code can retrieve the required objects using the batch API operations. This involves using some additional code to build and manage the cache of objects retrieved in the batch operations, namely:
cacheArtifact()
prepareCache()
getArtifact()
The cached analytes are then accessed. After the source well to which the analyte maps has been determined, the output placement can be set. This information is presented in XML that can be POSTed back to the server in the format required for the placements resource.
After all the analytes have been processed, the placements XML is further supplemented with required information, and POSTed to the ../steps/<stepID>/placements API resource.
Finally, a meaningful message is reported back to the user via the ../steps/<stepID>/programstatus API resource.
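The placement logic described above can be sketched as a pure function. This is illustrative, not the attached script; the artifact URIs, plate names, and well labels in the test are placeholders:

```python
def plan_placements(analytes):
    """analytes: (analyte_uri, source_container_name, well) tuples.
    Implements the logic above: one destination plate per source plate,
    named 'DEST-' + source name, with each analyte keeping its well."""
    destinations = []
    placements = []
    for uri, container, well in analytes:
        dest = "DEST-" + container
        if dest not in destinations:          # first analyte seen from this plate
            destinations.append(dest)
        placements.append({"artifact": uri, "container": dest, "well": well})
    return destinations, placements
```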
The attached file is placed on the Clarity LIMS server, in the /opt/gls/clarity/customextensions folder.
The Python API Library (glsapiutil.py) is placed on the Clarity LIMS server, in the /opt/gls/clarity/customextensions folder. You can download the latest glsapiutil library from our GitHub page.
The HOSTNAME global variable must be updated so that it points to your Clarity LIMS server.
The example code is provided for illustrative purposes only. It does not contain sufficient exception handling for use 'as is' in a production environment.
autoplaceSamplesDefaultMulti.py:
In some facilities, when samples are initially submitted into BaseSpace Clarity LIMS, it has already been determined which samples are combined to give pooled libraries. In such cases, it is desirable to automate the pooling of samples within the protocol step. Doing so means that the lab scientist does not have to manually pool the samples in the Clarity LIMS interface. This automatic pooling saves time and effort and reduces errors.
This example provides a script that allows 'autopooling' to occur. It also describes how the script can be triggered, and what the lab scientist sees when the script is running.
The attached script relies upon a user-defined field (UDF) / custom field, named Pooling Group, at the analyte (sample) level.
This UDF is used to determine the constitution of each pool: samples that share a common Pooling Group value are combined to create a pool, which is named after that value.
For example, consider the Operations Interface (LIMS v4.x & earlier) Samples list shown below. The highlighted samples have a common Pooling Group value. Therefore, we can expect that they will be combined to create a pool named 210131122-pg1.
In this example, the Pooling protocol step is configured to invoke the script as soon as the user enters the step's Pooling screen.
The EPP / automation command is configured to pass the following parameters:
An example of the full syntax to invoke the script is as follows:
When the lab scientist enters the Pooling screen, a message similar to the following displays:
When the script has completed, the rightmost Placed Samples area of the Placement screen displays the auto-created pools:
At this point, the lab scientist can review the constituents of each pool, and then complete the protocol step as normal.
The main methods of interest are autoPool() and getPoolingGroup().
The autoPool() method harvests just enough information so that the objects required by the subsequent code in the method can retrieve the required objects using the 'batch' API operations. This involves using additional code to build and manage the cache of objects retrieved in the batch operations, namely:
cacheArtifact()
prepareCache()
getArtifact()
After the cache of objects has been built, each artifact is linked to its submitted sample. The getPoolingGroup function harvests the Pooling Group UDF value of the corresponding submitted sample.
The script now understands which artifacts are to be grouped to produce the requested pools. An appropriate XML payload is constructed and then POSTed to the ../steps/<stepID>/placements API resource.
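The grouping logic can be sketched as follows. This is illustrative, not the attached autopoolSamples.py; the artifact LIMS IDs in the test are placeholders:

```python
from collections import OrderedDict

def group_into_pools(artifacts):
    """artifacts: (artifact_uri, pooling_group) pairs, where pooling_group is
    the 'Pooling Group' value of the linked submitted sample. Returns an
    ordered map of pool name -> constituent artifact URIs."""
    pools = OrderedDict()
    for uri, group in artifacts:
        pools.setdefault(group, []).append(uri)
    return pools
```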
The attached file is placed on the Clarity LIMS server, in the /opt/gls/clarity/customextensions folder.
The example code is provided for illustrative purposes only. It does not contain sufficient exception handling for use 'as is' in a production environment.
autopoolSamples.py:
Some steps produce data that you would like your collaborators to have access to.
This example provides an alternative method and uses a script to publish the files programmatically via the API.
In this example, suppose we have a protocol step, based upon a Sanger/capillary sequencing workflow, that produces up to two files per sample (a .seq and a .ab1 file).
Our example script runs at the end of the protocol step. The script publishes the output files so that they are available to collaborators in the LabLink Collaborations Interface.
The EPP / automation command is configured to pass the following parameters:
An example of the full syntax used to invoke the script is as follows:
After the script has completed its execution, collaborators are able to view and download the files from the LabLink Collaborations Interface.
The main method used in the script is publishFiles(). The method in turn carries out several operations:
The limsids of the step's artifacts are gathered, and the artifacts are retrieved, in a single transaction using the 'batch' method.
Each artifact is investigated. If there is an associated file resource, its limsid is stored.
The files resources are retrieved in a single transaction using the 'batch' method.
For each file resource, the value of the <is-published> node is set to 'true'.
The files resources are saved back to Clarity LIMS in a single transaction using the 'batch' method.
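The per-file update in step 4 can be sketched as a small helper that edits one file resource's XML. The element layout shown is simplified for illustration (the real resource is namespaced and carries more children):

```python
from xml.etree import ElementTree as ET

def mark_published(file_xml):
    """Set the <is-published> node of one file resource to 'true',
    creating the node if the resource did not carry one."""
    root = ET.fromstring(file_xml)
    node = root.find("is-published")
    if node is None:
        node = ET.SubElement(root, "is-published")
    node.text = "true"
    return ET.tostring(root, encoding="unicode")
```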
The attached file is placed on the Clarity LIMS server, in the /opt/gls/clarity/customextensions folder.
The example code is provided for illustrative purposes only. It does not contain sufficient exception handling for use 'as is' in a production environment.
publishFilesToLabLink.py:
publishFilesToLabLink_v2.py:
Often, a protocol step will have just a single 'next action' that is required to continue the workflow. In such cases, it can be desirable to automatically set the default action of the next step.
This example script shows how this task can be achieved programmatically.
The step is configured to invoke a script that sets the default next action when you exit the Record Details screen of the step.
The automation command is configured to pass the following parameters to the script:
An example of the full syntax to invoke the script is as follows:
When the lab scientist exits the Record Details screen, the script is invoked and the following message displays:
If the script completes successfully, the LIMS displays the default next action. If the current step is the final step of the protocol, it is instead marked as complete:
At this point the lab scientist is able to manually intervene and reroute failed samples accordingly.
NOTE: it is not possible to prevent the user from selecting a particular next-step action. However, it is possible to add a validation script that checks the next actions that have been selected by the user against a list of valid choices.
If the selected next step is not a valid choice, you can configure the script such that it takes one of the following actions:
Replaces the next steps with steps that suit your business rules.
Issues a warning, and prevents the step from completing until the user has changed the next step according to your business rules.
The main method in the script is routeAnalytes(). The method in turn carries out several operations:
The actions resource of the current protocol step is investigated.
The possible next step(s) are identified.
If there is no next step, the current step is set to 'Mark protocol as complete.'
If there are multiple next steps, the first is used to set the next action.
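The decision logic in the last two bullets can be sketched as follows. The action keywords 'complete' and 'nextstep' reflect our understanding of the actions schema and should be checked against your server's API version:

```python
def choose_action(next_step_uris):
    """Pick the default next action from the step's configured transitions."""
    if not next_step_uris:
        # Final step of the protocol: mark the protocol as complete.
        return {"action": "complete"}
    # One or more possibilities: take the first configured next step.
    return {"action": "nextstep", "step-uri": next_step_uris[0]}
```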
The attached file is placed on the LIMS server, in the /opt/gls/clarity/customextensions folder.
The example code is provided for illustrative purposes only. It does not contain sufficient exception handling for use 'as is' in a production environment.
setDefaultNextAction.py:
There are often cases where empty containers are received and added into Clarity LIMS before being used in a protocol. This application example describes how to use the API to place samples into existing containers automatically. The application uses a CSV file that describes the mapping between the sample and its destination container.
Furthermore, the API allows accessioning into multiple container categories, something that is not possible through the web interface.
If you use Python version 2.6.x, you must install the argparse package. Python 2.7 and later include this package by default.
Also make sure that you have the latest glsapiutil.py Python API library on the Clarity LIMS server, in the /opt/gls/clarity/customextensions folder. You can download the latest glsapiutil library from our GitHub page.
Check the list of allowed containers for the step and make sure that all expected container categories are present. The API cannot place samples into containers that are not allowed for the step!
The suggested input format is a four-column CSV with the following columns:
Sample Name, Container Category, Container Name, Well Position
The sample name should match the name as shown in the Ice Bucket/Queue screen.
First, make sure that the Step Setup screen has been activated and is able to accept a file for upload:
Assuming the file is `compoundOutputFileLuid0`, the EPP / automation command line would be structured as follows:
The automation should be configured to trigger automatically when the Placement screen is entered.
NOTE: The attached Python script uses the prerelease API endpoint (instead of v2), which allows placement of samples into existing containers.
The script performs the following operations:
Parses the file and creates an internal map (Python dict) between sample name and container details:
Key: sample name
Value: (container name, well position) tuple
Retrieves the URI of each container.
Accesses the step's 'placements' XML using a GET request.
Performs the following modifications to the XML:
Populates the <selected-containers> node with child nodes for each retrieved container.
Populates each <output> artifact with a <location> node with the container details and well position.
PUTs the placement XML back to Clarity LIMS.
After the script runs, the Placement screen should show the placements, assuming there were no problems executing the script.
The attached script also contains some minimal bulletproofing for the following cases:
Container was not found.
Container is not empty.
Well position is invalid.
Sample in the ice bucket does not have a corresponding entry in the uploaded file.
Sample in the uploaded file is not in the ice bucket.
In all cases, the script reports an error and does not allow the user to proceed.
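The parsing stage of the script can be sketched as follows. This is a minimal illustration rather than the attached script itself; the function name and the duplicate-name check are assumptions:

```python
import csv

def parse_placement_file(path):
    """Build a map of sample name -> (container name, well position).

    Raises ValueError on a duplicate sample name so the problem can be
    reported to the user before any placements are attempted.
    """
    placements = {}
    with open(path, newline="") as fh:
        reader = csv.reader(fh)
        next(reader)  # skip the header row
        for sample, category, container, well in reader:
            sample = sample.strip()
            if sample in placements:
                raise ValueError("Duplicate sample in file: %s" % sample)
            placements[sample] = (container.strip(), well.strip())
    return placements
```

The resulting dict is then used to look up each queued sample's destination container and well.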
placeSamplesIntoExistingContainers.py:
In some circumstances, it can be desirable to automate the initiation of a step in Clarity LIMS. In this scenario, the step is executed without any user interaction; for example, a liquid-handling robot drives the step to completion. This example provides a solution that allows for automatic invocation of steps via the API.
Before we can invoke a step, we must first employ the queues endpoint.
Every step displayed in the Clarity LIMS web interface has an associated queue, the contents of which can be queried. The image below shows the samples queued for each step in the Nextera XT Library Prep protocol.
For this example, we investigate the queue for the Step 1 - Tagment DNA (Nextera XT DNA) step.
Step 1: Find the Stage ID
First, we must find the stage associated with the step. We query the configuration/workflows resource and home in on the Nextera XT for MiSeq protocol:
From the XML returned, we can see that the Tagment DNA (Nextera XT DNA) step has an associated stage, with an ID of 691:
Step 2: Find the Step ID
If we now query this stage ID, we see something similar to the following:
We now have the piece of information we need: namely the ID 567 that is associated with this step.
Step 3: Query the Queues Resource
We can use this ID to query the queues resource, which provides us with something similar to the following:
This result matches the information displayed in the Clarity LIMS web interface. In the next image, we can see the derived samples awaiting the step.
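Retrieving the queue contents can be sketched in Python as follows. The hostname and credentials are placeholders, and `parse_queue` is a hypothetical helper; the attached scripts use glsapiutil rather than raw urllib:

```python
import base64
import urllib.request
import xml.etree.ElementTree as ET

def parse_queue(xml_text):
    """Extract artifact URIs from a que:queue XML document."""
    root = ET.fromstring(xml_text)
    # Match on the local tag name so this works whether or not the
    # <artifact> elements carry a namespace prefix.
    return [el.get("uri") for el in root.iter()
            if el.tag.split("}")[-1] == "artifact" and el.get("uri")]

def queued_artifacts(base_uri, queue_id, username, password):
    """GET .../queues/<id>; the queue shares its ID with the step config."""
    token = base64.b64encode(("%s:%s" % (username, password)).encode()).decode()
    req = urllib.request.Request("%s/queues/%s" % (base_uri, queue_id))
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req) as resp:
        return parse_queue(resp.read().decode())
```

The list of URIs returned here becomes the input list for the step-creation POST described below.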
Now that we have the contents of the queue, starting the step programmatically is quite simple.
All that is required is a POST to the steps API endpoint. The XML input payload to the POST request will take the following form:
In the Clarity LIMS web interface, two pieces of evidence indicate that the step has been initiated:
The partially completed step is displayed in the Work in Progress area.
The Recent Activities area shows that the protocol step was started.
The XML payload POSTed to the steps resource is quite simple in nature. In fact there are only three variables within the payload:
1. The step to be initiated:
2. The type of output container to be used (if appropriate):
3. The URI(s) of the artifact(s) on which the step should be run, along with the number of replicates the step needs to create (if appropriate):
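Putting those three variables together, the payload can be assembled programmatically. The following is a minimal sketch; the configuration URI, container type, and artifact URIs shown in the usage are placeholders:

```python
# Skeleton of the stp:step-creation payload POSTed to the steps resource.
STEP_CREATION = """<stp:step-creation xmlns:stp="http://genologics.com/ri/step">
  <configuration uri="{step_config_uri}"/>
  <container-type>{container_type}</container-type>
  <inputs>
{inputs}
  </inputs>
</stp:step-creation>"""

def build_step_creation(step_config_uri, container_type, artifact_uris, replicates=1):
    """Fill in the three variables: step config, container type, inputs."""
    inputs = "\n".join(
        '    <input uri="%s" replicates="%d"/>' % (uri, replicates)
        for uri in artifact_uris
    )
    return STEP_CREATION.format(step_config_uri=step_config_uri,
                                container_type=container_type,
                                inputs=inputs)
```

POSTing the resulting XML to the .../api/v2/steps endpoint initiates the step for the listed artifacts.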
The Clarity LIMS interface offers tremendous flexibility when placing the outputs of a protocol step into new containers. However, if your protocol step always places samples using the plate map of the input plate, it makes sense to automate sample placement.
This example provides a script that allows sample 'autoplacement' to occur. It also describes how the script can be triggered, and what the lab scientist sees when the script is running.
For an example script that automates sample placement using multiple plates, see the example.
In this example, samples are placed according to the following logic:
The step produces one output sample for every input sample.
The output samples are placed on a 96 well plate.
Each occupied well on the source 96 well plate populates the corresponding well on the destination 96 well plate.
In this example, the step is configured to invoke the script on entry to the Sample Placement screen.
The EPP / automation command is configured to pass the following parameters:
An example of the full syntax to invoke the script is as follows:
When the script has completed, the rightmost Placed Samples area of the Placement screen will display the container of auto-placed samples:
At this point the lab scientist can review the constituents of the container, and complete the step as normal.
The main method in the script is autoPlace(). This method in turn carries out several operations:
A call to createContainer() prompts the creation of the destination 96 well plate.
The method harvests enough information that the subsequent code can retrieve the required objects using the 'batch' API operations. Additional code builds and manages the cache of objects retrieved in the batch operations:
cacheArtifact()
prepareCache()
getArtifact()
The cached analytes are accessed and the source well to which each analyte maps is determined.
Output placement can then be set. This information is presented in XML that can be POSTed back to the server in the correct format required for the placements resource.
After the analytes have been processed, the placements XML is further supplemented with required information, and POSTed to the ../steps/<stepID>/placements API resource.
Finally, a meaningful message is reported back to the user via the ../steps/<stepID>/programstatus API resource.
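The well-mirroring logic at the heart of autoPlace() can be sketched as follows. The helper builds only the <output-placements> fragment of the placements payload; the function itself and its inputs are illustrative, not the attached script:

```python
def build_placements(analyte_wells, destination_uri):
    """Mirror each source well onto the destination 96-well plate.

    analyte_wells maps an output artifact URI to its input's well
    position (for example 'A:1'); the same position is reused on the
    new plate, implementing the one-to-one plate-map placement.
    """
    rows = []
    for artifact_uri, well in sorted(analyte_wells.items()):
        rows.append(
            '<output-placement uri="%s">'
            '<location><container uri="%s"/><value>%s</value></location>'
            '</output-placement>' % (artifact_uri, destination_uri, well)
        )
    return "<output-placements>%s</output-placements>" % "".join(rows)
```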
The attached file is placed on the Clarity LIMS server, in the /opt/gls/clarity/customextensions folder.
The HOSTNAME global variable must be updated so that it points to your Clarity LIMS server.
The example code is provided for illustrative purposes only. It does not contain sufficient exception handling for use 'as is' in a production environment.
autoplaceSamplesDefault.py:
This method supersedes the use of the processes API endpoint.
The capacity for completing a step programmatically, without having to open the BaseSpace Clarity LIMS web interface, allows for rapid validation of protocols. This method results in streamlined workflows for highly structured lab environments dealing with high throughput.
This example uses the /api/v2/steps endpoint, which allows for more controlled execution of steps. In contrast, a process can be executed using the api/v2/processes endpoint with only one POST. This ability is demonstrated in the example.
The Clarity LIMS API allows each aspect of a step to be completed programmatically. Combining the capabilities of the API into one script allows for the completion of a step with one click.
This example was created for non-pooling, non-indexing process types.
The script accepts the following parameters:
An example of the full syntax to invoke the script is as follows:
The script contains several hard coded variables, as shown in the following example.
step_config_uri is the stage that is automatically completed. Because this script starts the step, no step limsid is needed as an input parameter. After the script begins the step, it gathers the step limsid from the API's response to the step-creation POST.
The main() method in the script carries out the following operations:
startStep()
addLot()
addAction()
addPlacement()
advanceStep()
Each of these functions creates an XML payload and interacts with the Clarity LIMS API to complete an activity that a lab user would be doing in the Clarity LIMS interface.
This function creates a 'stp:step-creation' payload.
As written, the script includes all the analytes in the queue for the specified stage.
This function creates a 'stp:lots' payload. This may be skipped if the process does not require reagent lot selection.
This function creates a 'stp:actions' payload. As written, all output analytes are assigned to the same 'next-action'. To see the options available as next actions, see the REST API documentation: Type action-type:
NOTE: This example only supports the following next-actions: 'nextstep', 'remove', 'repeat'.
This function creates a 'stp:placements' payload.
In this example, it is not important where the artifacts are placed, so the analytes are assigned randomly to a well location.
This function relies on the createContainer function, since a step producing replicate analytes may not create enough on-the-fly containers to place all of the output artifacts.
This function creates a 'stp:placements' payload. POSTing this payload to steps/{limsid}/advance is the API equivalent of moving to the next page of the GUI, with the final advance post completing the step.
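The advance call itself can be sketched as follows. The fetch callable is injected so the HTTP layer (here, a Basic-auth urllib wrapper) could be swapped for glsapiutil; this is a sketch of the GET-then-POST pattern, not the attached script:

```python
import base64
import urllib.request

def http_basic(username, password):
    """Return a fetch(uri, data=None) callable using HTTP Basic auth."""
    token = base64.b64encode(("%s:%s" % (username, password)).encode()).decode()
    def fetch(uri, data=None):
        req = urllib.request.Request(uri, data=data)
        req.add_header("Authorization", "Basic " + token)
        if data is not None:  # urllib POSTs when a body is supplied
            req.add_header("Content-Type", "application/xml")
        with urllib.request.urlopen(req) as resp:
            return resp.read()
    return fetch

def advance_step(step_uri, fetch):
    """GET the step's current XML, then POST it to .../advance.

    Each successful POST moves the step to its next state; advancing a
    step that is in its final state completes the step.
    """
    step_xml = fetch(step_uri)                     # save the current state
    return fetch(step_uri + "/advance", step_xml)  # POST it back to advance
```

For example, `advance_step("https://.../api/v2/steps/24-1234", http_basic(user, pw))` advances step 24-1234 by one state.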
There is a known bug with the advance endpoint that prevents a complete end-to-end programmatic progression through a pooling step.
You are running a version of Python that is supported by Clarity LIMS, as documented in the Clarity LIMS Technical Requirements.
The attached file is placed on the LIMS server, in the /opt/gls/clarity/customextensions folder.
The HOSTNAME global variable must be updated so that it points to your Clarity LIMS server.
The example code is provided for illustrative purposes only. It does not contain sufficient exception handling for use 'as is' in a production environment.
autocomplete-wholestep.py:
Samples progressing through workflows can branch off and must be directed to different workflows or stages within a workflow.
For example, it may not be known at the initiation of a workflow whether a sample is to be sequenced on a HiSeq or MiSeq; rerouting the derived samples may then be necessary.
This example provides the user with the opportunity to route samples individually to the HiSeq, MiSeq, or both stages from the Record Details screen.
The step is configured to display two checkbox analyte UDFs / derived sample custom fields. The fields are used to select the destination workflow/stages for each derived sample. You can choose to queue the sample for HiSeq, MiSeq, or both.
In this example, you select the following:
Two samples to be queued for HiSeq
Two samples for MiSeq
Two that are not routed
Two samples for both HiSeq and MiSeq
The script accepts the following parameters:
An example of the full syntax to invoke the script is as follows:
On the Record Details screen, you use the analyte UDF / derived sample custom field checkboxes to decide to which workflow/stage combination each derived sample is sent.
The first important piece of information required is the URI of the destination stage.
A stage URI can change across LIMS instances (such as switching from Dev to Prod), even when the workflows are identical. Therefore, the script gathers the stage URI from the workflow and stage names.
The main method in the script is routeAnalytes. This method in turn carries out several operations:
Gathers the information for the process / master step that triggered the script, including output analytes.
For each analyte, evaluates which UDFs / custom fields have been set, and adds the analyte to a list of analytes to route.
Creates the XML message for each stage.
Does a POST to the REST API in order to add the analytes to the queue in Clarity LIMS.
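The XML message built in step 3 can be sketched with a small helper. The element and namespace names follow the routing resource; the grouping-by-stage logic is an assumption about how the attached script organizes its output:

```python
def build_routing(stage_to_artifacts):
    """Build an rt:routing payload assigning artifacts to stages.

    stage_to_artifacts maps a stage URI to the list of artifact URIs
    to queue for that stage; one <assign> block is emitted per stage.
    """
    assigns = []
    for stage_uri, artifact_uris in sorted(stage_to_artifacts.items()):
        arts = "".join('<artifact uri="%s"/>' % u for u in artifact_uris)
        assigns.append('<assign stage-uri="%s">%s</assign>' % (stage_uri, arts))
    return ('<rt:routing xmlns:rt="http://genologics.com/ri/routing">%s'
            '</rt:routing>' % "".join(assigns))
```

The resulting document is POSTed once, so samples destined for HiSeq, MiSeq, or both are all routed in a single request.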
Modifications
This script can be modified to look for a process level UDF (master step custom field), in which case all outputs from the step would be routed to the same step.
This example also serves to demonstrate a more general concept. Routing artifacts is valuable in any situation where a sample needs to be queued for a stage outside of the usual order of a workflow - or even routing newly submitted samples to the first stage in a workflow.
You are running a version of Python that is supported by Clarity LIMS, as documented in the Clarity LIMS Technical Requirements.
The attached file is placed on the LIMS server, in the /opt/gls/clarity/customextensions folder.
The example code is provided for illustrative purposes only. It does not contain sufficient exception handling for use 'as is' in a production environment.
Samples can be inadvertently duplicated in the next step. This duplication occurs if:
The sample is being routed at the last step of a protocol, and;
The action of next steps is Mark Protocol as Complete.
This duplication is due to:
The next step routing the artifact to its default destination, and;
The script routing the same artifact.
The solution here is to set the default next steps action to Remove from Workflow instead. This solution can be automated using Lab Logic Toolkit or the API.
Route_to_HiSeq_MiSeq.py:
The Python API Library (glsapiutil.py) is placed on the Clarity LIMS server, in the /opt/gls/clarity/customextensions folder. You can download the latest glsapiutil library from our GitHub page.
Clarity LIMS v6.x automation trigger configuration
If the POST operation was successful, the API will return XML of the following form (for details, see section):
When the lab scientist enters the Sample Placement screen, a message similar to the following appears:
The Python API Library (glsapiutil.py) is placed on the Clarity LIMS server, in the /opt/gls/clarity/customextensions folder. You can download the latest glsapiutil library from our GitHub page.
This function advances the current-state for a step. The current-state is an attribute that is found at the /api/v2/steps/{limsid} endpoint. It is a representation of the page that you see in the user interface. For more information, see and search for the REST API documentation relating to the /{version}/steps/{limsid}/advance endpoint.
The Python API Library (glsapiutil.py) is placed on the Clarity LIMS server, in the /opt/gls/clarity/customextensions folder. You can download the latest glsapiutil library from our GitHub page.
For more information about how to use the artifact/route endpoint, see .
The Python API Library (glsapiutil.py) is placed on the Clarity LIMS server, in the /opt/gls/clarity/customextensions folder. You can download the latest glsapiutil library from our GitHub page.
-u | The username of the current user (Required)
-p | The password of the current user (Required)
-s | The URI of the step that launches the script - the {stepURI:v2} token (Required)
-d | The display name of the sequencing step (Required)
-u | The username of the current user (Required)
-p | The password of the current user (Required)
-s | The URI of the step that launches the script (Required)
-u | The username of the current user (Required)
-p | The password of the current user (Required)
-s | The URI of the step that launches the script (Required)
-a | The action, or mode, that the script runs under (Required). Accepted values are START (which completes the current step and starts the next) or COMPLETE (which completes the current step)
-p, --password | The password of the current user (Required)
-s, --stepURI | The URI of the step that launches the script - the {stepURI:v2} token (Required)
-l, --log | The path, or limsid, of the log file - the {compoundOutputFileLuidN} token (Required)
-t, --template | The path to the template file (Required)*
-r, --template_string | A string containing the template information (Required)*
--input | Uses the input artifacts instead of output artifacts. Default = False
-u | The username of the current user (Required)
-p | The password of the current user (Required)
-l | The limsid of the process invoking the script (Required)
-s | The URI of the step that launches the script (Required)
-w | The name of the destination workflow (Required)
-g | The name of the desired stage within the workflow (Required)
-l | The limsid of the process invoking the script (Required)
-u | The username of the current user (Required)
-p | The password of the current user (Required)
-s | The URI of the protocol step that launches the script - the {stepURI:v2:http} token (Required)
-l | The limsid of the process invoking the script (Required) |
-u | The username of the current user (Required) |
-p | The password of the current user (Required) |
-s | The URI of the step that launches the script - the {stepURI:v2:http} token (Required) |
-u | The username of the current user (Required) |
-p | The password of the current user (Required) |
-s | The URI of the step that launches the script - the {stepURI:v2} token (Required) |
-u | The username of the current user (Required) |
-p | The password of the current user (Required) |
-s | The URI of the step that launches the script (Required) | The {stepURI} token |
-a | Sets the next action to a fixed value. (Optional) | This is for advanced use, for example, when you would like to set the next action to a fixed value — 'repeat step', 'remove from workflow', and so on. |
-l | The luid of the process invoking the script (Required) |
-u | The username of the current user (Required) |
-p | The password of the current user (Required) |
-s | The URI of the step that launches the script - the {stepURI:v2:http} token (Required) |
-u | The username of the current user (Required) |
-p | The password of the current user (Required) |
-s | The URI of the step that launches the script - the {stepURI:v2:http} token (Required) |
-u | The username of the current user (Required) |
-p | The password of the current user (Required) |
-s | The URI of the protocol step that launches the script - the {stepURI:v2:http} token (Required) |
A key reason to track samples is to monitor their quality. In Clarity LIMS, samples are flagged with a check mark to indicate good quality (QC Pass) or an X to indicate poor quality (QC Fail).
There are many ways to determine quality, including concentration and DNA 260/280 light absorbance measurements. This example uses a tab-separated value (TSV) results file from a Thermo NanoDrop Spectrophotometer to:
Record concentration and 260/280 measurements, and;
Set quality flags in the LIMS.
In this example, once the script is installed, the user simply runs and records a step and imports the results file. The EPP / automation script does the rest of the work, by reading the file, capturing the measurements in Clarity LIMS, and setting the QC flags.
QC file formats
As spectrophotometers are used for many measurements, within many lab protocols, file formats can vary depending on the instrument software settings. Check your instruments for specific file formats and use the example below to get familiar with the QC example scripts.
The user selects samples and runs the QC Example step.
In the Operations Interface (LIMS v4 & earlier), the user sets the required minimum concentration and/or 260/280 lower and upper bounds.
The QC Example process creates an output artifact (shared ResultsFile) called QC Data File. This file is shown in the Sample Genealogy and Outputs panes. The "!" icon indicates that this entry is a placeholder for a file.
The user loads samples onto the spectrophotometer and follows the instrument's protocol for QC measurement.
After the measurements are complete, the user exports the TSV results file created by the spectrophotometer, using the NanoDrop software.
The user imports the TSV file into the LIMS: As Clarity LIMS parses the file, the measurements are captured and stored as output user-defined fields (UDFs). The QC Pass/Fail flags are then set on the process inputs. The flags are set according to whether they meet the concentration and/or 260/280 bounds specified in Step 2.
A TSV file is created by the NanoDrop spectrophotometer. The specific file used in this example is shown below.
After the TSV file is imported and attached to the QC Example process in Clarity LIMS, the file attachment event automatically triggers a second EPP script. This script is part of a second process, called QC Example (file handling). You can see the process in the Sample Genealogy pane:
The example uses the TSV file Sample ID value to locate the plate and well location to which the QC flag is to be applied. In the example file shown in Step 1, the container is QCExamplePlate.
The location A01 maps to the sample on the first well of the container named QCExamplePlate.
The data contained in the TSV file is captured in Clarity LIMS and can be viewed on the Details tab.
Notice that the user ran only one process (QC Example), but two processes were recorded. The first EPP script created the second process, QC Example (file handling), using the REST API. Using REST to create a process is described in the Running a Process Cookbook example.
The NanoDrop QC algorithm in the script compares the concentration and the 260/280 ratio for each sample in the imported TSV file. The values are entered into the process UDFs.
A QC Fail flag is applied:
If the concentration of the sample is less than specified, or;
If its 260/280 ratio is outside the bounds given when running the process.
A QC Pass flag is applied when the sample has values inside the parameters provided.
Samples with no associated values in the TSV file are unaffected. In this example, the minimum concentration is set to 60. A QC Fail flag is applied to sample-2 because its concentration level does not meet the minimum value specified.
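The decision logic can be summarized in a few lines. Note that the attached script is Groovy; the flag strings and threshold handling below are a simplified Python sketch of the same algorithm:

```python
def qc_flag(concentration, ratio, min_conc, ratio_low, ratio_high):
    """Return a QC flag ('PASSED' or 'FAILED') for one sample.

    A sample fails when its concentration is below the required minimum
    or its 260/280 ratio falls outside the configured bounds; the
    thresholds come from the fields entered when the process is run.
    """
    if concentration < min_conc:
        return "FAILED"
    if not (ratio_low <= ratio <= ratio_high):
        return "FAILED"
    return "PASSED"
```

With a minimum concentration of 60, a sample at concentration 55 fails even if its 260/280 ratio is within bounds, matching the sample-2 behavior described above.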
Automation/EPP can be used to process files when they are attached to the LIMS. Using file attachment triggers is sometimes called data analysis pipelining. Basically, a series of analysis steps across a chain of processes is triggered from one file attachment.
Download the zip file to the server; on a non-production server use the gls user account.
Unzip the file to the following directory: /opt/gls/clarity/Applications. The contents of the zip file will be installed within that directory, to CookBook/NanoDropQC/.
Next, unzip the config-slicer-<version>-deployment-bundle.zip in /opt/gls/clarity/Applications/CookBook/NanoDropQC/. Replace <version> with the version number of the included config-slicer.
With Clarity LIMS running, run the following server command-line call to import the required configuration into the server (i.e., the process, sample, and fields used by the scripts):
To confirm that the example is correctly installed, follow the steps below to simulate the recording of QC information by a user in the lab:
Start the Clarity LIMS Operations Interface client.
Create a project, and submit a 96 well plate named QCExamplePlate full of samples.
Select the samples and run the QC Example process.
Click the Next and Done buttons to complete the wizard.
When the process completes, in the process summary tab's Input/Output Explorer, you'll see the shared output file placeholder (QC Data File) in the Outputs pane and in the Sample Genealogy. Right-click this placeholder and click Import.
Import the example TSV nanodrop-qc-example.tsv file provided in the zip file.
Wait for the QC flags to become visible on a subset of the samples used as inputs to the process (those located from A:1 to A:9).
You can modify the example script to suit your lab's QC experimental methods and calculations. For example, you may want to consider phenotypic information or extra sample data recorded in the LIMS. Two modifications to the example are described below.
The example script writes the measurements into user-defined fields (UDFs) associated with outputs of the process. This allows multiple measurements to be recorded for one sample, by running the process multiple times. Each time the process is run on an input sample, a new process with new output results is recorded in the LIMS.
You may instead want to write the measurements into UDFs associated with the input samples. For example, you may want to keep the data records simple: the greater the number of outputs recorded in the LIMS, the more confusing it becomes for the user to upload files and navigate results. Setting the fields on the inputs provides a single 'golden' value.
To change the configuration and script to set QC flags and field values on inputs:
Change the code in NanoDrop.groovy so that UDFs are set on inputs instead of outputs. That is, replace this line:
with the following:
Since you are no longer changing the outputs, you can comment or delete the line where the output is saved:
Run the QC Example (preparation) process that was included in the configuration package you imported into your system.
The process wizard provides the option for you to either generate a new plate (container) or select a preexisting plate to hold the process outputs.
Note: The results of this process will be placed into a plate that is different from the one in which you originally placed the samples.
Run the QC Process on the plate created in the previous step.
Edit the nanodrop-qc-example.tsv file to reflect the name of this plate (remember that the Sample ID column in the NanoDrop QC data file depends on the plate name):
To do this, for each row, replace QCExamplePlate with the name of the plate that now holds the outputs of the process (process-generated plate names, for example, are in a format similar to "27-124").
Import the modified nanodrop-qc-example.tsv into the result file placeholder generated by the QC Example process.
Wait for the QC flags to update on the inputs for which the nanodrop-qc-example.tsv file has measurements.
This time, instead of the measurements appearing in the outputs of the QC Example process, they will instead be collected as UDF values on the inputs of that process. To see these measurements, select the outputs of the parent process, QC Example (preparation), and view the output Details tab.
Most labs use multiple factors to determine sample QC flags. These factors might be associated with the submitted sample, multiple instrument measurements, or even the type of project or sample.
To demonstrate how easy it is to aggregate multiple factors into the QC flag logic, a boolean field called Human is added to the sample configuration. The script logic is modified to only set flags for human samples.
To change the configuration and script to check for human samples:
Change the code in NanoDrop.groovy (where we loop through the input/output pairs adjusting QC flags/updating UDFs), so that we first ensure we are dealing with a Human sample.
To do this, change the loop at the end of the script from this:
to this:
Configure a checkbox UDF on Sample (this was done for you when you imported the configuration package provided in this application example).
Submit a new batch of samples on a 96 well plate.
Edit samples on wells from A:1 to A:3 so that the Human check box field is selected.
Run the QC Example process on all 96 samples in the plate.
In the nanodrop-qc-example.tsv file, update the Sample ID column with the correct plate name.
Import the modified nanodrop-qc-example.tsv into the result file placeholder generated by the QC Example process.
Wait for the QC flags to update on the inputs for which the NanoDrop QC data file has measurements.
Note that only the inputs A:1, A:2 and A:3 will have QC flags assigned. The other inputs, including those for which the NanoDrop QC data file has data, will be left untouched because of the selection criteria implemented.
Clarity LIMS v1 or later (API v2 r14)
Groovy 1.7.4 or later (expected location: /opt/gls/groovy/current/)
All prerequisites are preloaded if you install on a non-production server.
file-qc-2.0-bundle.zip:
The indexing of samples is often performed in patterns, based upon the location of the samples in the container.
This example shows how to automate the default placing of reagents on samples, based on their container position. This greatly reduces the amount of time spent on the Add Labels screen (LIMS v6.x) and also reduces user error.
In this example, reagent labels are assigned to samples in a predetermined pattern as the user enters the Add Reagents screen. This pattern is applied to all containers entering this stage.
The example AssignIndexPattern.groovy script is configured to run on the Adenylate Ends & Ligate Adapters (TruSeq DNA) 4.0 step.
The script accepts the following parameters:
An example command line is shown below:
NOTE: The location of Groovy on your server may be different from the one shown in this example. If this is the case, modify the script accordingly.
In the Clarity LIMS web interface, for the Adenylate Ends & Ligate Adapters (TruSeq DNA) 4.0 step (in the TruSeq DNA Sample Prep protocol), configure Automation as follows:
Clarity LIMS v6.x
Trigger Location: Add Labels
Trigger Style: Automatic upon entry
Assuming the user has added 96 samples and has reached the Adenylate Ends & Ligate Adapters (TruSeq DNA) 4.0 step:
The user transfers all 96 samples to a new 96-well plate and proceeds with the step.
When the user enters the Add Labels screen, the script is initiated. A message box alerts the user that a custom script is in progress.
Upon completion, the previously defined success message displays.
When the success message is closed, the Add Labels screen loads, and the pattern shown below is applied to the samples.
Once the script has processed the input and ensured that all the required information is available, we can start applying the reagents to our samples.
To begin, we need to define the reagents and pattern to apply.
The reagents can be stored by placing them in a Map of reagent names indexed by their respective numbers, i.e., 'AD030' indexed at 30.
The pattern can be stored as a List of Lists. This can be arranged as a visual representation of the pattern to be applied.
Once we have our reagents and pattern defined, we can start processing the samples:
We start by retrieving the node from the reagent setup endpoint. We use this node as a base for subsequent commands.
We then gather the unique output artifact URIs and retrieve the output artifacts using batchGET:
Next, we iterate through our list of output artifacts.
For each artifact, we determine its position and use its components to index our pattern. This allows us to determine which reagent should be placed on which sample.
Once we determine the reagent's name, we create a reagent-label node with a name attribute equal to the desired reagent name.
In the list of output-reagents in the reagent setup node, we find the output that corresponds to the output artifact that we are processing and add our reagent-label node to it. NOTE: We must strip off the state from our artifact's URI. The URIs stored in the step setup node are stateless and will not match the URI returned from our output artifact.
Once we have processed all of our output artifacts, we POST our modified setup node to the reagentSetup endpoint. This updates the default placement in the API.
We then define our success message to display to the user upon the script's completion.
Your configuration conforms to the script requirements documented above.
You are running a version of Groovy that is supported by Clarity LIMS, as documented in the Clarity LIMS Technical Requirements.
The attached Groovy file is placed on the LIMS server, in the following location: /opt/gls/clarity/customextensions
GLSRestApiUtils.groovy is placed in your Groovy lib folder.
You have imported the attached Reagent XML file into your system using the Config Slicer tool.
The example code is provided for illustrative purposes only. It does not contain sufficient exception handling for use 'as is' in a production environment.
Single Indexing ReagentTypes.xml:
AssignIndexPattern.groovy:
In the default configuration of Clarity LIMS, at the end of every step, the user is required to choose where the samples will go next - i.e. the 'next step'.
If samples in the lab follow a logical flow based on business logic, this is an unnecessary manual task. This example shows how to automate this next step selection, to reduce error and user interaction.
This example uses the Automatically Assign Next Protocol Step (Example) step, in the Automation Examples (API Cookbook) protocol. The example shows how to:
Automate the selection of a sample's Next Steps, as displayed on the Assign Next Steps screen of this step.
Use the Pooling sample UDF / custom field to determine the next step to which each sample is assigned.
The Automatically Assign Next Protocol Step (Example) step has two permitted Next Steps:
Confirmation of Low-plexity Pooling (Example)
Automated Workflow Assignment (Example)
Depending on the value of a sample's Pooling UDF / custom field, the sample's Next Step will default to one of the permitted next steps:
If the value of the Pooling UDF / custom field is any case combination of No or None, the sample's next step will default to Automated Workflow Assignment (Example).
Otherwise, the sample's next step will default to Confirmation of Low-plexity Pooling (Example).
Next step configuration (LIMS v4.x shown)
Automation is configured as follows:
Behavior: Automatically initiated
Stage of Step: On Record Details screen
Timing: When screen is exited
The script takes three basic parameters:
An example command line is shown below.
(Note: The location of groovy on your server may differ from the one shown in this example. If so, adjust the path accordingly.)
Assuming samples have been placed in the protocol and are ready to be processed, the user proceeds as normal:
Upon reaching the transition from the Record Details screen to the Assign Next Steps screen, the script is run. A message box alerts the user that a custom script is in progress.
Upon completion of the script, a custom success message is displayed.
Once the success message is closed and the screen has transitioned, the default next steps display for the samples.
Once the script has processed the input and ensured that all the required information is available, we can start to process the samples to determine their next steps.
First, we retrieve the next actions list:
This endpoint contains a list of the step's output analytes, and a link to its parent step configuration. In this case, we want to retrieve the step configuration so that we can collect the URIs of the expected next steps.
Once we have retrieved the step configuration, we iterate over its possible next steps, gathering their URIs and storing them by name in a Map.
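The name-to-URI map described above can be sketched as follows. The `<transition>` element and its `name` / `next-step-uri` attributes are modelled on the step configuration XML; the URIs below are placeholders. (The shipped script is Groovy; this is an equivalent Python sketch.)

```python
import xml.etree.ElementTree as ET

def next_step_uris(config_xml):
    """Return a dict mapping each permitted next step's name to its URI,
    collected from the <transition> elements of the step configuration."""
    root = ET.fromstring(config_xml)
    return {t.get("name"): t.get("next-step-uri")
            for t in root.iter("transition")}

# Placeholder configuration XML for illustration only.
config = (
    '<step-configuration><transitions>'
    '<transition name="Automated Workflow Assignment (Example)"'
    ' next-step-uri="http://lims/api/v2/configuration/protocols/1/steps/2"/>'
    '<transition name="Confirmation of Low-plexity Pooling (Example)"'
    ' next-step-uri="http://lims/api/v2/configuration/protocols/1/steps/3"/>'
    '</transitions></step-configuration>'
)
steps = next_step_uris(config)
```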
Once we have collected the URIs of our destination steps, we can start analyzing each sample to determine what its default should be.
For each possible 'next-action', we retrieve the target artifact, which then enables us to retrieve that artifact's parent sample.
We then retrieve the value of the sample's Pooling UDF / custom field, if it exists. If it does not exist, a default value is used.
To set the next step, we set the step-uri attribute of each next-action node to the URI of the expected destination step.
We also increment counters, so that we can report to the user what actions were taken on the given samples.
Once this is done, we perform an httpPUT on the action list, adding the changes to the API and allowing our defaults to be set.
Finally, we define the successful output message to the user. This allows the user to check the results.
You are running a version of Groovy that is supported by Clarity LIMS, as documented in the Clarity LIMS Technical Requirements.
The attached Groovy file is placed on the LIMS server, in the folder /opt/gls/clarity/customextensions
GLSRestApiUtils.groovy is placed in your Groovy lib folder.
A single-line text sample UDF named Pooling has been created.
The example code is provided for illustrative purposes only. It does not contain sufficient exception handling for use 'as is' in a production environment.
NextStepAutomation.groovy:
The Clarity LIMS interface offers tremendous flexibility in placing the outputs of a protocol step into new containers.
Sometimes a step must produce multiple containers of differing types (for example, a 96-well plate and a 384-well plate). Such an interaction is not possible without using the Clarity LIMS API.
This example provides a script that creates a 96-well plate and a 384-well plate in preparation for subsequent manual placement of samples.
In this example, containers are created according to the following logic:
A 96-well plate is produced along with a 384-well plate.
Neither plate has a user-specified name. The LIMS names them using the LIMS ID of the plates.
The step is configured:
To allow samples to be placed in 96- or 384-well plates.
To invoke the script as soon as the user enters the step's Sample Placement screen.
The EPP / automation command is configured to pass the following parameters:
An example of the full syntax to invoke the script is as follows:
When the lab scientist enters the Sample Placement screen, the rightmost Placed Samples area displays the first container created (1 of 2). Selecting the second container displays it instead (2 of 2).
The lab scientist manually places the samples into the containers and completes the protocol step as normal.
Two calls to createContainer() create the destination 96-well and 384-well plates.
To create custom containers, supplement the createContainer() method with the configuration details that apply to your instance of Clarity LIMS.
To give a container a specific name, pass a non-empty string as the second argument to the createContainer() method; otherwise, the LIMS names the container using its LIMS ID.
An XML payload is created containing only the details of the created containers, ready for the user to record the actual placements in the Clarity LIMS user interface.
This information is POSTed back to the server, in the format required for the placements resource.
A meaningful message is reported back to the user via the ../steps/<stepID>/programstatus API resource.
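A container-creation payload of the kind createContainer() POSTs can be sketched as below. The con:container namespace is the one used by the containers resource, but the container type names and URIs here are placeholders: look up the real ones from /api/v2/containertypes on your instance.

```python
import xml.etree.ElementTree as ET

CON_NS = "http://genologics.com/ri/container"

def container_xml(type_name, type_uri, name=""):
    """Build an XML body for POST .../api/v2/containers. An empty name
    lets the LIMS name the container by its LIMS ID."""
    ET.register_namespace("con", CON_NS)
    root = ET.Element("{%s}container" % CON_NS)
    ET.SubElement(root, "name").text = name
    ET.SubElement(root, "type", {"uri": type_uri, "name": type_name})
    return ET.tostring(root, encoding="unicode")

# Placeholder container type URIs; substitute your instance's values.
plate96 = container_xml(
    "96 well plate", "http://lims/api/v2/containertypes/1")
plate384 = container_xml(
    "384 well plate", "http://lims/api/v2/containertypes/5")
```

Calling the builder twice mirrors the two createContainer() calls described above, one payload per destination plate.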
The attached file is placed on the Clarity LIMS server, in the /opt/gls/clarity/customextensions folder.
The Python API Library (glsapiutil.py) is placed on the Clarity LIMS server, in the /opt/gls/clarity/customextensions folder. You can download the latest glsapiutil library from our GitHub page.
Update the HOSTNAME global variable so that it points to your Clarity LIMS server.
The example code is provided for illustrative purposes only. It does not contain sufficient exception handling for use 'as is' in a production environment.
createMultipleContainerTypes.py:
This example provides a script that parses lanebarcode.html files produced by demultiplexing. The script is written to be easily used with the out-of-the-box Bcl Conversion & Demultiplexing (HiSeq 3000/4000) protocol.
Result values are associated with a barcode sequence as well as lane.
Values are attached to the result file output in Clarity LIMS, with matching barcode sequence (index on derived sample input) and lane (container placement of derived sample input).
Script modifications may be needed to match the format of index in Clarity LIMS to the index in the HTML result file.
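The matching described above keys both sides on the pair (lane, barcode sequence). A minimal sketch of that lookup is shown below; the row field names and values are illustrative, not the script's actual variables:

```python
def build_lookup(rows):
    """Index parsed lanebarcode.html rows by (lane, barcode) so each
    result file output can be matched in constant time."""
    return {(row["lane"], row["barcode"]): row for row in rows}

# Placeholder rows as a parser might emit them from the HTML table.
rows = [
    {"lane": "1", "barcode": "ATCACG", "yield_mb": "12000"},
    {"lane": "2", "barcode": "CGATGT", "yield_mb": "11500"},
]
lookup = build_lookup(rows)
match = lookup[("1", "ATCACG")]
```

If the index format in Clarity LIMS differs from the HTML file (for example, dual indexes joined with a dash), the barcode key is the natural place to normalize it.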
The script accepts the following parameters:
An example of the full syntax to invoke the script is as follows:
All user-defined fields (UDFs) / custom fields must first be defined in the script. Within the UDF / custom field dictionary, the name of the field as it appears in Clarity LIMS (the key) must be associated with the corresponding field from the result file (the value).
The fields should be preconfigured in Clarity LIMS for result file outputs.
The UDF / custom field values can be modified before being brought into Clarity LIMS. In the following example, the value in megabases is modified to gigabases.
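That megabase-to-gigabase conversion amounts to a divide by 1000. A one-line sketch (the comma handling assumes the HTML reports thousands separators, which may not apply to your files):

```python
def mb_to_gb(value_mb):
    """Convert a megabase value such as '12,000' to gigabases."""
    return float(str(value_mb).replace(",", "")) / 1000.0

yield_gb = mb_to_gb("12,000")  # -> 12.0
```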
The script currently checks the flow cell ID for the projects in Clarity LIMS against the flow cell ID in the result file.
NOTE: The script will still complete and attach UDF / custom field values. You may wish to modify the script to not attach the field values if the flow cell ID does not match.
Your configuration conforms with the script's requirements, as documented in the Configuration section of this document.
You are running a version of Python that is supported by Clarity LIMS, as documented in the Clarity LIMS Technical Requirements.
The attached Python file is placed on the LIMS server, in the /opt/gls/clarity/customextensions folder.
The glsapiutil file is placed on the Clarity LIMS server, in the /opt/gls/clarity/customextensions folder.
The example code is provided for illustrative purposes only. It does not contain sufficient exception handling for use 'as is' in a production environment.
demux_stats_parser.py:
demux_stats_parser_4.py:
-i: The URI of the step that launches the script (Required). Passed via the {stepURI:v2:http} token, in the form http://<Hostname>/api/v2/steps/<ProtocolStepLimsid>
-u: The username of the API user (Required). Passed via the {username} token.
-p: The password of the API user (Required). Passed via the {password} token.

-u: The username of the API user (Required). Passed via the {username} token.
-p: The password of the API user (Required). Passed via the {password} token.
-i: The URI of the step that launches the script (Required). Passed via the {stepURI:v2:http} token, in the form http://<Hostname>/api/v2/steps/<ProtocolStepLimsid>

-l: The limsid of the process invoking the code (Required). Passed via the {processLuid} token.
-u: The username of the current user (Required). Passed via the {username} token.
-p: The password of the current user (Required). Passed via the {password} token.

-s: The URI of the step that launches the script (Required). Passed via the {stepURI:v2:http} token.
-u: The username of the current user (Required).
-p: The password of the current user (Required).
-o: The limsid of the result file artifact with the attached lanebarcode.html file (Required).

-s: The LIMS IDs of the individual result files (Required).
State on input and output URIs
To ensure you are working with the latest version of an artifact, do not specify a state when retrieving artifacts via the REST API. In the example NanoDrop.groovy script provided, the stripStateQuery closure strips the state from the input and output URIs as it iterates through the input-output-map.
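The stripStateQuery closure mentioned above is Groovy; an equivalent Python sketch using only the standard library looks like this (the URI below is a placeholder):

```python
from urllib.parse import urlparse, urlunparse

def strip_state_query(uri):
    """Return the stateless form of an artifact URI, so a GET always
    fetches the latest version of the artifact."""
    return urlunparse(urlparse(uri)._replace(query=""))

uri = "http://lims/api/v2/artifacts/2-1234?state=567"
```

Applying this to every URI in the input-output-map before issuing GETs guards against silently reading a stale artifact state.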