Receiving and Decrypting Cloud Backup Data

If necessary, the Clarity LIMS Support team can provide a backup of your Clarity LIMS data. The data are contained in an encrypted file, which can be downloaded from a secure SFTP server.

Prerequisites

To receive the backup data file, provide the Clarity LIMS Support team with a GNU Privacy Guard (GPG) public key.

For instructions on generating a GPG public key, see the following documentation:

  • For Microsoft® Windows®, see Creating a certificate.

  • For Linux® or Mac®, see Generating a new GPG key.
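On Linux or Mac, key generation and export can be sketched as follows. This is a minimal sketch only; the name, email, and output filename are placeholders, and the linked GPG documentation remains the authoritative reference.

```shell
# Sketch: generate a GPG key pair non-interactively (placeholder name/email).
gpg --batch --gen-key <<'EOF'
%no-protection
Key-Type: RSA
Key-Length: 4096
Name-Real: Lab Admin
Name-Email: lab.admin@example.com
Expire-Date: 0
%commit
EOF

# Export the public key in ASCII-armored form to send to the Support team.
gpg --armor --export lab.admin@example.com > clarity-backup-public.key
```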

Receiving and Decrypting your Backup Data

  1. After the Clarity LIMS Support team has received the GPG public key, they perform the following actions:

    • Create a backup file encrypted with the key.

    • Place the backup file on the LIMS SFTP server at sftp.clarity-lims.com.

Database Cleanup Procedure

After initial user training, but before the lab starts to use Clarity LIMS in production, a database cleanup is recommended. This process removes projects, samples, workflows, protocols, steps, and other artifacts that were created during training but are not needed for production.

Contact the Clarity LIMS Support team to schedule a time for this cleanup procedure.

hashtag
Preparation

In preparation for the cleanup, complete the following steps:
  • Delete all unwanted custom fields and master steps.

  • Set the status of all unwanted workflows to Archived. If there is an Archived workflow that you would like to keep, temporarily set its status to Active. Note the following:

    • Protocols that are part of an Archived workflow, or set of Archived workflows, will be deleted.

    • Protocols that belong to an Active or Pending workflow, in addition to an Archived workflow, will not be deleted.

After these steps are completed, a Clarity LIMS Technical Support Analyst (TSA) helps to perform the cleanup procedure. This procedure takes approximately 15–20 minutes to complete.

  • They also provide a username and password so you can access your data. The backups are added to the SFTP server weekly.

  • After downloading the backup file, there are several tools available to decrypt the data.

    • For Windows, use the gpg4win tool. For details, see the Gpg4win Compendium.

    • For Mac/Linux, use the gpg command on the command line. For details, see Encrypting and decrypting documents.


    Administration

    This section provides information and instructions to support Clarity LIMS administration tasks. Topics covered include the following:

    • Database Cleanup

    • LDAP Integration

    • Clarity LIMS Log Files

    • Password Management

    • Config Slicer Tool

    • Automation Worker Nodes

    If you require assistance with Clarity LIMS administration, contact the Illumina Support Team.

    Using the LDAP Checker Tool

    This section explains how to use the LDAP Checker tool, a script (ldap-checker.jar) that checks and reports on an LDAP configuration. Instructions for use are also provided in the README.txt file that accompanies the tool.

    The ldap-checker script is included with the Clarity LIMS installation and is available at the following location:

    /opt/gls/clarity/tools/ldap-checker

    The ldap-checker script performs numerous checks of the LDAP configuration and reports on any incorrect items found.

    Script Properties

    Point the script to one or more files containing (at a minimum) the database connection properties. Alternatively, set these properties from the command line.

    The script loads properties from the following sources, in the following order:

    1. Any JDBC properties files specified with -f (see the table for options).

      If multiple properties files specify the same property, the last file is used.

    2. Any Java system properties specified on the command line using -D<propertyName>=<propertyValue>.

      Properties specified on the command line are only checked if they do not appear in the properties files.

    3. The properties table in the database.

      The properties table is only checked if the same property is not already specified in a properties file or on the command line.

    After the script has the basic database connection properties, it loads further settings from the corresponding Clarity LIMS database.

    The following JDBC properties are required:

    • jdbc.driverClassName

    • jdbc.url

    • jdbc.username

    • jdbc.password
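For illustration, a minimal properties file covering these four required values might look like the following. The values are placeholders, and the JDBC URL assumes a PostgreSQL database.

```properties
jdbc.driverClassName=org.postgresql.Driver
jdbc.url=jdbc:postgresql://localhost:5432/clarityDB
jdbc.username=clarity
jdbc.password=secret
```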

    Options:

    -f, --files                          Property files to process

    -u, --users                          Usernames to check

    -h, --help                           Show usage information

    -D<propertyName>=<propertyValue>     Set a property on the command line

    Run the Script

    1. Change to the directory containing the ldap-checker tool:

      /opt/gls/clarity/tools/ldap-checker

    2. Run the script. To specify a properties file, use the following example:

      java -jar ldap-checker-<version>.jar -f database.properties

    Example Properties File

    The tool includes an example database.properties file. This example shows a properties file that is specified with the -f option.

    The following options are available:

    • Edit this file and use it.

    • Provide properties on the command line, using -D. For example:

      -Djdbc.url=<url to properties file>

    Examples

    Example 1: Using SSL

    Specify javax.net.ssl.trustStore and provide the path to the keystore:

      java -Djavax.net.ssl.trustStore=/path/to/keystore -jar ldap-checker-<version>.jar -f database.properties

    Example 2: Checking Users

    To check a set of specific users (even those that have not been provisioned), use the following command:

      java -jar ldap-checker-<version>.jar -u usertocheck -f database.properties

    Overriding Properties

    To override properties that are typically loaded from the properties table, use command-line system properties or one or more properties files.

    Using system properties (-D options must be specified before the -jar option):

      java -Djavax.net.ssl.trustStore=/path/to/keystore -Dldap.managerPass=mypassword -jar ldap-checker-<version>.jar -u usertocheck -f database.properties

    Using multiple properties files:

      java -jar ldap-checker-<version>.jar -u usertocheck -f database.properties custom-ldap.properties

    • In this example, custom-ldap.properties might resemble the following:

      ldap.managerDn=CN=GLS Admin,CN=Users,DC=gls,DC=lan
      ldap.managerPass=mypassword

    Audit Trail

    Clarity LIMS provides Audit Trail, a robust data-tracking system that allows for tracking of the following:

    • All user activity (ie, who did what, and when).

    • Every action that is written to the database.

    Audit Trail has two capture systems, Event Log and Detail Log:

    • Event Log—Records familiar BaseSpace Clarity LIMS user actions and presents this information in a format that is easy to read and understand.

    • Detail Log—Records exacting information about changes resulting from actions recorded in the Event Log, including both updated values and previous values.

    Enabling Audit Trail may result in a small performance hit due to the overhead of writing the entries to the database. It is recommended that you periodically archive the Audit Trail database so that it does not become too large.

    NOTE: Audit Trail is enabled by default. The Audit Trail feature is available to Enterprise customers only. To enable, validate, or disable Audit Trail on your Clarity LIMS cloud instance, contact Illumina Technical Support for assistance.



    Customize the Term Used for Projects

    You can configure an alias for the short and long, singular, and plural forms of the term Project, as displayed in the Clarity LIMS interface.

    hashtag
    Configuration Tool and Properties

    Renaming is achieved by configuring the following properties, using the omxprops-ConfigTool.jar tool at:


    /opt/gls/clarity/tools/propertytool 

    Properties

    • clarity.alias.project.short.singular - The short term for "Project"

    • clarity.alias.project.short.plural - The short term for "Projects"

    • clarity.alias.project.full.singular - The full term for "Project"

    • clarity.alias.project.full.plural - The full term for "Projects"

    Rules for aliases

    • clarity.alias.project.short.singular—Maximum of eight characters.

    • clarity.alias.project.short.plural—Maximum of nine characters.

    • If multiple words are used, capitalize each word (eg, Test Requests).

    Short form singular and plural aliases are truncated to eight characters and nine characters respectively. An ellipsis is used to indicate truncation.
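For example, aliasing "Project" to "Request" within the rules above might use values like these. The values are illustrative only; apply them with the omxprops property tool as described.

```properties
clarity.alias.project.short.singular=Request
clarity.alias.project.short.plural=Requests
clarity.alias.project.full.singular=Test Request
clarity.alias.project.full.plural=Test Requests
```

Note that "Request" (7 characters) and "Requests" (8 characters) stay within the 8- and 9-character limits, so no truncation occurs.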

    Upgrading a configuration package/manifest file for compatibility with Config Slicer v3.0.x

    If running Config Slicer v3.0.x against a configuration package or manifest file that was generated with a pre-3.0 version of Config Slicer, an error message displays.

    Scenario 1: Out-of-Date Configuration Package

    In this scenario, the error message resembles the following:

      Package was generated with an older version of config slicer incompatible with the current version (3.0.2).
      Please upgrade the configuration package (java -jar upgrade-config-slice.jar 'configuration_package.xml'),
      verify the results, and try again.

    SOLUTION:

    Upgrade the configuration package file by running upgrade-config-slice.jar against it.

    • This jar file should be located in the same directory as Config Slicer (/tools/config-slicer).

    • The upgrade script saves an upgraded copy of the configuration package file (to the same directory), which can be inspected and imported.

    Example:

      java -jar upgrade-config-slice.jar configuration_package.xml

    To upgrade a manifest file at the same time, add it to the script as a second argument:

      java -jar upgrade-config-slice.jar configuration_package.xml manifest.properties

    Scenario 2: Out-of-Date Manifest File with no Corresponding Configuration Package

    In this scenario, the error message resembles the following:

      Manifest was generated with an older version of config slicer incompatible with the current version (3.0.2).
      If you have the corresponding configuration package available the manifest can be automatically upgraded
      (java -jar upgrade-config-slice.jar configuration_package.xml 'manifest_file.properties'). Otherwise please
      regenerate the manifest with the current version of config slicer and try again.

    SOLUTION:

    This upgrade process is a little more involved:

    1. Extract the list of process types from the manifest file.

    2. Format them as parameters to Config Slicer's custom manifest generation for process types, and run the custom manifest generation.

      For example, assuming we extracted the process types Process Type 1 and Process Type 2 in step 1, the following command would be used:

        java -jar config-slicer.jar -o custom -s localhost -u admin -p pass -pt "Process Type 1" -pt "Process Type 2"

    3. Take the list of analyte UDFs and ResultFile UDTs from the manifest file that was generated and add them to the old manifest file.

    4. Add the following line of text to the top of the old manifest file:

        ConfigSlicerVersion=3.0-compatible

    Config Slicer Use Cases

    Create a Configuration Package File for Import to Another System (or for Backup Purposes)

    Use a configuration package file to copy a configuration set from one server to another or to back up a particular working configuration at a particular time.

    Required steps:

    1. Create a configuration manifest file.

    2. Export to a configuration package file.

    Import (install) a Configuration Package File on a Production or Test Server

    This process involves copying a configuration set from one server to another, by importing a configuration package file.

    For example, to move a configuration set to a different environment for testing or troubleshooting purposes, or copy a new configuration set (created and tested on one system) onto another system.

    Required steps:

    1. On the source server, create a configuration manifest file, and then export to configuration package file.

    2. On the destination server, import the configuration package file.

    Compare Differences between a Working Configuration and a Broken Configuration

    There are two approaches:

    • Comparing configuration manifest files provides a way to determine if there are processes or UDFs missing from a system. The information in the manifest files only allows comparing process and UDF names, not the specific way in which a process or UDF is configured.

    • Comparing configuration package files helps check how specific processes are configured. If the systems being compared are meant to be identical, this method is more appropriate to use.

    Required steps:

    1. On each system, create a configuration manifest file, or a configuration package file.

    2. Run a diff comparison on the two files.

    3. Edit the broken manifest file, export it, and import the resulting configuration package file into the system to add the missing entities.

    There are several tools available to compare files:

    • Meld (graphical), for Linux, with a port to MacOS

    • Standard Unix diff (Linux, MacOS); use -q for a quick check.

    • FileMerge (OSX with XCode installed) - /Developer/Applications/Utilities/FileMerge.app

    • WinMerge (graphical), for Windows - http://winmerge.org/
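As a quick illustration of the diff approach, the following sketch compares two hypothetical manifest files; the sample files are created here only for the demonstration.

```shell
# Create two small sample manifest files to compare.
printf 'processType=PCR\nudf=Volume\n' > manifest_working.properties
printf 'processType=PCR\nudf=Concentration\n' > manifest_broken.properties

# -q only reports whether the files differ; drop -q to see the differing lines.
diff -q manifest_working.properties manifest_broken.properties || echo "Manifests differ"

# Clean up the sample files.
rm manifest_working.properties manifest_broken.properties
```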

    Merge the Configuration Sets from Multiple Test Systems to a Production System

    Combine configuration sets from multiple systems, merge them into a single configuration package file, and then import the file into a new system.

    Required steps:

    1. On each source server, create and edit a configuration manifest file.

    2. Merge the entities from all files into a single manifest file.

    3. Export the resulting file to a configuration package file.

    4. On the destination server, import the merged configuration package file.

    Back up and Restore a Configuration Set

    Copy a configuration set to restore it on another server and use it for testing/troubleshooting purposes.

    Required steps:

    1. On the server containing the 'broken' source system, create a full manifest file, containing all of the LIMS system configuration.

    2. Export the manifest to a configuration package file. Save file to media/disk.

    3. On the target server, import the configuration package file created on the source system.

    Back up and Upgrade a Configuration Set

    To upgrade or add to a configuration set already installed on a server, two configuration package files are needed: one to back up the working configuration set, and one containing the new updated configuration that has been created on a test server.

    Required steps:

    1. On the server you want to upgrade, create a full manifest file and export this to a configuration package file. Save this file as a backup.

    2. On the test server, create a manifest file and edit it so that it only includes the entities you want to import.

    3. On the server you want to upgrade, import the configuration package file.

    Deploy a New Configuration Set from Test to Production Server

    Take a configuration that has been created and tested on one system/site (referred to as the source system in the following steps) and deploy it on another system/site (destination system).

    As a best practice, make sure that the configuration is backed up by creating a full manifest file and exporting it to a configuration package file (see step 2). The process is also described in Back up and Restore a Configuration Set.

    Required steps:

    1. On the source system, create a configuration package file containing the tested configuration to import.

    2. On the destination system, create a full manifest file and export this to a configuration package file. Save this file as a backup.

    3. On the destination system, import the configuration package file that was created in step 1.

    Troubleshooting Config Slicer

    If there is an API version mismatch, Config Slicer logs a message at the beginning of an import:

      API version mismatch - package: v2,r20 - server: v2,r21. This may result in some differences between configuration in package and on server.

    Generally, this message is not a cause for concern.

    However, if there are warnings about configuration differences during the import, changes that have been made to the API in between the package version and the server version may be responsible.

    In addition, if the package being imported is from v4.x, a log message may appear at the beginning of an import:

    Detected package is from a pre-5.0 Clarity system. The package will be upgraded to match new configuration requirements.

    In this case, the package is upgraded and the resulting configuration is output to a folder in the same location as the package being imported. This upgraded configuration package is the v5.x representation of the v4.x configuration and can be used directly to troubleshoot errors occurring on import. The updated package can also be imported directly.

    Config Slicer Version Updates

    The following changes have been included in the latest release of Config Slicer (v 3.0.51):

    General changes:

    • The entity summary is now shown after the individual differences have been written to the log file, as opposed to before these differences.

    • Support was dropped for the following process type attributes, which are no longer used:

      • SupportsExternalProgram
      • ShowInExplorer
      • ShowInButtonBar
      • OpenPostProcess
      • IconConstant

    • Support was dropped for the show-in-tables property on Custom Fields (UDFs) because it is no longer used.

    • There is now support for the new Clarity LIMS 5.0 distinction between Protocol Step names and Master Step (ProcessType) names.

    • It is now possible to slice in and out all the new Master Step settings added in Clarity 5.0, including:

      • Default Process Template
      • Instrument Types
      • Container Types
      • Reagent Kits
      • Reagent Types
      • Control Types
      • Sample Fields
      • Queue Fields
      • Ice Bucket Fields
      • Step Fields
      • Step Properties

    Changes in support of slicing from 4.x to 5.0:

    The following changes were specifically added to support slicing between Clarity LIMS 4.x and BaseSpace Clarity LIMS 5.0. All of these changes are saved into a new configuration package that is written to the same location as the package being sliced:

    • When importing/validating a slice from 4.x into 5.x, if the package contains any Master Steps (process types) or Protocols, it is upgraded to be compatible with the new configuration available in 5.x. This process is automatic, and the updated package is written out to the directory containing the package being imported, or the directory that the log file is being written to. If errors occur while importing, this updated package can be manipulated directly and imported to fix them.

      Note: If upgrading the package fails, import/validation will fail. In general, this reflects a mistake in the configuration package being imported.

    • Backwards compatibility for Protocol Step setup configuration.

    • Support for updates of parent entities after both the parent and child entities have been imported.

      • This change is required for updating the defaultProcessTemplate and step-fields step properties on Master Steps. Both require that the Master Step (ProcessType), ProcessTemplate, and Master Step Custom Fields (ProcessType UDFs) exist before they can be set on the Master Step.

    • Support for setting the qcProtocolStep flag on Master Steps, allowing the correct Master Step type to be displayed in the Lab Work Configuration UI. The setting is propagated up from the Protocol Step to the Master Step.

      After slicing, it is recommended to examine the configuration closely for any QC Protocol Steps that previously shared a Master Step with non-QC Protocol Steps. This setting may not transfer as expected and there is potential for misconfigured Protocol Steps. In particular, if the Master Step produces measurements, and even just one child step of a Master Step has qcProtocolStep=true, then the Master Step will get qcProtocolStep=true set on it. Thus, all other steps that use that Master Step have qcProtocolStep=true, whether or not it was set before.

    • Support for setting the default container on Steps and Master Steps in such a way that all behaviour is maintained from 4.2:

      • If every child step of a Master Step has the default container that was defined as a permitted container on the master (through the 'OutputContainerType' process-type-attribute), then the default is added as a permitted container on the Master Step.

      • If any child does not have the default container from the Master Step, then the default is removed from the Master and each child that had the container is updated so that the default is the first permitted container (and hence the default).

    • Extra containers and any step properties that are no longer valid on a step are migrated to new properties or removed.

    • Step-setup file configuration has been moved to the Master Step, and it is not possible to have a different set of step-setup files on a step than are specified on the Master. The Master Step owns the list of files. Both the search-result-file-index attribute on each file element and the message element for each file are defined by the Master and must be duplicated on every step. When moving a 4.x configuration with step-setup files to a 5.x configuration, the following events occur:

      • The set of all the step-setup files found on all the child steps in the slice is added to the Master Step and each child step.

      • If more than one step in the slice defines step-setup files for the same search-result-file-index, then all the messages for that search-result-file-index are concatenated by newlines.

      • The enabled attribute is set to true for the step-setup on all child steps.

      • The locked attribute is set to false for the step-setup on all child steps.

      • In the case of importAndOverwrite, the step-setup of any existing child steps for an overwritten Master Step is included.

    • Upon validation after import, the following differences are reconciled:

      • UDFs that differ by STYLE only
      • Missing defaultProcessTemplate
      • Missing attemptAutoPlacement
      • Missing autoAttachFiles
      • Missing qcWithPlacement

    Automation Worker Nodes

    All Clarity LIMS installations include the installation of a separate component known as an Automation Worker (AW) node (formerly known as an Automated Informatics (AI) node).

    When writing automation-based triggers, the code invoked by an automation runs on the AW node.

    While the AW node is a critical component, it typically does not require much attention. However, there are other options to consider, including how many AW nodes a Clarity LIMS system uses and where they are placed. This section discusses some of these options.

    Clarity LIMS Log Files

    Introduction

    Clarity LIMS creates various log files to help with the resolution of issues. During support request investigation, the Support team may ask for the following types of log files:

    • Automation Worker log file

    • Browser HTML and JavaScript logging

    Config Slicer Tool

    The Config Slicer tool is used to move small incremental configuration changes, contained in a configuration set, between Clarity LIMS systems. For example, it moves changes between a test system and a production system.

    This configuration tool provides granular export/import functionality that allows the management of configurations that support experimental workflows.

    Use this tool to back up, copy, deploy, and restore configuration sets. Making small incremental changes helps ensure that the modifications made to the production system are minimal.

    Review the following key concepts:

    • Configuration set—This item may be created by the Illumina Support team or by the customer. It comprises the items (known as entities) that are added to a Clarity LIMS system to allow for customization for a particular scientific experiment or workflow. The Illumina NGS Extensions Package is a good example.

    Troubleshooting Automation Worker

    Use the following steps to help troubleshoot the installed Automation Worker framework. A flowchart is provided as a reference.

    Flowchart Reference

    Container Name Uniqueness

    Turning Container Name Uniqueness ON

    This constraint is set at the database level. Container Uniqueness can be turned ON by using the migrator tool and running the optional migration step called "ContainerNameUniqueConstraints". This can be done by calling this command:

      java -jar migrator.jar ContainerNameUniqueConstraints

    Notes:

    • Edit the migrator.properties file to ensure that the mode is set to "migrate", not just "validate".


    Automation Worker log file

    Automation Worker creates history and log files, and stores them on laboratory computers in the logs folder of the Automation Worker installation directory.

    Windows

    If Automation Worker is installed on a Windows machine using the program default, find the logs folder at the following location:

    Linux

    If Automation Worker is installed on a Linux server, find the logs folder at the following location:

      /opt/gls/clarity/automation_worker/node/logs

    The following log files are available:

    • wrapper.log - This log file outputs information on the starting, running, and stopping of the Automated Informatics service.

    • automatedinformatics.log - This log file outputs messages from installed plug-ins, such as automation commands and ADC directory scans.

    To monitor the automatedinformatics.log:

    • Log on to the server using the glsai user ID and run the following command:

      /opt/gls/clarity/automation_worker/node/bin/automatedinformatics.wrapper.sh log

    If the Automation Worker is installed on a server other than the Clarity LIMS server, use the appropriate user credentials.

    Browser HTML and JavaScript Logging

    In the web browser, if the LIMS interface does not display items/elements correctly, provide the information and error messages to the Clarity LIMS Support team.

    Instructions for finding error messages within the browser console are described in the following sections.

    Google Chrome

    To Start the Chrome Console:

    1. Right-click on an element in the browser and select 'inspect element.'

      A sub window opens below the main window in Chrome, showing the source HTML.

    2. Select the Console tab, and reload the troublesome page - any JavaScript errors will be reported there. Include these errors in the Support Request ticket.

    NOTE: Between stages in a protocol step, you may see errors of the following type:

      GET http://qaclarity02.gls.lan:8080/clarity/api/work/310/epp?_dc=1375380986790 404 (Not Found)

    Such messages are expected. This is the EPP trigger checking that there is no EPP transition to fire on the page change. (This can be annoying for debug purposes, but feel free to include these in the Support Request ticket.)

    To Get the JavaScript version:

    1. Open up the Console as described in the previous section.

    2. Go to the Network tab.

    3. Select 'scripts' from the options listed at the bottom of the tab.

      A script named isis-all.js?v=XXXXX displays.

    4. Determine the version build number. (XXXXX represents the version build number.)

    Firefox

    To Start the Firefox Console:

    1. Right-click on an element in the browser and select 'inspect element.'

      A sub window opens below the main window in Firefox, showing the source HTML.

    2. Select the Console tab, and reload the troublesome page - any JavaScript errors will be reported there. Include these errors in the Support Request ticket.

    NOTE: Between stages in a protocol step, you may see errors of the following type:

      GET http://qaclarity02.gls.lan:8080/clarity/api/work/310/epp?_dc=1375380986790 404 (Not Found)

    Such messages are expected. This is the EPP trigger checking that there is no EPP transition to fire on the page change. (This can be annoying for debug purposes, but feel free to include these in the Support Request ticket.)

    To Get the JavaScript version:

    1. Open up the Console as described in the previous section.

    2. In the Filter options Search box, type 'isis'.

      A script named isis-all.js?v=XXXXX appears.

    3. Hover over this script with your mouse to find the V (version) build number.

    Using Log Files for Troubleshooting Purposes

    If you are experiencing problems and need to submit a support request, use the following guidelines to determine which log files to send to the Clarity LIMS Support team:

    • basespace-lims-*.log: Include if experiencing slowness in the application. (Default path: /opt/gls/clarity/tomcat/current/logs/)

    • automatedinformatics.log: Include if you are experiencing problems with an integration or if a process using an EPP string does not work as expected. (Default path: /opt/gls/clarity/automation_worker/)

    • wrapper.log: Include if the Automation Worker is unable to start (rarely needed).

    • search-indexer.log: Include if there is an issue with the search feature. (Default path: /opt/gls/clarity/search-indexer/logs/)

    • claritylims.log: Include if there is an issue with the search feature. (Default path: /var/log/elasticsearch/)

    • Browser Console and LIMS JavaScript version: Include for any web interface display issues. A simple refresh of the browser page may resolve the issue. However, the Support team would prefer receiving the console log and JavaScript version to investigate and make product improvements.

    • If the migrator has carried out its work successfully, the database should now have a new index called 'unique_cnt_name'. (In psql, run \di to see indexes.)

    • You do not need to restart Tomcat services for this change to take effect.

    Turning Container Name Uniqueness OFF

    Container Uniqueness can be turned OFF by running this SQL command:

      drop index unique_cnt_name;

    This statement will fail if there are already any non-unique container names in the database. If it fails, you can find all non-unique names with this statement:

      SELECT name, COUNT(name) AS namecount FROM container GROUP BY name HAVING COUNT(name) > 1;

    For each non-unique name found, you can iterate through all instances of it and rename them so that they will be unique:

      SELECT name, luid FROM container WHERE name = 'non-unique_name';
      UPDATE container SET name='non-unique_name-1' WHERE luid=luid_of_container;
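When there are many duplicates, the per-name loop above can be collapsed into a single statement. The following is a PostgreSQL sketch only; run it against a test copy of the database first, as it blindly suffixes -2, -3, and so on to every duplicate.

```sql
-- Keep the first container in each duplicate set; rename the rest by
-- appending a running number (plateA stays, others become plateA-2, plateA-3, ...).
UPDATE container c
SET name = c.name || '-' || d.rn
FROM (
    SELECT luid,
           ROW_NUMBER() OVER (PARTITION BY name ORDER BY luid) AS rn
    FROM container
) d
WHERE c.luid = d.luid
  AND d.rn > 1;
```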

    Resolving Non-Unique Container Names

    If the above query returns too many containers to edit by hand, use the following approach.

    1. Are any of the containers with non-unique names in a Depleted or Discarded state? If so, you can probably delete them once the customer gives the all clear:

      select name, luid, stateid from container where name in (SELECT name FROM container GROUP BY name HAVING COUNT(name) > 1) order by name;

    What do the values for the container stateid mean? You will not find them in the database; they are hard-coded as follows:

      1, "Empty"
      2, "Populated"
      3, "Depleted"
      4, "Discarded"
      5, "Reagent-Only"
      6, "New"
      7, "Hybridized"
      8, "Scanned"
      9, "Discarded"

    Armed with this knowledge, we can refine the query to highlight containers that may be deleted with little consequence:

      SELECT name, luid, stateid FROM container
      WHERE name IN (SELECT name FROM container GROUP BY name HAVING COUNT(name) > 1)
      AND stateid IN (1,4,9) ORDER BY name;

    Notes:

    1. Containers can only be deleted if they are empty

    2. SQL 'IN' statements normally have a limit of 255 records, so if the the sub-select - the one in parenthesis - returns more than 255 records, your mileage may vary

    /opt/gls/clarity/automation_worker/node/logs
    /opt/gls/clarity/automation_worker/node/logs
    /opt/gls/clarity/automation_worker/node/bin/automatedinformatics.wrapper.sh log 
    GET http://qaclarity02.gls.lan:8080/clarity/api/work/310/epp?_dc=1375380986790 404 (Not Found)
    GET http://qaclarity02.gls.lan:8080/clarity/api/work/310/epp?_dc=1375380986790 404 (Not Found) 
    java -jar migrator.jar ContainerNameUniqueConstraints
    drop index  unique_cnt_name;
    SELECT name,  COUNT(name) AS namecount FROM container GROUP BY name HAVING COUNT(name) >  1;
    SELECT name, luid FROM container WHERE where name = 'non-unique_name';
    UPDATE container SET name='non-unique_name-1' WHERE luid=luid_of_container;
    select name, luid, stateid from container where name in (SELECT name FROM container GROUP BY name HAVING COUNT(name) >  1) order by name;
    1, "Empty"
    2, "Populated"
    3, "Depleted"
    4, "Discarded"
    5, "Reagent-Only"
    6, "New"
    7, "Hybridized"
    8, "Scanned"
    9, "Discarded"
    SELECT name, luid, stateid FROM container
    WHERE name IN (SELECT name FROM container GROUP BY name HAVING COUNT(name) >  1)
    AND stateid IN (1,4,9) ORDER BY name;
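    The find-duplicates-and-rename procedure above can be sketched end to end. This is a minimal sketch using an in-memory SQLite table; the table layout, LUID values, and suffix scheme are illustrative only (a production Clarity LIMS database is PostgreSQL and should only be modified with guidance from the Support team):

    ```python
    import sqlite3

    # In-memory stand-in for the container table used in the procedure above.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE container (luid TEXT PRIMARY KEY, name TEXT, stateid INTEGER)")
    conn.executemany(
        "INSERT INTO container VALUES (?, ?, ?)",
        [("27-1", "PlateA", 2), ("27-2", "PlateA", 2), ("27-3", "PlateB", 1)],
    )

    # Find non-unique names (SELECT ... GROUP BY name HAVING COUNT(name) > 1).
    dupes = [row[0] for row in conn.execute(
        "SELECT name FROM container GROUP BY name HAVING COUNT(name) > 1")]

    # Keep the first instance of each name and rename the rest with a numeric
    # suffix, mirroring the UPDATE ... SET name='non-unique_name-1' step.
    for name in dupes:
        luids = [r[0] for r in conn.execute(
            "SELECT luid FROM container WHERE name = ? ORDER BY luid", (name,))]
        for i, luid in enumerate(luids[1:], start=1):
            conn.execute("UPDATE container SET name = ? WHERE luid = ?",
                         (f"{name}-{i}", luid))

    names = sorted(r[0] for r in conn.execute("SELECT name FROM container"))
    print(names)  # ['PlateA', 'PlateA-1', 'PlateB']
    ```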
    Automation Worker Nodes and Remote Computing

    The AW node is installed adjacent to the Clarity LIMS application. Its original purpose was to enable remote computing.

    To illustrate these features, assume that a requirement calls for Clarity LIMS to produce a file in a specific location, where the file is then picked up and processed, triggering an action.

    A good example is label printing via the BarTender application. The BarTender application picks up the new file, associates it with a printer and a template, and causes a label to be printed.

    The infrastructure for this file storage and processing likely occurs on a separate server from the one that contains the Clarity LIMS application and the AW node.

    The Clarity LIMS application invokes the command to create the file on the AW node. After the file is in the file store, it is processed by the file processing application.

    How does the AW node get the file to the file store when the file store is on a different server, possibly on a different network? The solution is to install a second AW node on the external server. This node is local to the external server but remote as far as Clarity LIMS is concerned, and it is what enables AW nodes to support remote computing.

    1. Clarity LIMS invokes the production of the file via the remote AW node.

    2. The remote AW node copies the file to the local file store.

    3. The file processing application processes the file.
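    The file-drop pattern in steps 1–3 can be sketched in a few lines. In this sketch the directory layout and the "process" action (recording the file name) are invented for illustration; in a real deployment the processing application would be something like BarTender watching the file store:

    ```python
    import tempfile
    from pathlib import Path

    def process_new_files(file_store: Path, seen: set) -> list:
        """Process any files dropped into the file store that have not been seen yet.

        A real file processing application would poll in a loop; here we do a
        single pass so the sketch stays self-contained.
        """
        processed = []
        for path in sorted(file_store.iterdir()):
            if path.name not in seen:
                seen.add(path.name)
                processed.append(path.name)  # stand-in for "print a label"
        return processed

    # Simulate the remote AW node copying a file into the local file store (step 2).
    file_store = Path(tempfile.mkdtemp())
    (file_store / "label_0001.txt").write_text("sample-A01")

    # The file processing application picks it up (step 3).
    seen = set()
    result = process_new_files(file_store, seen)
    print(result)  # ['label_0001.txt']
    ```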

    hashtag
    Automation Worker Properties

    An AW node is a Java-based application and can run on most PCs/servers.

    • The AW node and the Clarity LIMS application must be able to communicate through networking firewalls.

    • When the Clarity LIMS application has the choice of sending the task to multiple AW nodes, the channel name property is used to determine which one receives the job. For example, the AW node installed on the Clarity LIMS server has the default channel name of limsserver. This is why you must specify the limsserver value when defining an automation command.

    • When defining the automation, the following two items are defined:

      • Which command should be run by the AW node.

      • Which AW node should receive the job via the channel name property.

    • Typically, for the AW node that runs on the Clarity LIMS server, the convention is to place scripts in the /opt/gls/clarity/customextensions folder. The log file is stored in /opt/gls/clarity/automation_worker/node/logs/.

    • For the remote AW node, store the scripts in any folder, and choose where its log file gets stored (it is running on your hardware).

    • For the external AW node to run Clarity LIMS toolkits, such as the Lab Logic or Lab Instrument toolkit, make copies of the JAR files that contain these toolkits. Place them on the external server, so the AW node can access them.

    • There is no real limit to how many AW nodes you can have. Place them wherever they are needed.
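    The channel-name routing described above can be modeled simply: each automation names a channel, and the job goes to whichever AW node registered that channel. The node descriptions and the labpc-01 channel below are invented for illustration (limsserver is the default channel named in the text):

    ```python
    # Map channel names to the AW nodes that listen on them.
    nodes = {
        "limsserver": "AW node on the Clarity LIMS server",  # default channel
        "labpc-01": "remote AW node on a lab PC",            # hypothetical channel
    }

    def dispatch(command: str, channel: str) -> str:
        """Send a command to the AW node registered on the given channel."""
        if channel not in nodes:
            raise ValueError(f"no AW node is listening on channel '{channel}'")
        return f"{nodes[channel]} runs: {command}"

    print(dispatch("bash /opt/gls/clarity/customextensions/myscript.sh", "limsserver"))
    ```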

    hashtag
    Automation Workers in the Cloud

    Consider the AW node that is installed on the Clarity LIMS server for a cloud-based implementation.

    Although cloud-based Clarity LIMS servers contain an AW node, the best practice is not to run your code on it.

    Why not? It is a question of security policies for both Illumina and our cloud-hosting provider. If we provide customers with command-line access to the AW node, we are allowing them command-line access to the Clarity LIMS server and, as a consequence, the Clarity LIMS application itself. If using Clarity LIMS in a clinical environment, this makes it more difficult to pass security and access audits.

    If the Clarity LIMS instance is cloud-hosted, and you need to run custom code via an AW node, there is a solution.

    You could install an AW node on your local architecture and have it interact seamlessly with Clarity LIMS, as illustrated in Figure 1. You can control access to the remote AW node, the infrastructure of the Clarity LIMS server is safely hidden behind firewalls, and security policies remain intact.

    However, for some customers, part of the attraction of a cloud-based system is not having to maintain mission-critical hardware. To address this, we can offer an external AW node that does not live on local hardware but is also in the cloud.

    Thus, we can provide an external AW node that lives on a separate machine, known as the Automation Worker host. This Software-as-a-Service (SaaS) model gives access to only those parts of the system that require it. You can access the Automation Worker host, and its AW node can interact with Clarity LIMS. However, you cannot access the Clarity LIMS server.

    Because the AW node is running on a separate machine from the Clarity LIMS server, it needs its own copy of the toolkit JAR files. As an additional bonus for some users, the Automation Worker host hardware supports Python 3 for scripting.

    For customers who are cloud-based and need a true local AW node, this is not a problem. They can have an AW node in the Automation Worker host and as many local AW nodes as needed, placed wherever they are required.

    hashtag
    Automation Worker Node on Windows Server

    In Clarity LIMS v5.4 and later, you can install the Automation Worker Node onto the Windows server. Before you begin the installation, make sure that you have met the following requirements:

    • The clarity-aiinstaller-x-deployment-bundle.zip file must be retrieved from the server where Clarity LIMS is installed. You can find this file at /opt/gls/clarity/config/.templates/automation_worker/.

    • For Windows 10 users, the command prompt must be specified in the Automation command line (eg, cmd.exe /c echo 'Hello World').

    • If VISTA-SETUP.bat does not display in the installation window after Run as Administrator is selected, start the installation window as follows.

      • Launch the command prompt as an Administrator.

      • Change to the directory where VISTA-SETUP.bat is located with cd C:\<DIRECTORY>.

      • Execute the java -jar .\GLSAutomatedInformatics-Installer.jar command.

    Install the Automation Worker node as follows.

    1. Copy the SecretUtil deployment bundle ZIP file to the remote Automation Worker node.

    2. Extract the contents of the ZIP file to a folder named secretutil. You can add this folder to C:\opt\gls\clarity\tools or another location you choose.

    3. Edit the vault.properties file in the conf folder to update application.mode to file.

    4. Make sure that the following System Environment Variables are set:

      • CLARITYSECRET_HOME (eg, C:\opt\gls\clarity\tools\secretutil)

      • CLARITYSECRET_ENCRYPTION_KEY (minimum 24 characters)

    5. Using secretutil.jar, set the required secrets. For a basic installation of Automation Worker, you must set the passwords for apiuser and glsftp using the following commands:

      # For glsftp

      java -jar C:\opt\gls\clarity\tools\secretutil\secretutil.jar -u=<secret> app.ftp.password

      # For apiuser

      java -jar C:\opt\gls\clarity\tools\secretutil\secretutil.jar -u=<secret> -n=integration apiusers\<username of the API user, e.g. apiuser>

    6. After setting the secrets, attempt to retrieve them with the following command:

      java -jar C:\opt\gls\clarity\tools\secretutil\secretutil.jar app.ftp.password

    7. Restart the Automation Worker service.

  • Configuration manifest file—This text file determines the configuration set to be exported from a system. The manifest file does not contain the actual configuration data. It only drives the extraction of configuration from a system.

  • Configuration package file—This XML file contains the top-level entities selected by the configuration manifest file, plus any related child entities. For example, for process types, it includes process type UDFs, process templates (protocols), and output UDFs.

  • hashtag
    Config Slicer tool Export/Import Process Overview

    The Config Slicer tool uses an export/import process to transfer configuration sets from a source to a destination server. This process breaks down into the following tasks:

    1. On the source Clarity LIMS server, use the Config Slicer tool to create a configuration manifest file.

    2. Edit the manifest file so that only the required custom configuration set is preserved.

    3. Use the Config Slicer tool to export the edited manifest file to a configuration package file.

    4. On the destination Clarity LIMS server, use the Config Slicer tool to import the configuration package file into the system.

    A configuration package file can be imported into multiple systems. Use this feature to create and import multiple custom configuration sets, such as the Illumina TruSeq integration. This functionality also provides the Illumina Support team with a scalable way to keep up with constantly changing protocols.

    hashtag
    Supported Entities

    A configuration set comprises the entities added to a Clarity LIMS system that allow for customization of the system for a particular scientific experiment or workflow. The following entities are currently supported by the Config Slicer tool:

    • Sample UDFs and UDTs

    • Container UDFs and UDTs

    • Project UDFs and UDTs

    • Artifact groups (experiments)

    • Reagent types

    • Control types

    • Reagent kits

    • Process types (any configured processes – for example, Pool Samples and Add Multiple Reagents)

      • Process UDFs and UDTs

      • Output UDFs

    • Process templates - UDFs, UDTs, and parameter strings only (other entities such as instruments and researchers are not supported)

    • Protocols

    • Workflows

    When generating a custom manifest by workflow, protocol, or process type, the following entities are not exported: Sample UDTs and UDFs, Container UDTs and UDFs, and Project UDTs and UDFs. Account (Lab) and Client (Researcher) UDFs are never exported by Config Slicer. These are known issues.

    Config Slicer does not export/import non-step automations, nor does it preserve the order of protocols.

    hashtag
    Tools and requirements

    When working with Config Slicer on the Clarity LIMS application server, there are no additional prerequisites. The latest version of the config-slicer.jar file is installed as part of Clarity LIMS. In a default installation, the file is found in /opt/gls/clarity/tools/config-slicer.

    To work with Config Slicer on a machine other than the Clarity LIMS server, do the following:

    1. Make sure that Java is installed.

    2. Copy the /opt/gls/clarity/tools/config-slicer directory, and its contents, from the Clarity LIMS server to the machine. The config-slicer directory and the config-slicer package should contain the following:

      • config-slicer-<version>.jar

      • libs subdirectory (which includes all the libraries referenced by config-slicer-<version>.jar, including groovy-all-2.4.8.jar)

      • upgrade-config-slice.jar

    1. Verify Connection Between LIMS Server and Automation Worker Node

    1.1 Checking the connection

    The first step is to check the connection between the Clarity LIMS server and the Automation Worker node.

    Use the -n option of the ai-monitor.jar tool to see if the Clarity LIMS server is currently able to communicate with the AI node.

    To check the status with ai-monitor.jar:

    1. As the glsjboss user, open an SSH session to the Clarity LIMS server.

    2. Run the following command:

      ```
      java -jar /opt/gls/clarity/tools/ai-monitor/ai-monitor.jar -u <user> -p <password> -i <base URI> -n
      ```

      • If the Clarity LIMS server cannot connect to any of the AI nodes, the response is as follows:

        ```
        Listing nodes

        No nodes found

        Aborting
        ```

      • In this scenario, proceed to Step 2. Verify Windows Service or Linux Daemon.

      • If the Clarity LIMS server can connect to the Automation Worker nodes, the response resembles the following:

        ```
        Listing nodes
        ==========
        Node: [email protected]
        Address: 192.168.10.88
        Channel: limsserver
        First update: Thu Sep 19 19:15:13 GMT 2013
        Latest update: Thu Sep 19 19:22:09 GMT 2013
        Release: 20190919-0708
        Version: 8.6.1-SNAPSHOT
        JVM: Sun Microsystems Inc. 1.6.0_45
        In progress: 0 requests
        ```

    hashtag
    2. Verify Windows Service or Linux Daemon

    Determine if the Windows service or Linux daemon for the Automation Worker is running.

    2.1 Starting and stopping the Windows service / Linux daemon

    To start, stop, or restart the Windows service:

    1. From the Start menu, select Run.

    2. In the Open text field, type 'services.msc' and select OK.

    3. In the Services dialog, locate the Automation Worker service.

    4. Right-click the service and select Start, Stop, or Restart. If the service is stopped, start it.

      • If the service is running, stop and start it again.

      • Wait a minimum of three minutes, and then check if the AI node is communicating with the Clarity LIMS server by running the ai-monitor.jar tool, as described in Step 1.

    To start or stop the Automation Worker Linux daemon:

    • To verify the current status:

      ```
      service automatedinformatics.wrapper.sh status
      ```

    • To restart a running daemon:

      ```
      service automatedinformatics.wrapper.sh restart
      ```

    • To stop a running daemon:

      ```
      service automatedinformatics.wrapper.sh stop
      ```

    • To start a stopped daemon:

      ```
      service automatedinformatics.wrapper.sh start
      ```

    Once started or restarted, wait a minimum of three minutes, and then check if the AI node is communicating with the Clarity LIMS server by running the ai-monitor.jar tool, as described in Step 1.

    If the daemon is not recognized, list the contents of the /etc/init.d directory to determine the exact name of the Automation Worker daemon.

    The name typically contains 'automation_worker', but may vary, particularly if there is more than one daemon on the same Linux server, or if the Automation Worker is installed on a server other than the Clarity LIMS application server.

    3. Automation Worker Log Files

    Automation Worker creates history and log files and stores them on laboratory computers in the logs folder of the Automation Worker installation directory.

    3.1 Reviewing Automation Worker log files

    After performing the steps described above, reviewing these log files may help to determine the cause of the issue.

    For details on the Automation Worker log files, and instructions on how to view them, refer to Clarity LIMS Log Files.

    3.2 Turning on debug logging

    After reviewing the log files, if the cause of the issue is not evident, the next stage is to turn on debug logging. This outputs DEBUG messages to the log files.

    Contact the Clarity LIMS Support team for instructions on turning on DEBUG mode.

    After turning on debug logging, restart the Windows service or Linux daemon, and then review the log files to determine if the DEBUG messages help to find a resolution.
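    The two ai-monitor responses shown above are easy to check programmatically. A minimal sketch, assuming only the output format shown (the "No nodes found" and "Channel:" lines are taken from the sample listings; the trimmed sample output is illustrative):

    ```python
    def parse_ai_monitor(output: str) -> list:
        """Return the channels of connected AW nodes from ai-monitor -n output.

        An empty list means the LIMS server could not reach any node
        (the "No nodes found" response above).
        """
        if "No nodes found" in output:
            return []
        return [line.split(":", 1)[1].strip()
                for line in output.splitlines()
                if line.startswith("Channel:")]

    ok_output = """Listing nodes
    ==========
    Channel: limsserver
    In progress: 0 requests"""

    print(parse_ai_monitor(ok_output))         # ['limsserver']
    print(parse_ai_monitor("No nodes found"))  # []
    ```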

    Creating Encrypted Passwords

    Saving passwords in encrypted format is recommended. Use the omxprops property tool to do this.

    hashtag
    To encrypt a password:

    Use the following command:

    # java -jar /opt/gls/clarity/tools/propertytool/omxprops-ConfigTool.jar encryptPassword <password>

    This command returns a text string resembling the following example:

    RqHL5XpY0NStVRjd+BngRQ==  

    hashtag
    To set an encrypted password:

    Set the password by enclosing the text string in the ENC() wrapper.

    Consider the following examples:

    • When setting an encrypted password in a configuration file:

      ```
      rabbitmq.password=ENC(RqHL5XpY0NStVRjd+BngRQ==)
      ```

    • When using the property tool to set an encrypted password:

      ```
      # java -jar /opt/gls/clarity/tools/propertytool/omxprops-ConfigTool.jar set -y ftp.password 'ENC(RqHL5XpY0NStVRjd+BngRQ==)'
      ```

    • Using ENC() is not needed when setting the password in Automation Worker:

      ```
      informaticsClientTarget.sftpPassword=RqHL5XpY0NStVRjd+BngRQ==
      ```
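    The ENC() wrapper above is a labeling convention, not the encryption itself (omxprops performs the actual encryption). A small helper that distinguishes wrapped from plain values, assuming only the ENC(...) convention shown:

    ```python
    def is_encrypted(value: str) -> bool:
        """True if a property value uses the ENC(...) wrapper convention."""
        return value.startswith("ENC(") and value.endswith(")")

    def unwrap(value: str) -> str:
        """Return the ciphertext inside ENC(...), or the value unchanged."""
        return value[4:-1] if is_encrypted(value) else value

    print(unwrap("ENC(RqHL5XpY0NStVRjd+BngRQ==)"))   # RqHL5XpY0NStVRjd+BngRQ==
    print(is_encrypted("RqHL5XpY0NStVRjd+BngRQ=="))  # False
    ```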

    Managing Configurations with Config Slicer

    hashtag
    Command-line Options and Usage

    Refer to Import/Export Mode and Guidelines and Semantics for Creating Manifest Files.

    -a,--apiuri <apiuri>: The BaseSpace Clarity LIMS REST API base URI (ends in "/api") (either this or --server must be provided)

    -f,--force <force>*: Force update without prompt when running in importAndOverwrite mode (Optional)

    -k,--package <package file>: File to be imported from or exported to (Required if operation is import, importAndOverwrite*, export, or validate). If the file is not local, a full path is required.

    -m,--manifest <manifest>: Manifest file (Required if operation is export or example). If the file is not local, a full path is required.

    -o,--operation <operation>: The operation mode for the Config Slicer tool. Options are import, export, validate, example, importAndOverwrite, and custom (Required)

    -p,--password <password>: The BaseSpace Clarity LIMS REST API password; if encrypted, use "ENC(<encrypted-password>)" (Required)

    -pr,--protocols <protocols>: The protocols to include in the custom manifest (Optional)

    -pt,--processTypes <processTypes>: The process types to include in the custom manifest (Optional)

    -s,--server <server>: The BaseSpace Clarity LIMS REST API server (either this or --apiuri must be provided)

    -S,--Strict: Strict mode for import (fail fast; default mode is best-effort) (Optional)

    -u,--username <username>: The BaseSpace Clarity LIMS REST API username (Required)

    -w,--workflows <workflows>: The workflows to include in the custom manifest (Optional)

    * The importAndOverwrite option lets Config Slicer update existing configuration, rather than create new configuration. This option is only available in LIMS 3.4.

    Usage

    ```
    -o <import|importAndOverwrite|export|validate|example|custom> [-m <manifest file>] [-k <package file>] 
    [-w <workflows>] [-pr <protocols>] [-pt <process types>] [ -s <server> | -a <API root URI> ] 
    -u <username> -p <password>
    ```

    hashtag
    EXPORT: Creating a Configuration Package File

    Prerequisites

    • A configuration set has been created and validated on the source server.

    • Access to the Config Slicer tool and the libs subdirectory is available.

    Step 1: Create Configuration Manifest File

    Create a simple manifest file or a custom manifest file.

    Simple manifest file:

    1. On the source server, copy and paste the following code to the command line. Edit the variables (version, server IP address, username, password, and manifest file name) to match those in your system.

      ```
      java -jar config-slicer-<version>.jar -o example -s <server> -u <username> -p <password> 
      -m <manifest-filename>
      ```

      A command with the variables filled in might look like this:

      ```
      java -jar config-slicer-3.0.13.jar -o example -s glsserver -u admin -p glspass 
      -m goldenconfig.txt
      ```

    2. This step produces a manifest file containing information about the entire system configuration. As a best practice, copy this file and rename the copy in a way that reflects the configuration (we'll use newconfiguration.txt for this example). Use the copied file for the next steps.

    A manifest file is used as an intermediary step to produce an XML configuration package file.

    The manifest file is only relevant to the data that exists in the system at the time it is created. Discard it after creating the configuration package file, or save it for historical auditing purposes; it can provide a record of a known working configuration set on a particular system.

    Custom manifest file:

    To create a manifest file for only specific workflows, protocols, or process types, follow the steps outlined previously, using -o custom instead of -o example.

    • When using this operation, provide additional parameters (-w, -pr, and/or -pt) to specify the exact entities for which to create a manifest. For example:

      ```
      java -jar config-slicer-<version>.jar -o custom -s <server> -u <username> -p <password> 
      -m <manifest-filename> -w <workflow name> -pr <protocol name> -pt <process-type name>
      ```

    • Specifying every option is not required, and it is possible to specify more than one of each kind. For example, create a manifest file for a workflow with the following command:

      ```
      java -jar config-slicer-3.0.13.jar -o custom -s glsserver -u admin -p glspass 
      -m workflowManifest.txt -w 'Nextera DNA for HiSeq'
      ```

      Or for two protocols like so:

      ```
      java -jar config-slicer-3.0.13.jar -o custom -s glsserver -u admin -p glspass 
      -m protocolsManifest.txt -pr 'DNA Initial QC' -pr 'Nextera DNA Library Prep'
      ```

      Or for two process types and a protocol with this command:

      ```
      java -jar config-slicer-3.0.13.jar -o custom -s glsserver -u admin -p glspass 
      -m processTypesManifest.txt -pt 'Blood Extraction' -pt 'Qubit QC (DNA) 4.0' -pr 'Illumina SBS (HiSeq GAIIx)'
      ```

    Step 2: Edit Manifest File

    The next step is to edit the manifest file, removing unnecessary information and preserving only the custom configuration to import into the destination system.

    Example 1: In this example, everything is deleted from the manifest file except for the two new process types to export.

    ```
    # Selection for ProcessTypes
    unit.ProcessTypes=\
    Illumina Sequencing,\
    BCL Conversion & Demultiplexing
    ```

    Example 2: In this example, the manifest file contains definitions for some new reagent types:

    ```
    # Selection for ReagentTypes
    unit.ReagentTypes=\
    Index 1 (ATCACG),\
    Index 2 (CGATGT),\
    Index 3 (TTAGGC),\
    Index 4 (TGACCA),\
    Index 5 (ACAGTG),\
    Index 6 (GCCAAT),\
    Index 7 (CAGATC),\
    Index 8 (ACTTGA),\
    Index 9 (GATCAG),\
    Index 10 (TAGCTT),\
    Index 11 (GGCTAC),\
    Index 12 (CTTGTA)
    ```

    Step 3: Export to XML Configuration Package File

    • Copy and paste the following code onto the command line. Edit the variables (version, server IP address, username, password, and manifest and package file names) as required.

      ```
      java -jar config-slicer-<version>.jar -o export -s <server> -u <username> -p <password> 
      -m <manifest-filename>.txt -k <configuration-package-file>.xml
      ```

    • An edited command might look like the following example:

      ```
      java -jar config-slicer-3.0.13.jar -o export -s glsserver -u admin -p glspass 
      -m newconfiguration.txt -k newconfiguration.xml
      ```

    This step generates a data file in an XML format (newconfiguration.xml in our example) that is compliant with the Rapid Scripting API.

    hashtag
    IMPORT: Installing a Configuration Package File

    Prerequisites

    • A configuration package file has been exported. This example uses a file named newconfiguration.xml.

    • Access to the Config Slicer tool on the destination server has been granted.

    • There are no in-progress steps for any of the protocols that are going to be sliced in; otherwise, the import of the protocol fails.

    Step 1: Import Configuration Package File

    • On the destination server, copy and paste the following code to the command line. Edit the variables (version, package file name, server IP address, username, and password) as required.

      ```
      java -jar config-slicer-<version>.jar -k <configuration-package-file>.xml -o import -s <server> 
      -u <username> -p <password>
      
      - or -
      
      java -jar config-slicer-<version>.jar -o importAndOverwrite -k <configuration-package-file>.xml 
      -s <server> -u <username> -p <password>
      ```

    • A command with the variables filled in might look like this:

      ```
      java -jar config-slicer-3.0.13.jar -o import -k newconfiguration.xml -s glsserver 
      -u admin -p glspass
      
      - or -
      
      java -jar config-slicer-3.0.13.jar -o importAndOverwrite -k newconfiguration.xml 
      -s glsserver -u admin -p glspass
      ```

    About duplicate entities

    If any of the configuration entities about to be imported already exist in the destination system, Config Slicer either logs a warning or attempts to update them, depending on the mode being run (see Import/Export Mode).

    If Clarity LIMS has maintained an internal record of deleted items, the previous information may also apply to deleted entities. This situation may occur if those entities have created outputs that currently still exist in the system.

    • When running in import mode, entities that exist and are different from the version in the package have a warning and full diff logged.

    • When running in importAndOverwrite mode, Config Slicer attempts to update entities that exist and are different from the version in the package.

      • In this scenario, a backup configuration package containing copies of the updated entities, as they were before the change, is saved to the directory where the configuration package is located. If that directory is not writable, the backup package is saved to the same directory as the log file.

    • If the version in the package is identical to the version on the server, no errors are logged and Config Slicer considers that entity successfully imported.

    • For process types, only configured processes, vanilla Transfer processes, Pool Samples (since 7.5), and Add Multiple Reagents (since 7.6) processes are supported. All process type details are exportable/importable.

    • Similarly, if a process type fails import because it already exists, any UDFs and process templates for that process type will still be processed.

    To avoid changing existing configuration (which could possibly break historical data), another option is to manually rename the old entities: add an extension or a prefix, and continue with importing the new configuration package.

    Step 2: Validate the Import

    Use the following methods to validate whether an import has completed successfully.

    Check the Import Log:

    For each specific type of configuration being imported (e.g., container types, process types, workflows), Config Slicer logs a set of messages resembling the following example:

    ```
    2015-01-16 18:45:13,046  INFO - Found 4 Container Types in configuration package, importing now.
    ```

    • Before it begins to process a specific entity type, Config Slicer logs how many entities were found. Any errors or warnings about this set of entities always appear between "Found 4 $Entities" and "Summary of $Entities".

    • Every entity found in the configuration package always appears in the summary, in one category or another. If a scenario occurs where this is not true, or where the initial count of entities does not match the number in the summary, something has gone wrong and a bug report should be filed.

    Validate with Config Slicer:

    Running Config Slicer with the validate operation checks every entity in the package to see if it exists on the destination server. It reports results in a format similar to the log format shown previously.

    Run the validate operation before or after importing:

    • Before importing: checks whether there could be any problems when importing a configuration from a package. This is the primary use of the operation; during validation, "configuration exists in package but not on server" is not considered an error case.

    • After importing: makes sure that the results are what was expected.

    Check Configuration on the Destination Server:

    The ultimate test of whether configuration has imported successfully is to check the configuration on the destination server itself. Make sure it looks and behaves as expected.

    Configuration can be checked either via the Configuration screen in the Clarity LIMS user interface, or via the configuration endpoints in the API.

    hashtag
    Guidelines and Semantics for Creating Manifest Files

    Top-Level Entities

    • To be included in a configuration package file, the top-level entities of the custom configuration set must be explicitly enumerated.

    • Some top-level entities are discrete, self-contained units that do not include other units (for example, container types, reagent types, artifact groups, and non-artifact/non-process-type UDFs).

    • Some top-level entities (process types, for example) automatically include other units (refer to Supported Entities for more information).

    Non-Top-Level Entities

    • Some entities are only included as part of other entities. For example, process templates, process type UDFs, and artifact UDFs are only included as part of a top-level process type. (The latter is a special case, given that the same artifact UDF can be used by multiple process types.)

    Required Entities

    • Some entities may be required by other entities. In these cases, make sure that these entities are exported/imported in the correct order. For example, because process types may require the existence of a container type, create the container type first.

    • Required entities are not automatically included. If they do not exist in the destination system, explicitly include them in the manifest file. For example, suppose that a process type declares a particular container type as a default output plate. If that container type does not exist in the destination system, include that container type in the manifest file.

    hashtag
    Import/Export Mode

    Import modes affect the transactional behavior of the tool: it can either make incremental changes when errors occur, or operate on an all-or-none basis. The validate operation can be used beforehand to determine whether errors would be encountered.

    Best-Effort Mode

    • This is the default import mode; there is no need to enter an option for this mode.

    • This mode attempts to import as many units as possible. Any failures are logged, but the import operation is not interrupted.

      • For example, a failing container type import will not prevent other container types from being processed.

    Strict Mode

    • If this mode is used, the import operation is aborted if it encounters a failure.

      • For example, if there is an API version mismatch, the operation aborts and no further imports are executed. Note that any changes already performed are not reverted.

    • Use the -S (--Strict) option to enable this import mode.

    Validate Mode

    Use the validate operation (instead of import) to enable this mode.

    This mode produces a report listing the following items:

    • Entities that would be successfully imported because they do not exist on the target server.

    • Entities that already exist on the target server and are identical.

    • Entities that already exist on the target server but are different.

    Validate mode can only detect a limited set of errors. For example, it can check if a particular piece of configuration already exists and, if so, whether it is identical to the one included in the configuration package.

    This information can help determine whether the importAndOverwrite option is needed instead.

    Example Mode

    • Use this mode to generate a manifest file if the configuration you want to export is not tied to a specific set of workflows, protocols, or process types.

    • Use the example operation (instead of import) to enable this mode.

    Enforcing Unique Sample Names Within a Project

    hashtag
    Enforcing Unique Sample Names Within a Project

    By default, Clarity LIMS allows duplicate sample names within the same project. If you would like to enforce sample name uniqueness within a project, you can do so.

    Two scripts have been developed to support this requirement:
    2015-01-16 18:45:13,614  INFO - Summary of Container Types:
        Newly imported:
            Container Type 'BioAnalyzer DNA 1000 Chip'
            Container Type 'BioAnalyzer DNA High Sensitivity Chip'
            Container Type 'BioAnalyzer RNA Nano Chip'
            Container Type 'BioAnalyzer RNA Pico Chip'
        Imported and updated (diffs in log): None
        Already existing and identical (no update performed): None
        Not imported (due to errors or existing entities): None 
    2015-01-16 15:42:31,174  INFO - Found 4 Container Types in configuration package, validating now.
    2015-01-16 15:42:31,706  INFO - Summary of Container Types:
        Do not exist: None
        Exist and are identical:
            Container Type 'BioAnalyzer DNA 1000 Chip'
            Container Type 'BioAnalyzer DNA High Sensitivity Chip'
            Container Type 'BioAnalyzer RNA Nano Chip'
            Container Type 'BioAnalyzer RNA Pico Chip'
        Exist and are different (diffs in log): None 
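
The validate operation makes it possible to script a safe import: check the package against the server first, and import only if validation passes. A minimal sketch, assuming Config Slicer exits non-zero when validation fails (the server name, credentials, and package file are placeholders; with DRY_RUN set, which is the default here, the commands are printed instead of executed):

```shell
#!/bin/sh
# Validate a configuration package, then import it only if validation passes.
# Assumption (not confirmed by the docs): config-slicer exits non-zero on
# validation failure, so `set -e` aborts before the import step.
set -e
DRY_RUN=${DRY_RUN:-1}          # default to printing commands; unset to run
PKG=newconfiguration.xml       # placeholder package file

run() {
  if [ -n "$DRY_RUN" ]; then echo "would run: $*"; else "$@"; fi
}

run java -jar config-slicer-3.0.13.jar -o validate -k "$PKG" -s glsserver -u admin -p glspass
run java -jar config-slicer-3.0.13.jar -o import -k "$PKG" -s glsserver -u admin -p glspass
```

The `run` wrapper is only for illustration; in practice the two java invocations can be placed directly in a deployment script.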
  • SampleNamePerProjectUniqueConstraintStep: Apply this uniqueness constraint to enforce unique sample names within a project.
  • CleanupDuplicatedSampleNamesPerProjectStep: Prior to running the uniqueness constraint, use this optional cleanup script to clean up a database that already contains duplicate sample names.

  • Both scripts are available via the clarity-migrator.jar tool.

    hashtag
    Cleaning up the database

    If your database contains projects in which duplicate sample names exist, run the CleanupDuplicatedSampleNamesPerProjectStep script to clean up sample names that would violate the sample uniqueness constraint property. The cleanup script searches the database for sample names that are not unique and renames them.

    The cleanup script also renames the corresponding original submitted sample name, because there is a one-to-one correspondence between submitted sample and derived sample names in the LIMS interface.
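
The general idea, detecting repeated (project, sample name) pairs and renaming the later occurrences, can be sketched on toy data (the numeric suffix format is an assumption for illustration, not the script's actual rename scheme):

```shell
# Toy sketch of duplicate-name cleanup: for each (project, sample name) pair
# seen more than once, append a numeric suffix to the later occurrences.
# The suffix format is illustrative only.
printf '%s\n' \
  'ProjA Sample-1' \
  'ProjA Sample-1' \
  'ProjA Sample-2' \
  'ProjB Sample-1' |
awk '{
  key = $1 SUBSEP $2
  if (++seen[key] > 1) $2 = $2 "_" (seen[key] - 1)   # rename duplicates only
  print
}'
# Output:
#   ProjA Sample-1
#   ProjA Sample-1_1
#   ProjA Sample-2
#   ProjB Sample-1
```

Note that the first occurrence keeps its original name; only subsequent duplicates within the same project are renamed.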

    To clean up the database:

    1. As the glsjboss user, change to the clarity-migrator directory:

    2. Run the clarity-migrator.jar tool, providing the name of the cleanup step as a parameter:

    3. Confirm that the step runs and that no validation errors are reported.

    Once cleanup has been performed successfully, you can apply the sample uniqueness constraint.

    hashtag
    Enforcing sample uniqueness

    After you have cleaned up the database (if this step was required), you can apply the uniqueness constraint.

    Applying the sample uniqueness constraint results in a change at the LIMS database / schema level. Once you have applied this change, there is no script available to revert it.

    If you need to remove the uniqueness constraint, you will need to submit a request to the Illumina Support team.

    To enforce sample uniqueness:

    1. As the glsjboss user, change to the clarity-migrator directory:

    2. Run the clarity-migrator.jar tool, providing the name of the uniqueness constraint step as a parameter:

    3. Confirm that the step runs and that no validation errors are reported.

    hashtag
    Results

    After enforcing sample uniqueness, if a user attempts to accession or update sample names that already exist in the project (via the user interface or the API), an error message is displayed. The message describes the problem and advises the user to rename the duplicate samples.

    The following sections describe and illustrate what happens if an accessioned or updated sample name conflicts with an existing sample name within the same project.

    Postgres database

    If an accessioned or updated sample name conflicts with an existing sample name within the same project:

    Upload/modify via a sample sheet will result in the error shown below.

    The duplicate sample name appears in parentheses in the Detailed error message.

    The quoted string in the Detailed error is the database name of the violated constraint (uk_sample_name_per_project, the unique key on the sample table for name and project).

    Sample management accession/modify will result in the error shown below.

    If a user attempts to accession or modify a sample name under similar circumstances via an API operation, a similar error message is returned.

    # Cleanup procedure (steps 1 and 2)
    cd /opt/gls/clarity/tools/database/clarity-migrator
    java -jar clarity-migrator.jar CleanupDuplicatedSampleNamesPerProjectStep

    # Uniqueness constraint procedure (steps 1 and 2)
    cd /opt/gls/clarity/tools/database/clarity-migrator
    java -jar clarity-migrator.jar SampleNamePerProjectUniqueConstraintStep

    LDAP Integration

    If you use, or would like to use, an LDAP server to consolidate directory services, it is possible to integrate LDAP with Clarity LIMS.

    The Clarity LIMS LDAP solution allows for the following features:

    • User name and password authentication against LDAP to govern access to Clarity LIMS.

    • Ongoing unidirectional synchronization of user information (such as first name, last name, title, phone, fax, and email) from LDAP to Clarity LIMS. For example, if your telephone number is changed in the LDAP directory, the information is pushed down to Clarity LIMS, keeping contact information current.

    • Automated unidirectional provisioning of user accounts from LDAP to Clarity LIMS. For example, adding a user to a particular group within the LDAP directory automatically results in a local account with LDAP authentication being added to Clarity LIMS.

    hashtag
    Providing Information about your LDAP Implementation

    Our Field Application Specialist (FAS) team meets with you to discuss the current LDAP implementation. In preparation for this meeting, collect the following information:

    • The type of provisioning you would like to use to synchronize Clarity LIMS with LDAP (automatic or manual).

    • A list of the LDAP attributes the current system uses to record the following user properties: first name, last name, title, phone number, fax number, and email address.

    NOTE: When integrating Clarity LIMS with LDAP, the LIMS database and the LDAP directory remain as separate and distinct entities.

    hashtag
    Supported LDAP Servers

    Clarity LIMS is tested with the following LDAP servers:

    • ApacheDS 1.5 and later

    • Microsoft Active Directory (Windows Server 2003 or later)

    • OpenLDAP 2.3.35 and later

    hashtag
    Access and Changes

    While user provisioning and authentication are handled with LDAP, a Clarity LIMS system administrator completes the following steps:

    1. Determine the level of access that a user requires.

    2. Modify the user's account within the LIMS to provide that access.

    Once an LDAP integration with Clarity LIMS is established, all changes to user profiles must be made from the LDAP server.

    hashtag
    Provisioning Users

    Only automatic user provisioning is available.

    With automatic user provisioning, Clarity LIMS users are created automatically by a provisioning tool that periodically synchronizes the LDAP server with the LIMS.

    To make use of the LDAP directory services, Clarity LIMS maps to specific LDAP attributes within a defined schema.

    However, the directory structure used can vary among installations. Our Field Application Specialist (FAS) team works with you to complete the following items:

    • Analyze a specific LDAP solution and directory organization or assist with the selection and initial configuration of an LDAP service.

    • Discuss the user elements that will be synchronized between the LDAP service and Clarity LIMS systems.

    • Configure LDAP to connect to your Clarity LIMS systems.

    hashtag
    Caching User Authentication Results from your LDAP Server

    User authentication is handled in Clarity LIMS.

    In previous versions of Clarity LIMS, some customers reported slow REST API response times when authenticating LDAP users. As of Clarity LIMS v5.2.x / v4.3.x, REST API response time is improved by a feature that caches user authentication results, controlled by the api.session.timeout property.

    To make use of the new feature, do the following actions:

    • Make sure that the api.session.timeout property is set.

    • Include the HTTP Connection & Authorization request headers and session cookie in the HTTP request.

    hashtag
    Setting the api.session.timeout Property

    Stored in the Clarity LIMS database, the api.session.timeout property specifies how long a user's session persists after authentication.

    This property is set during installation or upgrade of the LIMS. The default value is 5 minutes. If necessary, update the value using the omxprops-ConfigTool.jar tool at the following location:

    For example:

    For this configuration to take effect, stop and restart Tomcat:

    hashtag
    Including the HTTP Authorization Request Header and Session Cookie

    To persist user authentication, the HTTP request must contain the following HTTP request headers:

    • Request Header

      • Connection: Keep-Alive

      • Authorization: Basic <credentials>

    The HTTP request headers are required for the initial request, and for any subsequent request to get a valid JSESSIONID. Additional scenarios are described in the following table.

    To make sure that a valid authenticated session is provided if the cookie in the request has expired, also provide the following JSESSIONID cookie:

    • Cookie

      • JSESSIONID=<a valid JSESSIONID from the initial request>

    The following table lists the various combinations of HTTP Authorization request header and JSESSIONID cookie and their expected result. It assumes that the HTTP Connection request header is provided for all scenarios.

    | Clarity LIMS version | Authorization | JSESSIONID | Expected Result |
    | --- | --- | --- | --- |
    |  | Absent | Present (Invalid) | Open API responds with HTTP Status 401 - Unauthorized. |
    |  | Absent | Absent | Open API responds with HTTP Status 401 - Unauthorized. |
    | v5.2.x and later, and v4.3.x | Present | Present (Valid) | Open API does not perform the user authentication and responds with requested resources. |
    | v5.2.x and later, and v4.3.x | Present | Present (Invalid) | Open API performs the user authentication, depending on whether the account is in the database or LDAP server, and responds with requested resources. |
    | v5.2.x and later, and v4.3.x | Absent | Present (Valid) | Open API does not perform the user authentication and responds with requested resources. |
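
The decision logic in the table can be modeled as a small function (a toy illustration of the expected behaviour, not LIMS code):

```shell
# Toy model of the table above: given whether the Authorization header is
# present and the state of the JSESSIONID cookie, print the expected
# behaviour of the API. Illustration only, not LIMS code.
session_outcome() {
  auth=$1; cookie=$2   # auth: present|absent; cookie: valid|invalid|absent
  if [ "$cookie" = valid ]; then
    echo "serve request without re-authenticating"
  elif [ "$auth" = present ]; then
    echo "authenticate (database or LDAP), then serve request"
  else
    echo "401 Unauthorized"
  fi
}

session_outcome absent invalid    # 401 Unauthorized
session_outcome present valid     # serve request without re-authenticating
session_outcome present invalid   # authenticate (database or LDAP), then serve request
```

The key point the table makes is that a valid JSESSIONID always short-circuits authentication, regardless of whether the Authorization header is present.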

    /opt/gls/clarity/tools/propertytool
    java -jar /opt/gls/clarity/tools/propertytool/omxprops-ConfigTool.jar set -y api.session.timeout '15'
    service clarity_tomcat stop
    service clarity_tomcat start