
Scripts Triggered Outside of Workflows/Steps

  • Repurposing a Process to Upload Indexes

  • Adding Users in Bulk

  • Moving Reagent Kits & Lots to New Clarity LIMS Server

  • Programmatically Importing the Sample Submission Excel File

  • Generating an MS Excel Sample Submission Spreadsheet

  • Assigning Samples to New Workflows

Repurposing a Process to Upload Indexes

Compatibility: API version 2

Uploading indexes (reagent types) into BaseSpace Clarity LIMS can be done in three different ways:

  1. Manually adding the indexes one at a time using the Operations interface.

  2. Uploading an XML data file using the config slicer.

  3. Using the API directly.

There is no "out of the box" method to quickly and easily upload a CSV file of many indexes to the LIMS. Adding the indexes one at a time is not quick, and using the config slicer requires admin privileges, knowledge of the correct XML file format, and the command line.

Solution

This example enables Clarity LIMS lab users to upload a CSV file containing new indexes and, through an EPP trigger, instantly create the indexes in Clarity LIMS. It provides the same functionality as the config slicer, including preventing the indexes from being created if an index with the same name already exists in the system, and ensuring that each sequence contains only a valid nucleotide sequence (composed of A, G, T, and C).

Protocol Step Configuration

In this example, a protocol step is repurposed as a mechanism to upload indexes. The protocol is configured to not produce any analyte outputs, and a single shared output file is used as the placeholder to upload the CSV file containing the indexes. The step is configured to permit use of a control sample, which allows the user to begin the step without any sample inputs.

Parameters

The script accepts the following parameters:

An example of the full syntax to invoke the script is as follows:

User Interaction

The lab scientist begins the step with only a control in the ice bucket. The user uploads the CSV file to the placeholder on the record details screen, and pushes the button to trigger the script.

About the Code

The main method in the script is importIndexes(). This method first calls the function downloadfile() which finds the location of the CSV file on the Clarity LIMS server and reads the content into memory.

The script then generates an XML payload for each index, checks the nucleotide sequence against the acceptable characters, and POSTs to the /api/v2/reagenttypes endpoint. If an index with the same name is already present in the LIMS, the log will be appended with the exception.

This example script produces a log which tracks which indexes were not added to the system as well as the reason. The log displays how many indexes were successfully added to the system and the total number of indexes included in the CSV file.
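The core of that per-index loop can be sketched as follows. This is a minimal illustration only: it assumes the requests library and a simplified reagent-type XML payload, and the names HOSTNAME, post_index, and log_lines are illustrative. The attached addindexescsv.py is the authoritative version.

# Minimal sketch of the per-index loop in importIndexes() -- illustrative only.
# Assumes the 'requests' library; the attached addindexescsv.py may use a
# different HTTP client and a more complete XML payload.
import requests

HOSTNAME = "https://your-clarity-server"   # assumption: set to your server
VALID_BASES = set("AGTC")

def post_index(name, sequence, username, password, log_lines):
    """Validate one index and POST it to the reagenttypes endpoint."""
    if not sequence or not set(sequence.upper()) <= VALID_BASES:
        log_lines.append("%s skipped: sequence is not composed of A, G, T and C" % name)
        return False
    payload = (
        '<rtp:reagent-type xmlns:rtp="http://genologics.com/ri/reagenttype" name="%s">'
        '<special-type name="Index">'
        '<attribute name="Sequence" value="%s"/>'
        '</special-type>'
        '</rtp:reagent-type>' % (name, sequence.upper())
    )
    r = requests.post(HOSTNAME + "/api/v2/reagenttypes", data=payload,
                      auth=(username, password),
                      headers={"Content-Type": "application/xml"})
    if r.status_code not in (200, 201):
        # e.g. an index with the same name already exists in the system
        log_lines.append("%s not added: %s" % (name, r.text))
        return False
    return True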

Assumptions and Notes

  • You are running a version of Python that is supported by Clarity LIMS, as documented in the Clarity LIMS Technical Requirements.

  • The attached files are placed on the LIMS server, in the /opt/gls/clarity/customextensions folder.

  • You will need to update the HOSTNAME global variable such that it points to your Clarity LIMS server.

Attachments

example_Indexes.csv:

addindexescsv.py:

The example code is provided for illustrative purposes only. It does not contain sufficient exception handling for use 'as is' in a production environment.

-f   The LUID of the file placeholder for the CSV file (Required)

-u   The username of the current user (Required)

-p   The password of the current user (Required)

-s   The URI of the step that launches the script - the {stepURI:v2} token (Required)

-l   The LUID of the file placeholder for the log file (Required)

/usr/bin/python /opt/gls/clarity/customextensions/addindexescsv.py -u {username} -p {password} -s {stepURI:v2} -f {compoundOutputFileLuid0} -l {compoundOutputFileLuid1}

Programmatically Importing the Sample Submission Excel File

Compatibility: API version 2

The Clarity LIMS web interface provides an example Sample Import Excel file which can be manually uploaded to submit new samples to a selected project within the LIMS.

This application example shows how the Sample Import Sheet can be uploaded programmatically with just one change to its format.

Solution

This example uses the same Sample Sheet with an additional column 'Project/Name'. The script processes the Sample Sheet, creates the samples and adds the samples to their respective projects.

This script leverages the Python module xlrd, which is not included in the standard Python library. It is used to extract data from .xls and .xlsx Excel files.

Parameters

The script accepts the following parameters:

An example of the full syntax to invoke the script is as follows:

About the code

parseFile

This method carries out several operations (sketched in the example below):

  • Opens the Excel file and reads its contents

  • Stores the column headers in a dictionary variable called COLS

  • Stores the row data accessibly in an array variable called ROWS
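A minimal sketch of this parsing step follows, assuming the xlrd library and a single-sheet workbook; parse_file and the sheet index are illustrative names, and the attached SampleSheetImporter.py is the working version.

# Illustrative sketch of parseFile(): read the workbook with xlrd and build
# the COLS and ROWS structures described above.
import xlrd

def parse_file(path):
    book = xlrd.open_workbook(path)
    sheet = book.sheet_by_index(0)              # assumption: data on the first sheet
    header = sheet.row_values(0)                # e.g. Sample/Name, Project/Name, UDFs...
    cols = {name: idx for idx, name in enumerate(header)}        # -> COLS
    rows = [sheet.row_values(r) for r in range(1, sheet.nrows)]  # -> ROWS
    return cols, rows

COLS, ROWS = parse_file("ClaritySampleSheetprojects.xlsx")
project_index = COLS["Project/Name"]            # the column added for this example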

createProject

This method in turn carries out several operations:

  • For each project name the script encounters, it searches the LIMS to identify whether a project with this name has already been created.

  • If the project does not exist, the script will create the project. This example script is easily modifiable; however, as written, it applies default values when creating the project.

processRows

This method prepares the data needed to create a sample in LIMS:

  • Assembles the UDF values, the project ID and container ID.

  • For each non-tube container the script encounters, it searches the LIMS to identify whether a container with this name already exists.

  • If the container does not exist, the script will create the container.

The script contains additional supporting methods to generate the XML that is POSTed to the API.
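The project and sample creation calls can be sketched as follows. This is a simplified illustration, assuming the requests library; the XML payloads, element order, and helper names (get_or_create_project, create_sample, API, AUTH) are assumptions rather than the attached script's actual code.

# Illustrative sketch of the createProject() / processRows() ideas: check
# whether a project exists, create it if not, then create a sample in it.
import requests
from xml.etree import ElementTree as ET

API = "https://your-clarity-server/api/v2"   # assumption: your API base URI
AUTH = ("apiuser", "password")               # assumption: your credentials

def get_or_create_project(name, researcher_uri):
    r = requests.get(API + "/projects", params={"name": name}, auth=AUTH)
    for child in ET.fromstring(r.content):
        if child.tag.endswith("project") and child.get("uri"):
            return child.get("uri")          # a project with this name exists
    payload = ('<prj:project xmlns:prj="http://genologics.com/ri/project">'
               '<name>%s</name>'
               '<researcher uri="%s"/>'      # e.g. the System Administrator
               '</prj:project>' % (name, researcher_uri))
    r = requests.post(API + "/projects", data=payload, auth=AUTH,
                      headers={"Content-Type": "application/xml"})
    return ET.fromstring(r.content).get("uri")

def create_sample(name, project_uri, container_uri, well="1:1"):
    payload = ('<smp:samplecreation xmlns:smp="http://genologics.com/ri/sample">'
               '<name>%s</name>'
               '<project uri="%s"/>'
               '<location><container uri="%s"/><value>%s</value></location>'
               '</smp:samplecreation>' % (name, project_uri, container_uri, well))
    return requests.post(API + "/samples", data=payload, auth=AUTH,
                         headers={"Content-Type": "application/xml"})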

Assumptions & notes

  • The UDFs in the Sample Sheet header have been configured in LIMS prior to submission.

  • The following column headers are required: Sample/Name, Project/Name and any sample-level UDFs that are mandatory within your system.

  • The script need not be run on the Clarity server; however, it must have a connection to the Clarity LIMS API.

Attachments

_auth_tokens.py:

ClaritySampleSheetprojects.xlsx:

SampleSheetImporter.py:

Assigning Samples to New Workflows

It is sometimes necessary to assign a sample to a new workflow from within another workflow.

You can do this in the BaseSpace Clarity LIMS Operations Interface by manually adding the sample to the desired workflow. However, it is also possible to perform this action through the API.

Solution

This example shows how to use the API to automate the addition of samples to a specified workflow, based on a UDF value.

  • The script can be run from any protocol step whose underlying process is configured with Analyte inputs and a single per-input ResultFile output.

  • The process may have any number of per-all-input ResultFile outputs, as they will be ignored by the script.

  • A Result File UDF, named Validate, will control which samples will be added to the specified workflow.

Parameters

The script accepts the following parameters:

-i   The limsid of the process invoking the script (Required). The {processLuid} token.

Step 1: Create Result File UDF

Before the example script can be used, first create the ResultFile's Validate UDF in the Operations Interface. This is a single-line text UDF with preset values of 'Yes' and 'No'.

Step 2: Create and Configure Process Type

Also in the Operations Interface, create a new process type named Cookbook Workflow Addition.

This process type must:

  • Have Analyte inputs.

  • Have a single per-input ResultFile output.

  • Apply the Validate UDF on its ResultFile outputs.

Step 3: Configure an EPP call on this process type as follows:

Step 4: Modify the file paths to suit your server's Groovy installation.

Step 5: Create Protocol

Once the process type is created, in the Clarity LIMS Web Interface, create a protocol named Cookbook Workflow Addition Protocol.

This protocol should contain one protocol step - Cookbook Workflow Addition.

Step 6: Configure EPP

Configure the EPP script to automatically initiate at the end of the Cookbook Workflow Addition step:

Step 7: Create workflows

To finish configuration, create two workflows:

  • Destination Workflow: This workflow should contain the DNA Initial QC protocol only.

  • Sending Workflow: This workflow should contain the new Cookbook Workflow Addition Protocol.

About the code

Once the script has processed the input parameters and ensured that all the required information is available, we can start processing the samples to determine if they should be assigned to the new workflow.

  1. To begin, we retrieve the process from the API. This gives us access to the input-output maps of the process. These will be used to determine which ResultFiles we will examine.

  2. Next, we retrieve the protocol step action list. This contains a list of the input analytes' URIs and their next steps.

  3. We then search this list and collect all analyte URIs whose next action has been set to Mark as protocol complete.

User Interaction

  1. Assuming samples have been placed in the Sending Workflow, the user proceeds as normal through the protocol step.

  2. In the Record Details screen, the user enters Validate values in the ResultFile UDFs.

  3. The user then proceeds to the Assign Next Steps screen, provides a variety of Next Steps, and completes the protocol step.

Assumptions and Notes

  • The attached file is placed on the Clarity LIMS server, in the /opt/gls/clarity/customextensions folder.

  • GLSRestApiUtils.groovy is placed in the Groovy lib folder.

  • The required configuration has been set up, as described in Configuration.

Attachments

SwitchingWorkflows.groovy:

Generating an MS Excel Sample Submission Spreadsheet

An easy way to get sample data into BaseSpace Clarity LIMS is to import a Microsoft® Excel® sample spreadsheet. However, manually configuring the spreadsheet can be time-consuming and error-prone.

Use this Python application example to generate a custom Excel sample submission spreadsheet that meets the specific needs of your lab.

Solution

  • Project researcher is System Administrator.

  • Project open date is today.

  • No project level UDFs are created.

  • If container type is not specified, TUBE will be assumed.

  • For TUBE, well location will always be 1:1.

  • You are using Python version 2.6 or 2.7.

  • The Python installation contains the non-standard xlrd library.

  • The _auth_tokens.py file has been updated to include the information for your Clarity installation.

  • The example code is provided for illustrative purposes only. It does not contain sufficient exception handling for use 'as is' in a production environment.

-f   The full path to the location of the Excel file (Required)

-g   The full path to the location of the log file (Optional)

    Parameters

    The script accepts the following parameters:

-a   The base URL to the REST API - no trailing slash (Required). The URL that points to your main API endpoint.

-u   The username of the admin user (Required). The {username} token.

-p   The password of the admin user (Required). The {password} token.

    The SampleSubmissionXLSGenerator.py application script is run as follows:

    An XLS workbook containing the following worksheets is generated:

    • Sample Import Sheet: Contains columns for all Sample UDFs/UDTs.

    • Container Type Names: Contains a list of all container type names.

    User interaction

    1. In the Clarity LIMS Operations Interface, the lab manager configures Container Types and Sample UDFs/UDTs.

    2. In the Clarity LIMS Web Interface, the lab manager or lab scientist runs the SampleSubmissionXLSGenerator.py application script, providing the required parameters.

    3. An Excel workbook containing the Sample Import Sheet and Container Type Names worksheets is generated.

    4. The Sample Import Sheet is provided to lab scientists for use when creating lists of samples to import; it may also be used by collaborators to import samples via the LabLink Collaborations Interface.

    Using the generated spreadsheet

    1. The spreadsheet will contain red and green column headers. Populate the spreadsheet with sample data:

      • Red headers: These columns must contain data.

      • Green headers: These columns may be left empty.

    2. Import the spreadsheet into Clarity LIMS.


    If there are no sample user-defined fields (UDFs) or user-defined types (UDTs) in the system, the generated spreadsheet will only contain four columns. After configuring the UDFs/UDTs, you can re-run the script to add columns to the spreadsheet that reflect the updated configuration.

    Example modifications

    You can edit the Python application to include supplementary functions. For example, you may want to use other attributes from the resulting XML to generate additional data entry columns.

    Inserting additional non-required & non-configured columns

    • The SampleSubmissionXLSGenerator.py Python application script adds Sample/Volume and Sample/Concentration columns to the spreadsheet.

    • The script includes a 'commented' section. You can remove the ### NON-REQUIRED COLUMN MODIFICATION ### comments and use this section to add your own columns, as sketched in the example below.
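For orientation, the kind of API call involved can be sketched as follows. This is illustrative only and is not the attached script's code: the library choice (requests), the helper name sample_udf_columns, and the exact element and attribute names in the response are assumptions.

# Illustrative sketch: list the configured Sample UDFs from the API and turn
# them into spreadsheet column headers, which is conceptually what the
# generator does before writing the workbook.
import requests
from xml.etree import ElementTree as ET

API = "http://your-clarity-server:8080/api/v2"   # the -a parameter (assumption)
AUTH = ("admin_user", "admin_user_pass")         # the -u / -p parameters (assumption)

def sample_udf_columns():
    """Return column headers such as 'Sample/Volume' for all Sample-level UDFs."""
    r = requests.get(API + "/configuration/udfs",
                     params={"attach-to-name": "Sample"}, auth=AUTH)
    columns = []
    for entry in ET.fromstring(r.content):
        name = entry.get("name")
        if name:                                 # skip pagination links etc.
            columns.append("Sample/" + name)
    return columns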

    Assumptions & notes

    • You have downloaded the attached zip file to the server. On a non-production server use the glsai user account.

    • Unzip the file to the following directory: /opt/gls/clarity/customextensions. The contents of the zip file will be installed within that directory, to /CookBook/SpreadSheetGenerator/

    • Python 2.7 is installed and configured on the system path.

    • You can run the SampleSubmissionXLSGenerator.py Python application script on any system running Python, provided it has web access to the Clarity LIMS REST API.

    • This script is not compatible with the 3.x branch of Python.

    • The example code is provided for illustrative purposes only. It does not contain sufficient exception handling for use 'as is' in a production environment.

    • All dependencies are preloaded if you install on a non-production server.

    Attachments

    python-xls-generator-2.0-bundle.zip:

    python /path/to/file/SampleSheetImporter.py -f /Users/mywd/ClaritySampleSheetprojects.xlsx -g /Users/logs/samplesubmission.log
    python SampleSubmissionXLSGenerator.py -a http://<hostname or IP address of server>:<port>/api/v2 -u admin_user -p admin_user_pass

  • Next, we gather the per-input ResultFile input-output maps and collect the ResultFile URIs of those related to the analytes that have been marked as complete. NOTE: It is important that we strip any extra state information from the URIs. The URIs found in the next action list do not contain any state information, so comparing them against a non-stripped URI will return 'false'.

  • Once we have the ResultFile URIs, we can retrieve them with batchGET. It is important that the list contains unique URIs, as the batchGET will fail otherwise.

  • After we have retrieved the ResultFiles, we can iterate through the list, adding the parent sample's URI to our list of sample URIs if the ResultFile's Validate UDF is set to Yes. We also increment a counter which will allow us to report to the user how many samples were assigned to the new workflow.

  • Since we don't assign samples themselves to workflows, we first need to retrieve the samples' derived artifacts. We can do this by iterating through each sample URI, retrieving it, and adding its artifact's URI to a list.

  • Before we can add the artifacts to the workflow, we need to determine the destination workflow's URI. By retrieving a list of all the workflows in the system, we can find the one that matches our input workflow name.

  • Assigning artifacts to workflows requires the posting of a routing command to the routing endpoint.

    • We first generate the required XML by using a Streaming Markup Builder.

    • We then dynamically build our XML by looping inside the markup declaration, creating a new routing assignment using the Markup Builder.

  • To create our routing command, we pass the workflow URI and the artifact URIs that we wish to assign to the workflow to a method containing the above code. This will generate the required node.

  • We then perform an httpPOST to the routing endpoint to perform the action.

  • Finally, we define our success message to the user. This will allow us to inform the user of the results of the script.

  • A message displays, alerting the user of the execution of a custom script.

  • When the script completes, a success message displays and the samples are added to the specified workflow.

  • The example code is provided for illustrative purposes only. It does not contain sufficient exception handling for use 'as is' in a production environment.

-s   The URI of the step that launches the script (Required). The {stepURI:v2:http} token (in the form http://<YourIP>/api/v2/steps/<ProtocolStepLimsid>).

-u   The username of the current user (Required). The {username} token.

-p   The password of the current user (Required). The {password} token.

-w   The name of the destination workflow (Required). The {workflow} token.

    Moving Reagent Kits & Lots to New Clarity LIMS Server

    The config slicer is a useful tool for moving configuration data from one server to another; however, strictly speaking, reagent lot data is not considered configuration. There is still a need for an easy way to transfer reagent lot data to a new Clarity LIMS server.

    Solution

    The preventative solution is to not create production reagent kits on the development server in the first place. However, if you are reading this, it may already be too late for that.

    This example demonstrates how to use the Clarity LIMS API to extract reagent kit and reagent lot data from one server, transfer the data as an XML file and create the equivalent reagent kits and/or lots on the new server.

    // Gather the PerInput ResultFile mappings
    def inputOutputMaps = process.'input-output-map'.findAll { it.'output'[0].@'output-generation-type' == PER_INPUT }
             
    // Determine which ResultFiles to examine and retrieve them
    def resultFilesURIs = inputOutputMaps.findAll { completedAnalyteURIs.contains(GLSRestApiUtils.stripQuery(it.'input'[0].@uri)) }.collect { it.'output'[0].@uri }.unique()
    def resultFiles = GLSRestApiUtils.batchGET(resultFilesURIs, username, password)
    // Determine which artifacts should be assigned
    List sampleURIs = []
    resultFiles.each {
        if(it.'udf:field'.find { VALIDATION_UDF == it.@name }.value()[0] == YES) {
            sampleURIs.add(it.'sample'[0].@uri)
            switched++
        }
    }
    // Gather the sample artifacts URIs
    List artifactsToAssignURIs = []
    sampleURIs.each {
        Node sample = GLSRestApiUtils.httpGET(it, username, password)
        artifactsToAssignURIs.add(sample.'artifact'[0].@uri)
    }
    // Retrieve workflow URI
    def workflowList = GLSRestApiUtils.httpGET(baseURI + '/configuration/workflows', username, password)
    String workflowURI = workflowList.find { it.@name == workflow }.@uri
    
    def assignmentOrder = builder.bind {
        mkp.xmlDeclaration()
        mkp.declareNamespace(rt: 'http://genologics.com/ri/routing')
        'rt:routing' {
            if(artifactURIsToNewWorkflow.size() != 0) {
                'assign'('workflow-uri': workflowURI) {
                    artifactURIsToNewWorkflow.each {
                        'artifact'(uri: it)
                    }
                }
            }
        }
    }
     
    return GLSRestApiUtils.xmlStringToNode(assignmentOrder.toString())
    // Create and post the assignment
    Node assignmentNode = createAssignmentNode(workflowURI, artifactsToAssignURIs)
    GLSRestApiUtils.httpPOST(assignmentNode, baseURI + '/route/artifacts/', username, password)  
    // Define the success message to the user
    outputMessage = "Script has completed successfully.${LINE_TERMINATOR}" +
        "${switched} samples were assigned to the '${workflow}' workflow.${LINE_TERMINATOR}" +
        "You can find them queued in the 'DNA Initial QC' protocol."
    bash -c "/opt/gls/groovy/current/bin/groovy -cp /opt/groovy/lib /opt/gls/clarity/customextensions/SwitchingWorkflows.groovy -u {username} -p {password} -s {stepURI:v2:http} -i {processURI:v2:http} -w 'Destination Workflow'"
    // Retrieve the process
    def process = GLSRestApiUtils.httpGET(processURI, username, password)
             
    // Retrieve the analytes which have been set to complete
    def actionsList = GLSRestApiUtils.httpGET(stepURI + '/actions', username, password)
    def completedAnalyteURIs = actionsList.'next-actions'.'next-action'.findAll { it.@action == COMPLETE }.collect { it.@'artifact-uri' }

    This example contains two scripts that can be run from the command line: the first exports data from the dev server and generates a txt file; the second uses the exported data to create the kits and lots on the prod server.

    Export

    The first script accepts the following parameters:

-s   The hostname of the Clarity server API and version from which to export (Required)

-u   The username of the current user (Required)

-p   The password of the current user (Required)

-d   The directory to write the txt file to (Optional; by default, the file is written to the current working directory)

--skipLots   The script will not export reagent lot data (Optional)

--skipKits   The script will not export reagent kit data (Optional)

-k   A list of reagent kits to limit the export to; only reagent lots for these kits will be exported (Optional; by default, all reagent kits are exported)

    An example of the full syntax to invoke the first script is as follows:

    Import

    The second script accepts the following parameters:

-s   The hostname of the Clarity server API and version to import into (Required)

-u   The username of the current user (Required)

-p   The password of the current user (Required)

-d   The directory to read the txt file from (Optional; by default, the file is expected in the current working directory)

--checkforKits   The script will check the Clarity server for existing kits of the same name (Optional, recommended). If this parameter is used, the script will not create duplicate reagent kits.

    An example of the full syntax to invoke the second script is as follows:

    About the Code

    reagents_export.py

    The main method in the script searches the reagent kits and reagent lots endpoints for all the kits and lots or, if the -k parameter is used, only for the kits and lots belonging to the kits specified with that parameter.

    The script writes the XML data for each kit and lot to the XML data file.
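A minimal sketch of that export loop follows, assuming the requests library; export_resources and the output format are illustrative, pagination beyond the first page is omitted, and the attached reagents_export.py is the working version.

# Illustrative sketch only: walk the reagentkits and reagentlots list
# endpoints and append each resource's XML to the data file.
import requests
from xml.etree import ElementTree as ET

API = "https://demo-4-1.claritylims.com/api/v2"   # the -s parameter
AUTH = ("apiuser", "password")                    # the -u / -p parameters

def export_resources(endpoint, outfile):
    listing = requests.get(API + "/" + endpoint, auth=AUTH)
    for child in ET.fromstring(listing.content):
        if child.tag.endswith("page"):            # skip previous-page / next-page links
            continue
        uri = child.get("uri")
        if uri:
            outfile.write(requests.get(uri, auth=AUTH).text + "\n")

with open("reagent_data.txt", "w") as fh:
    export_resources("reagentkits", fh)
    export_resources("reagentlots", fh)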

    reagents_import.py

    The main method of this script creates the reagent kits and reagent lots in Clarity LIMS. In the case of duplicate reagent kits, the reagent lots are associated with the newly created kits. If the --checkforKits parameter is included, the script does not create kits with duplicate names and instead associates the reagent lots with the preexisting kits of the same name.
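The import side can be sketched in the same spirit. This is not the attached script's code: it assumes the requests library, that the data file holds one exported XML document per line, and that the reagentkits list endpoint accepts a name filter; find_kit_by_name and create_resource are illustrative names.

# Illustrative sketch only: before creating a kit, optionally check whether a
# kit with the same name already exists (the --checkforKits behaviour).
import requests
from xml.etree import ElementTree as ET

API = "https://demo-4-1.claritylims.com/api/v2"   # the -s parameter
AUTH = ("apiuser", "password")                    # the -u / -p parameters

def find_kit_by_name(name):
    """Return the URI of an existing reagent kit with this name, or None."""
    r = requests.get(API + "/reagentkits", params={"name": name}, auth=AUTH)
    for child in ET.fromstring(r.content):
        if not child.tag.endswith("page") and child.get("uri"):
            return child.get("uri")
    return None

def create_resource(endpoint, xml_string):
    """POST one exported XML document back to the target server."""
    r = requests.post(API + "/" + endpoint, data=xml_string, auth=AUTH,
                      headers={"Content-Type": "application/xml"})
    return ET.fromstring(r.content).get("uri")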

    Assumptions and Notes

    • You are running a version of Python that is supported by Clarity LIMS, as documented in the Clarity LIMS Technical Requirements.

    • The example code is provided for illustrative purposes only. It does not contain sufficient exception handling for use 'as is' in a production environment.

    • Script Updates

      Jan 2018: reagents_export_jan2018.py and reagents_import_jan2018.py

      • Script updated to enable import/export of reagent kits with special characters.

      Aug 2018: reagents_import_Aug2018.py

      • Script checks for preexisting reagent lots.

      • --checkforkits also prevents duplicate reagent lots: a lot is considered to already exist if its kit, name, and number match an existing lot, even if its status does not match.

    Attachments

    reagents_import.py:

    reagents_export.py:

    reagents_import_jan2018.py:

    reagents_export_jan2018.py:

    reagents_import_Aug2018.py:

    python reagents_export.py -s https://demo-4-1.claritylims.com/api/v2 -u apiuser -p ***** -d /Users/dcrawford/Documents --skipKits -k "AMPure XP Beads, Covaris Snap-cap microTUBE" 
    python reagents_import.py -s https://demo-4-1.claritylims.com/api/v2 -u apiuser -p ***** -d /Users/dcrawford/Documents --checkforKits 

    Adding Users in Bulk

    When Clarity LIMS is replacing an existing LIMS, or if the LDAP integration is not being utilized, a list of current LIMS users and collaborators often already exists in a file.

    In this scenario, it is useful to use this file to create users in Clarity LIMS. This topic provides an example script and outlines a strategy for the bulk import of users from a simple CSV file.

    Solution

    The attached script parses a CSV file containing user information. Based on the columns of data present in the file, the script then creates users and their associated labs in Clarity LIMS.

    Parameters

    Since this operation is independent of workflow and sample processing, the parameters used to invoke the script differ from those typically used:

    An example of the full syntax to invoke the script is as follows:

    File format

    The format of the file is very flexible. The order of the columns is not relevant.

    If the names of your columns do not match the column names in the example file, modify the script so that the column names match yours.

    Attached to this topic, you'll find a CSV file containing test data that illustrates the format further. The structure of this file is shown below.

    Notice that the first two rows do not contain data of interest, and so the script ignores them. This is controlled by line 38 of the script, which specifies the location of the header row:

    About the Code

    The main method of interest is importData(). After parsing the file into data structures (COLS and DATA), the data is processed one line at a time using the following pseudocode:

    To reduce the number of queries the script makes back to the server, each time a new lab is created it is added to the cache of existing labs that is stored in the LABS dictionary.

    Finally, if you set the DEBUG variable to true, the script will stop processing the file after the first successful creation of a user. This is useful as it allows you to test your script using just one line of the CSV file at a time.
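The creation calls can be sketched as follows. This is a simplified illustration only, assuming the requests library; the researcher and lab XML shown here is abbreviated and the helper names (get_or_create_lab, create_user) are assumptions, so refer to the attached userImport.py for the working version.

# Illustrative sketch of the importData() flow: cache labs, create a lab when
# needed, then POST the researcher (user). Role/permission setup is omitted.
import requests
from xml.etree import ElementTree as ET

BASE = "http://192.168.8.10:8080"     # the -h parameter
AUTH = ("admin", "securepassword")    # the -u / -p parameters
LABS = {}                             # cache of lab name -> URI

def get_or_create_lab(name):
    if name in LABS:
        return LABS[name]
    r = requests.get(BASE + "/api/v2/labs", params={"name": name}, auth=AUTH)
    for child in ET.fromstring(r.content):
        if not child.tag.endswith("page") and child.get("uri"):
            LABS[name] = child.get("uri")         # lab already exists
            return LABS[name]
    payload = ('<lab:lab xmlns:lab="http://genologics.com/ri/lab">'
               '<name>%s</name></lab:lab>' % name)
    r = requests.post(BASE + "/api/v2/labs", data=payload, auth=AUTH,
                      headers={"Content-Type": "application/xml"})
    LABS[name] = ET.fromstring(r.content).get("uri")
    return LABS[name]

def create_user(username, first, last, email, lab_name):
    payload = ('<res:researcher xmlns:res="http://genologics.com/ri/researcher">'
               '<first-name>%s</first-name><last-name>%s</last-name>'
               '<email>%s</email>' % (first, last, email))
    if lab_name:
        payload += '<lab uri="%s"/>' % get_or_create_lab(lab_name)
    # Every user gets the same hard-coded password, as noted in
    # Assumptions and Notes below.
    payload += ('<credentials><username>%s</username>'
                '<password>abcd1234</password></credentials>'
                '</res:researcher>' % username)
    return requests.post(BASE + "/api/v2/researchers", data=payload, auth=AUTH,
                         headers={"Content-Type": "application/xml"})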

    Assumptions and Notes

    • Both of the attached *.py script files are placed on the Clarity LIMS server, in the /opt/gls/clarity/customextensions folder.

    • The users will be created with the same hard-coded password (abcd1234). It is possible to have the script create a unique password for each user. If this is required, consider having the script e-mail the user with a 'welcome' e-mail outlining their username and password.

    • The users will be created with permissions to access the Collaborations Interface only. This can be modified as required.

    Attachments

    user_List_Test.csv:

    userImport.py:

    The contents of the 'Institution' column will be used to associate the user with a 'Lab', if a value for this column is provided.

  • The example code provided is for illustrative purposes only. It does not contain sufficient exception handling for use 'as is' in a production environment.

-h   The URI of the server to which you want to upload the users (Required)

-u   The username of a user with administrative privileges (Required)

-p   The password of the user (Required)

-f   The path to the CSV file containing the user-related data (Required)

    /usr/bin/python /opt/gls/clarity/customextensions/userImport.py -h http://192.168.8.10:8080 -u admin -p securepassword -f ./user_list.csv
uNameIndex = COLS["User"]
fnameIndex = COLS["First Name"]
lNameIndex = COLS["Last Name"]
eMailIndex = COLS["E-mail"]
labIndex = COLS["Institution"]
headerRow = 3
does a user with this username already exist?
if NO:
    does the lab exist?
    if NO:
        createLab
    createUser