
Creating Template Files

Available from: BaseSpace Clarity LIMS v4.2.x

You can create template files that the Template File Generator script (driver_file_generator) will use to generate custom files for use in your lab.

This article provides details on the following:

  • The parameters used by the script.

  • The sections of the template file—these define what is output in the generated file.

  • Sorting logic—options for sorting the data in the generated file.

  • Rules and constraints to keep in mind when creating templates and generating files.

  • Examples of how you can use specific tokens and metadata in your template files.

For a complete list of the metadata elements and tokens that you can include in a template file, see the Template File Contents article.

Script Parameters

Upgrade Note: process vs step URIs

  • The driver_file_generator script now uses steps instead of processes for fetching information. When a process URI is supplied, the script detects it and automatically switches it to a step URI. (The PROCESS.TECHNICIAN token, which is only available on 'process' in the API, is still supported.)

  • The behavior of the script has not changed, except that the long form -processURI parameter must be replaced by -stepURI in configuration. The -i version of this parameter remains supported and now accepts both process and step URI values.

  • If your configuration is using -processURI or --processURI, replace each instance with -i (or -stepURI/--stepURI), as shown in the sketch below.
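For example, the migration looks as follows (a sketch showing only the affected parameter; the remaining arguments are elided, and the paths follow the examples later in this article):

Before (legacy configuration):

bash -l -c "/opt/gls/clarity/bin/java -jar /opt/gls/clarity/extensions/ngs-common/v5/EPP/DriverFileGenerator.jar script:driver_file_generator -processURI {processURI:v2} -u {username} -p {password} ..."

After (updated configuration):

bash -l -c "/opt/gls/clarity/bin/java -jar /opt/gls/clarity/extensions/ngs-common/v5/EPP/DriverFileGenerator.jar script:driver_file_generator -i {stepURI:v2} -u {username} -p {password} ..."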

The following parameters are used by the driver_file_generator script.

-i {stepURI:v2} | -stepURI {stepURI:v2} (Step URI)
(Required) LIMS step URI. Provides context to resolve all token values. See the Upgrade Note above.

-u {username} | -username {username} (Username)
(Required) LIMS login username.

-p {password} | -password {password} (Password)
(Required) LIMS login password.

-t <templateFile> | -templatePath <templateFile> (Template file)
(Required) Template file path.

-o <outputFile> | -outputPath <outputFile> (Output file)
(Required) Output file path. If the folder structure specified in the path does not exist, it is created. Note the following:

  • This output file parameter value is overwritten by OUTPUT.FILE.NAME.

  • To output multiple files, use GROUP.FILES.BY.INPUT.CONTAINERS and GROUP.FILES.BY.OUTPUT.CONTAINERS.

  • Files generated are in CSV format by default. Other value-separated formats are available—see OUTPUT.SEPARATOR.

Details on these metadata elements are provided in the Metadata section of the Template File Contents article.

-l <logFile> | -logFileName <logFile> (Log file)
(Required) Log file name.

-q [true|false] | -quickAttach [true|false] (Quick Attach)
Default is 'false'. Provide as 'true' to attach the file on script completion. To attach the file manually or with AI/Automation Worker instead, name it starting with the placeholder LIMSID. If multiple files are generated, they are zipped into one archive. The main use cases are:

  • Multiple files are generated (see GROUP.FILES.BY) and must be attached to the LIMS (in addition to, or in place of, writing them to disk).

  • When chaining multiple scripts together, this ensures that the file has already been attached before the next script runs.

-destLIMSID <LIMSID> (Destination LIMS ID)
LIMSID of the output to which the generated file is attached. Use with quickAttach. See the Renaming Generated Files and Generating Multiple Files examples.

Command-line example:

bash -l -c "opt/gls/clarity/bin/java -jar /opt/gls/clarity/extensions/ngs-common/v5/EPP/DriverFileGenerator.jar script:driver_file_generator -i {stepURI:v2} -u {username} -p {password} -t /opt/gls/clarity/customextensions/InfiniumHT/driverfiletemplates/NextSeq.csv-o {compoundOutputFileLuid0}.csv -l {compoundOutputFileLuid1}"

Command-line example using -quickAttach and -destLIMSID:

bash -l -c "/opt/gls/clarity/bin/java -cp /opt/gls/clarity/extensions/ngs-common/v5/EPP/DriverFileGenerator.jar script:driver_file_generator -i {stepURI:v2} -u {username} -p {password} -t /opt/gls/clarity/customextensions/Robot.csv -quickAttach true -destLIMSID {compoundOutputFileLuid0} -o extended_driver_x384.csv -l {compoundOutputFileLuid2}"

Data Source

The input-output-maps of the step (defined by the -stepURI parameter) are used as the data source for the content of the generated file.

If they are present, input-output-maps with the attribute output-generation-type=PerInput are used. Otherwise, all input-output-map items are used.

The output generation type specifies how the step outputs were generated in relation to the inputs. PerInput entries are available for the following step types: Standard, Standard QC, Add Labels, and Analysis.

By default, the data source entries are sorted alphanumerically by LIMS ID. You can modify the sort order by using the SORT.BY and SORT.VERTICAL metadata elements (see the Sorting Logic section of this article).
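For illustration, a PerInput entry in the step's input-output-map looks roughly as follows (a simplified sketch based on the Clarity LIMS API; the LIMSIDs and URIs are invented, and additional attributes may be present):

<input-output-map>
  <input limsid="2-1001" uri="https://yourserver/api/v2/artifacts/2-1001"/>
  <output limsid="92-2002" output-type="ResultFile" output-generation-type="PerInput" uri="https://yourserver/api/v2/artifacts/92-2002"/>
</input-output-map>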

Template Sections

The content of the generated file is determined by the sections defined in the template. Content for each section is contained within xml-like opening and closing tags that are structured as follows:

<SECTION>
    section content
</SECTION>

Most template files follow the same basic structure and include some or all the following sections (by convention, section names are written in capital letters, but this is not required):

<HEADER_BLOCK>
<HEADER>
<DATA>
<FOOTER>

The order of the section blocks in the template does not affect the output. In the output file, blocks will always be in the order shown.

The <PLACEMENT> and <TOKEN_FORMAT> sections are not part of this list and do not create distinct sections in the generated file. Instead, they alter the formatting of the generated output.

The area outside of the sections can contain metadata elements (see the Metadata section of this article). Anything else outside of the section tags is ignored.

HEADER_BLOCK

The header block section may include both plain text and data from the LIMS. It consists of information that does not appear multiple times in the generated file—ie, the information is not included in the data rows (see the DATA section).

Only a subset of the tokens is available for use in the header block section. For details, see the Tokens table in the Template File Contents article. If an unsupported token is included, file generation will complete with a warning message and a warning will appear in the log file.

Tokens in the header block always resolve in the context of the first input and first output available. For example, suppose the INPUT.CONTAINER.TYPE token is used in the header block:

  • If there is only one type of input container present in the data source, that container type will be present in the output file.

  • If multiple input container types are present in the data source, only the first one encountered while processing the data will be present in the output file.

For this reason, we recommend against using tokens that will resolve to different values for different samples, such as SAMPLE.NAME. If one of these tokens is encountered, a warning is logged and the first value retrieved from the API is used. (Note that you may use .ALL tokens, where available.)

HIDE feature: If one of the tokens of a line is empty and is part of a HIDE statement, that line is removed entirely. See the Using HIDE to Exclude Empty Columns and Using HIDE to Exclude Empty HEADER rows examples.

To include a header block section in a template, enclose it within the <HEADER_BLOCK> and </HEADER_BLOCK> tags.
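A minimal header block might look as follows (a sketch; both tokens appear in the GenomeStudio example later in this article, and both resolve to a single value for the step):

<HEADER_BLOCK>
[Header]
Investigator Name, ${PROCESS.TECHNICIAN}
Date, ${DATE}
</HEADER_BLOCK>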

HEADER

The header section describes the header line of the data section (see the DATA section). A simple example might be "Sample ID, Placement". The content of this section can only include plain text and is output as is. Tokens are not supported.

To include a header section in a template, enclose it within the <HEADER> and </HEADER> tags.

HIDE feature: See the HIDE notes in the DATA section. Also note the following:

  • If multiple <HEADER> lines are present, at least one must have the same number of columns as the <DATA> template line.

  • <HEADER> lines that do not match the number of columns are unaffected by the HIDE feature.
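For example, the following header labels the two columns produced by a matching two-column <DATA> line (a sketch):

<HEADER>
Sample ID,Placement
</HEADER>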

DATA

Each data source entry creates a data row for each template line in the section. All entries are output for the first template line, then the next template line runs, and so on.

The data section allows tokens and text entries. All tokens are supported.

Note the following:

  • Duplicated rows are eliminated, if present. A row is considered duplicated if its content (after all variables and placeholders have been replaced with their corresponding values) is identical to a previous row. Tokens must therefore provide distinctive enough data (ie, something more than just CONTAINER.NAME) if all of the input-output entry pairs are desired in the generated file.

  • By default, the script processes only sample entries. However, there are metadata options that allow inclusion of result files/measurements and exclusion of samples.

  • Metadata sorting options are applied to this section of the template file only.

  • By default, pooled artifacts are treated as a single input artifact. They can be demultiplexed using the PROCESS.POOLED.ARTIFACTS metadata element.

  • If there is at least one token relevant to the step inputs or outputs, this section will produce a row for each PerInput entry in the step input-output-map. If no PerInput entries are present in the step input-output-map, the script will attempt to add data rows for PerAllInputs entries.

  • Input and output artifacts are always loaded if a <DATA> section is present in the template file, because the script must determine what type of artifacts it is dealing with.

  • HIDE feature: If the token in a given column is empty for all lines and that token is part of a HIDE statement, that column (including the matching <HEADER> columns) is removed entirely. There can only be one <DATA> template line present when using the HIDE feature. See the Using HIDE to Exclude Empty Columns and Using HIDE to Exclude Empty HEADER rows examples.

To include a data section in a template, enclose it within the <DATA> and </DATA> tags.
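For example, the following data section outputs one row per data source entry, matching the two-column header shown above (a sketch; both tokens appear in the NanoDrop example later in this article):

<DATA>
${INPUT.NAME},${INPUT.CONTAINER.PLACEMENT}
</DATA>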

FOOTER

The content of this section can only include plain text and is output as is. Tokens are not supported.

To include a footer section in a template, enclose it within the <FOOTER> and </FOOTER> tags.

PLACEMENT

This section contains groovy code that controls the formatting of PLACEMENT tokens (see the PLACEMENT tokens in the Template File Contents Tokens table). Within the groovy code, the following variables are available:

  • containerTypeNode: The container type holding the derived sample

  • row: The row part of the derived sample's location

  • column: The column part of the derived sample's location

Note the following:

  • The script must return a string, which replaces the corresponding <PLACEMENT> tag in the template.

  • Logic within the placement tags can be as complex as needed, provided it can be compiled by a groovy compiler.

  • If an error occurs while running formatting code, the original location value is used.

To include a placement section in a template, enclose it within the <PLACEMENT> and </PLACEMENT> tags.

Placement Example: Container Type

In the following example:

  • If the container type is a 96 well plate, sample placement A1 will return as "A_1"

  • If the container type is not a 96 well plate, sample placement A1 will return as "A:1"

<PLACEMENT>
// The inputs to this segment are: String row, String column, Node containerTypeNode
if (containerTypeNode.@name == "96 well plate") return row + "_" + column
else return row + ":" + column
</PLACEMENT>

Placement Example: Zero Padding

<PLACEMENT>
// The inputs to this segment are: String row, String column, Node containerTypeNode
String zeroPad (String entry) {
if (entry.isNumber() && entry.size() == 1) return "0" + entry
return entry
}
return zeroPad(row) + ":" + zeroPad(column)
</PLACEMENT>

TOKEN FORMAT

This section defines logic to be applied to specific tokens to change the format in which they appear in the generated file.

Special formatting rules can be defined per token using the following groovy syntax:

${token.identifier}
…groovy code…
// or 
${token.identifier##Name}
…groovy code…

Within the groovy code, the variable 'token' refers to the original value being transformed by the formatting code. The logic replaces all instances of that token with the result.

${token.identifier} marks the beginning of the token formatting code and the end of the previous token formatting code (if any).

  • You can define multiple formatting logic rules for a given token by assigning a name to the formatting section (named formatters are called 'variations'). This is done by appending "##" after the token name (eg "${token.identifier##formatterName}").

  • Using the named formatter syntax without giving a name ("${token.identifier##}") will abort the file generation.

  • If an error occurs while running formatting code, the resulting value will be blank.

  • If a named formatter is used but not defined, the value is used as is.

To include a token format section in a template, enclose it within the <TOKEN_FORMAT> and </TOKEN_FORMAT> tags.

TOKEN FORMAT Example: Technician Name

In this example, a custom format is defined for displaying the name of the technician who ran a process (step).

The name of the token appears at the beginning of the groovy code that will then be applied. In this code, the variable 'token' refers to the token being affected. The return value is what will replace all instances of this token in the file.

<TOKEN_FORMAT>
${PROCESS.TECHNICIAN}
def name = token.split(" ")
return "First name: " + name[0] + ", Last name: " + name[1]
</TOKEN_FORMAT>

TOKEN FORMAT Example: Appending a String to Container Name or Sample Name

In this second example, when special formatting is required for two tokens, the logic for both appear inside the same set of tags.

The example appends a string to the end of the input container name or a prefix to the beginning of the submitted sample name.

<TOKEN_FORMAT>
${INPUT.CONTAINER.NAME}
return token + "-PlateName"
${SAMPLE.NAME}
return "SN-" + token
</TOKEN_FORMAT>

Metadata

Metadata provides information about the template file that is not retrieved from the API — such as the file output directory to use, and how the data contents should be grouped and sorted.

Metadata is not strictly confined to a section, and is not designated by opening and closing tags. However, each metadata entry must be on a separate line.

Metadata entries can be anywhere in the template, but the recommended best practice is to group them either at the top or the bottom of the file. For a list of supported metadata elements, rules for using them, and examples, see the Metadata section of the Template File Contents article.
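For example, the following metadata lines (patterns drawn from the examples later in this article) include result file outputs as data source entries, set the output separator, and rename the generated file:

INCLUDE.OUTPUT.RESULTFILES
OUTPUT.SEPARATOR,COMMA
OUTPUT.FILE.NAME, SampleSheet.csv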

Sorting Logic

Sorting in the generated file is done either alphanumerically or by vertical placement information, using the SORT.BY. and SORT.VERTICAL metadata elements.

Sorting must be done using a combination of sort keys, provided to SORT.BY. as one or more ${token} values whose combination produces a unique value for each row in the file. For example, sorting by just OUTPUT.CONTAINER.NAME would work for samples placed in tubes, but would not work for samples in 96 well plates. Sorting behavior on nonunique combinations is not guaranteed to be predictable.

To sort vertically:

Include the SORT.VERTICAL metadata element in the template file. In addition, a SORT.BY. entry built from the row and column placement tokens must also be included, as follows:

SORT.BY.${OUTPUT.CONTAINER.ROW}${OUTPUT.CONTAINER.COLUMN}

Any SORT.BY. tokens will be sorted using the vertical sorter instead of the alphanumeric sort.

To apply sorting to samples in 96 well plates:

You could narrow the sort key to a unique combination such as:

SORT.BY.${OUTPUT.CONTAINER.NAME}${OUTPUT.CONTAINER.ROW}${OUTPUT.CONTAINER.COLUMN}

See also SORT.VERTICAL and SORT.BY. in the Template File Contents article.
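Putting both elements together, a template that sorts samples down each plate column, plate by plate, could include the following (a sketch):

SORT.VERTICAL
SORT.BY.${OUTPUT.CONTAINER.NAME}${OUTPUT.CONTAINER.ROW}${OUTPUT.CONTAINER.COLUMN}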

Rules and Constraints

The template must adhere to the following rules:

  • Metadata entries must each appear on a new line and be the only entry on that line.

  • Metadata entries must not appear inside tags.

  • Opening and closing section tags must appear on a new line and as the only entry on that line.

  • Each opened tag must be closed, otherwise it is skipped by the script.

  • Any sections (opening tag + closing tag combination) can be omitted from the template file.

  • Entries that are separated by commas in the template will be delimited by the metadata-specified separator (default: COMMA) in the generated file.

  • White space is allowed in the template. However, if there is a blank line inside a tag, it will also be present in the generated file.

  • If an entry in the template is enclosed in double quotes, it will be imported as a single entry and written to the generated file as such, even if it has commas inside.

  • To include double-quotes or single-quotes in the generated file, use the escape character. Example: \" or \'

  • To include an escape character in the generated file, use two escape characters inside double-quotes. For example, if you want to see \\Share\Folder\Filename.txt in the generated file, use "\\\\Share\\Folder\\Filename.txt" as the token.

If any of the following conditions is not met, the tag, and everything inside it, is ignored by the script and a warning displays in the log file:

  • Except for the metadata, all template sections must be enclosed inside tags.

  • Each tag must have its own line, and must be the only tag present on that line.

  • No other entries, even empty ones, are allowed.

  • All opened tags must be closed.

  • Custom field names must not contain periods.

Examples

Illumina Instrument Sample Sheets

The LIMS provides configuration to support generation of sample sheets that are compatible with some Illumina instruments. For details, see the Illumina Instrument Sample Sheets documentation.

Generating Sample Sheets for QC Instruments

The LIMS provides configured automations that generate sample sheets compatible with a number of QC instruments. The default automation command lines are provided below.

Generate Bioanalyzer Driver File Automation

Command line:

bash -l -c "/opt/gls/clarity/bin/java -jar /opt/gls/clarity/extensions/ngs-common/v5/EPP/DriverFileGenerator.jar -u {username} -p {password} \
script:driver_file_generator \
-i {processURI:v2} \
-t /opt/gls/clarity/extensions/ngs-common/v5/EPP/conf/readonly/bioA_driver_file_template.csv \
-o {compoundOutputFileLuid0}.csv \
-l {compoundOutputFileLuid1} \
&& /opt/gls/clarity/bin/java -jar /opt/gls/clarity/extensions/ngs-common/v5/EPP/ngs-extensions.jar -u {username} -p {password} \
script:addBlankLines \
-i {stepURI:v2} \
-f {compoundOutputFileLuid0}.csv \
-l {compoundOutputFileLuid1} \
-sep COMMA \
-b ',False,' \
-h 1 \
-c LIMSID \
-pre 'Sample '"

Template file content:

INCLUDE.OUTPUT.RESULTFILES
<HEADER_BLOCK>
</HEADER_BLOCK>
<HEADER>
"\"Sample Name\",\"Sample Comment\",\"Rest. Digest\",\"Observation\""
</HEADER>
<DATA>
${OUTPUT.LIMSID},,False,
</DATA>
<FOOTER>
Ladder,,False,"\"Chip Lot #\",\"Reagent Kit Lot #\",
\"QC1 Min [%]\",\"QC1 Max [%]\",\"QC2 Min [%]\",\"QC2 Max [%]\"
,,,
\"Chip Comment\”
"
</FOOTER>

Generate NanoDrop Driver File Automation

Command line:

bash -l -c "/opt/gls/clarity/bin/java -jar /opt/gls/clarity/extensions/ngs-common/v5/EPP/DriverFileGenerator.jar -u {username} -p {password} \
script:driver_file_generator \
-i {processURI:v2} \
-t /opt/gls/clarity/extensions/ngs-common/v5/EPP/conf/readonly/nd_driver_file_template.csv \
-o {compoundOutputFileLuid0}.csv \
-l {compoundOutputFileLuid1}"

Template file content:

INCLUDE.OUTPUT.RESULTFILES
<HEADER_BLOCK>
</HEADER_BLOCK>
<HEADER>
"Well Location, Sample Name"
</HEADER>
<DATA>
${INPUT.CONTAINER.PLACEMENT},${INPUT.CONTAINER.NAME}_${INPUT.CONTAINER.PLACEMENT}_${INPUT.NAME}
</DATA>

Generate Tapestation Input Sample Table CSV Automation

Command line:

bash -l -c "/opt/gls/clarity/bin/java -jar /opt/gls/clarity/extensions/ngs-common/v5/EPP/DriverFileGenerator.jar -u {username} -p {password} \
script:driver_file_generator \
-i {processURI:v2} \
-t /opt/gls/clarity/extensions/ngs-common/v5/EPP/conf/readonly/tapestation_driver_file_template.csv \
-o {compoundOutputFileLuid0}.csv \
-l {compoundOutputFileLuid1}"

Template file content:

INCLUDE.OUTPUT.RESULTFILES
SORT.BY.${INPUT.LIMSID}
<DATA>
${OUTPUT.LIMSID}_${INPUT.NAME}
</DATA>

Create GenomeStudio Driver File Automation

Command line:

bash -c "/opt/gls/clarity/bin/java -cp /opt/gls/clarity/extensions/ngs-common/v5/EPP/DriverFileGenerator.jar driver_file_generator \
-i {processURI} -u {username} -p {password} -t /opt/gls/clarity/extensions/conf/driverfiletemplates/GenomeStudioGeneExpressionTemplate.csv \
-o {compoundOutputFileLuid0}.csv -l {compoundOutputFileLuid1}.html"

Template file content:

INCLUDE.OUTPUT.RESULTFILES
OUTPUT.SEPARATOR,COMMA
LIST.SEPARATOR,";"
ILLEGAL.CHARACTERS,COMMA
ILLEGAL.CHARACTER.REPLACEMENTS,_
SORT.BY.${INPUT.CONTAINER.NAME}${INPUT.CONTAINER.ROW}${INPUT.CONTAINER.COLUMN}
<HEADER_BLOCK>
[HEADER]
Investigator Name, ${PROCESS.TECHNICIAN}
Project Name, ${SAMPLE.PROJECT.NAME.ALL}
Experiment Name
Date, ${DATE}
[Manifests]
${PROCESS.UDF.Manifest A}
</HEADER_BLOCK>
<HEADER>
[DATA]
Sample_Name,Sample_Well,Sample_Plate,Pool_ID,Sentrix_ID,Sentrix_Position
</HEADER>
<DATA>
${INPUT.NAME},,,,${INPUT.CONTAINER.NAME},${INPUT.CONTAINER.PLACEMENT}
</DATA>
<PLACEMENT>
// inputs to this section are String row, String column, Node containerTypeNode
int convertAlphaToNumeric(String letters) {
    int result = 0
    letters = letters.toUpperCase()
    for (int i = 0; i < letters.length(); i++) {
        result += (letters.charAt(i).minus('A' as char) + 1) * (26 ** (letters.length() - i - 1))
    }
    return result
}
int SENTRIX_POS_THRESHOLD = 12
int WELL_PLATE_SIZE_96 = 96
int xSize = containerTypeNode.'x-dimension'.size.text().toInteger()
int ySize = containerTypeNode.'y-dimension'.size.text().toInteger()
int containerSize = xSize * ySize
boolean xIsAlpha = containerTypeNode.'x-dimension'.'is-alpha'.text().toBoolean()
boolean yIsAlpha = containerTypeNode.'y-dimension'.'is-alpha'.text().toBoolean()
if (containerSize <= SENTRIX_POS_THRESHOLD && (xIsAlpha || yIsAlpha)) {
    return row
}
// R001_C001 for 96 well plate, r01c01 for other container types
if (containerSize == WELL_PLATE_SIZE_96) {
    def numFormat = java.text.NumberFormat.getNumberInstance()
    numFormat.setMinimumIntegerDigits(3)
    String xStr = numFormat.format(column.isInteger() ? column as int : convertAlphaToNumeric(column))
    String yStr = numFormat.format(row.isInteger() ? row as int : convertAlphaToNumeric(row))
    // Row is mapped to x coordinate, while column is mapped to y.
    // When creating an array type of size 96, swap the row and column dimension.
    // e.g 12 x 8 array should be mapped as an 8 x 12 array
    //
    // This mapping has been in RI for a while.
    // In AddIlluminaArraysStep, all 2D Illumina arrays added have a dimension of 8 x 12.
    // This driver file template then converts it back to 12 x 8.
    // This logic is now corrected to follow other arrays, to make sure the driver file
    // generated is compatible with existing arrays and software.
    return "R"+xStr+"_C"+yStr
} else {
    def numFormat = java.text.NumberFormat.getNumberInstance()
    numFormat.setMinimumIntegerDigits(2)
    String xStr = numFormat.format(column.isInteger() ? column as int : convertAlphaToNumeric(column))
    String yStr = numFormat.format(row.isInteger() ? row as int : convertAlphaToNumeric(row))
    // row is mapped to y, column is mapped to x
    return "r"+yStr +"c"+xStr
}
</PLACEMENT>

Renaming Generated Files

In the template file, the following OUTPUT.FILE.NAME metadata element renames the generated file to 'NewTemplateFileName.csv':

OUTPUT.FILE.NAME, NewTemplateFileName.csv

In the automation command line, the following will attach the generated file to the {compoundOutputFileLuid0} placeholder, with the name defined by the OUTPUT.FILE.NAME metadata element.

bash -c "/opt/gls/clarity/bin/java -cp /opt/gls/clarity/extensions/ngs-common/v5/EPP/DriverFileGenerator.jar \
script:driver_file_generator \
-i {stepURI:v2} \
-u {username} \
-p {password} \
-t /opt/gls/clarity/customextensions/Robot.csv \
-q true -destLIMSID {compoundOutputFileLuid0} \
-o extended_driver_x384.csv \
-l {compoundOutputFileLuid2}"

When the LIMS attaches a file to a placeholder, it assumes that the file name starts with the LIMSID of the placeholder, and it uses this LIMSID to identify the placeholder to which the file should be attached. However, when using OUTPUT.FILE.NAME, you can give the file a name that does not begin with the LIMSID of the placeholder to which it will be attached. To do this, you must use the quickAttach and destLIMSID parameters in the automation command line.

  • If the quickAttach parameter is provided without destLIMSID parameter, the script logs an error and stops execution.

  • If destLIMSID is provided without using quickAttach, it is ignored.

As a best practice, we recommend storing a copy of generated files in the LIMS. To do this, use the quickAttach script parameter together with the destLIMSID parameter, which tells the Template File Generator script which file placeholder to use. (For details, see Script Parameters.)

Using Token Values In File Names

The OUTPUT.FILE.NAME and OUTPUT.TARGET.DIR metadata elements support token values. This allows you to name files based on input/output values of the step (for example, the input or output container name).

The following tokens are supported for this feature:

  • PROCESS.LIMSID

  • PROCESS.UDF.<UDF NAME>

  • PROCESS.TECHNICIAN

  • DATE

  • INPUT.CONTAINER.NAME

  • INPUT.CONTAINER.TYPE

  • INPUT.CONTAINER.LIMSID

  • OUTPUT.CONTAINER.NAME

  • OUTPUT.CONTAINER.TYPE

  • OUTPUT.CONTAINER.LIMSID
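For example, the following metadata line names the generated file after the output container and the date (a sketch using two of the supported tokens):

OUTPUT.FILE.NAME, ${OUTPUT.CONTAINER.NAME}_${DATE}.csv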

Rules and Constraints

When using token values in file names, the following rules and constraints apply:

  • Container-related functions will return the value from a single container, even if there are multiple containers.

  • Other tokens will function, but will only return the value for the first row of the file (first input or output).

  • If the OUTPUT.FILE.NAME specified does not match the LIMS ID of the file, the output file will not be attached in the LIMS user interface. To ensure that the file is attached, include the quickAttach and destLIMSID parameters in the command-line string.

  • It is highly recommended that you do not use SAMPLE.PROJECT.NAME.ALL or SAMPLE.PROJECT.CONTACT.ALL, because the result is prone to surpassing the maximum length of a file name. There are similar issues with other SAMPLE tokens when dealing with pools.

  • Only the following characters are supported in the file name. Any other characters will be replaced by an _ (underscore) by default. This replacement character can be configured with the OUTPUT.FILE.NAME.ILLEGAL.CHARACTER.REPLACEMENT metadata element.

    • a-z

    • A-Z

    • 0-9

    • _ (underscore)

    • - (dash)

    • . (period)

Providing a full file path for OUTPUT.FILE.NAME is still supported, but deprecated. If the full path is provided, the file/directory separator will be automatically detected and will not be replaced in the static parts of the file name. Any of these separators derived from the result of a token value will be replaced.

Defining a Project Name for Control Samples

You can use the CONTROL.SAMPLE.DEFAULT.PROJECT.NAME metadata element to define a project name for control samples. The value specified by this token will be used when determining one or more values for the SAMPLE.PROJECT.NAME and SAMPLE.PROJECT.NAME.ALL tokens.

Example:

CONTROL.SAMPLE.DEFAULT.PROJECT.NAME, My Control Sample Project

Rules and Constraints

  • If the token is found in the template, but with no value, then no project name will be given for control samples.

  • If the token is not found in the template, then no project name will be given for control samples.

  • If multiple values are provided, the first one will be used.

  • The SAMPLE.PROJECT.NAME.ALL list will include the control project name.

Using HIDE to Exclude Empty Columns

You can use the HIDE metadata element to hide a column if it contains no data. The following line in the metadata will hide a data column when empty:

HIDE, ${OUTPUT.UDF.SAMPLEUDF}, IF, NODATA

Assuming ${OUTPUT.UDF.SAMPLEUDF} is one of the data columns specified in the template, then that column will be hidden whenever there is no data to show in the output file. If a list of fields is provided, then any empty ones will be hidden:

HIDE, ${OUTPUT.UDF.SAMPLEUDF},${PROCESS.TECHNICIAN}, ${PROCESS.LIMSID}, IF, NODATA

You may also hide only one representation of a specific column or field:

HIDE, ${PROCESS.TECHNICIAN##FirstName}, IF, NODATA

Using HIDE to Exclude Empty HEADER rows

You can also use the HIDE metadata element with tokens in the header section. If one or more tokens are used for a header key value pair, and there are no values for any of the tokens, the entire row will be hidden.

Assuming ${OUTPUT.UDF.SAMPLEUDF} is one of the rows specified in the template header section, that header row will be hidden whenever there is no data to display in the output file.

If a list of tokens is provided for the value, the row will only be shown if one or more of the tokens resolves to a value:

HIDE, ${OUTPUT.UDF.SAMPLEUDF},${PROCESS.TECHNICIAN}, ${PROCESS.LIMSID}, IF, NODATA

Generating Multiple Files

If you would like to generate multiple files, you can use the following GROUP.FILES.BY metadata elements:

  • GROUP.FILES.BY.INPUT.CONTAINERS

  • GROUP.FILES.BY.OUTPUT.CONTAINERS

These elements allow a file to be created per instance of the specified element in the step, for example, one file per input or per output container. Step level information appears in all files, but sample information is specific to the samples in the given container.

For example, suppose that a step has two samples - each in their own container - with a template file calling for information about process UDFs and sample names. Using this metadata will produce two files, each of which will contain:

  • One sample entry

  • The same process UDF information

Naming The Files

When generating multiple files, the script gathers them all into one zip file so only one file placeholder is needed regardless of how many containers are in the step.

The zip file name may be provided in the metadata as follows:

GROUP.FILES.BY.INPUT.CONTAINERS,<zip file name>
GROUP.FILES.BY.OUTPUT.CONTAINERS,<zip file name>

Inside the zip file, any paths specified for where the files should be written are preserved. An example final structure inside the zip, where the subfolders are specified using the container name token, could be as follows:

GROUP.FILES.BY.INPUT.CONTAINERS,MyZip.zip

MyZip.zip
├── Container1
│   └── SampleSheet.csv
└── Container2
    └── SampleSheet.csv

The file naming, writing, and uploading process works as follows:

  • The outputPath parameter is required by the script. You can use this parameter to specify the path to which the generated files will be written and/or the name to use for the file. Use this in the following scenarios:

    • When the target path/name is constant OR

    • When the target path/name includes something that can only be passed to the script via the command line - for example, if you want to include the value of a {compoundOutputFileLuidN} in the path.

  • The OUTPUT.TARGET.DIR metadata element overrides any path provided by outputPath, but does not change the file name. Use this:

    • When the target path includes something that can only be accessed with token templates - for example, the name of the user who ran the step.

  • The OUTPUT.FILE.NAME metadata element overrides any value provided by outputPath entirely. This token determines the name of the files that are produced for each container - for example, SampleSheet.csv. It may also contain tokens to access information, such as the container name, and it may also contain a path.

If you provide all three of outputPath, OUTPUT.TARGET.DIR, and OUTPUT.FILE.NAME, outputPath is ignored and the path specified by OUTPUT.TARGET.DIR is used as the parent under which OUTPUT.FILE.NAME is created, even if OUTPUT.FILE.NAME includes a path in addition to the file name.

If you wish to only attach files to placeholders in the LIMS and do not wish to also write anything to disk, then omit OUTPUT.TARGET.DIR and provide the outputPath parameter value as ".". This will cause files to only be written to the temporary directory that is cleaned up after the automation completes.

To produce the example of MyZip.zip, you could use the following:

Script parameters:

-outputPath SampleSheet.csv
-q 'true'
-destLIMSID {compoundOutputFileLuid0}

Template:

GROUP.FILES.BY.OUTPUT.CONTAINERS,MyZip.zip
OUTPUT.TARGET.DIR,${OUTPUT.CONTAINER.NAME}

Rules and Constraints

  • You can only use one GROUP.FILES.BY metadata element in each template file.

  • To attach the files in the LIMS as a zip file, you must provide the quickAttach parameter along with the destLIMSID.

  • The zip file name may optionally be specified with the GROUP.FILES.BY metadata.

  • If quickAttach is used and no zip name is specified in the template, the zip will be named using the destLIMSID parameter value.

  • The zip file name, file paths, and file names should not contain characters that are illegal for directories and files on the target operating system. Illegal characters will be replaced with underscores.

  • If a file name is not unique to the target directory, e.g., if multiple SampleSheet.csv files are being written to /my/target/path, an error will be thrown and no files written.

  • When specifying the OUTPUT.TARGET.DIR metadata element, if a token is used that may resolve to multiple values for a single path (for example, using INPUT.NAME in the path when it will resolve to multiple sample names), one value will be chosen arbitrarily for the path. For example, you may end up with /Container1/Sample1/myfile.csv when there are two samples in the container.
