Version 9.1 represents an important step forward in the ongoing evolution of the open source LabKey Server. Enhancements in this release are designed to:
  • Support leading medical research institutions using the system as a data integration platform to reduce the time it takes for laboratory discoveries to become treatments for patients
  • Provide fast-to-deploy software infrastructure for communities pursuing collaborative clinical research efforts
  • Deliver a secure data repository for managing and sharing laboratory data with colleagues, such as proteomics, microarray, flow cytometry or other assay-based data.
New capabilities introduced in this release are summarized below. For a full query listing all improvements made in 9.1, see: Items Completed in 9.1. Refer to 9.1 Upgrade Tips to work around minor behavior changes associated with upgrading from v8.3 to v9.1.

Download LabKey Server v9.1.

Quality Control

  • Field-level quality control. Data managers can now set and display the quality control (QC) status of individual data fields. Data coming in via text files can contain the special symbols Q and N in any column that has been set to allow quality control markers. “Q” indicates that a QC flag has been applied to the field; “N” indicates that the data will not be provided (even if it was officially required). See the example after this list.
  • Programmatic quality control for uploaded data. Programmatic quality control scripts (written in R, Perl, or another language of the developer's choice) can now be run at data upload time. This allows a lab to perform arbitrary quality validation prior to bringing data into the database, ensuring that all uploaded data meets certain initial quality criteria. Note that non-programmatic quality control remains available -- assay designs can be configured to perform basic checks for data types, required values, regular expressions, and ranges in uploaded data.
  • Default values for fields in assays, lists and datasets. Assay, list and dataset schemas can now be set up to automatically supply default values when imported data tables have missing values. Each default value can be the last value entered, a fixed value or an editable default.
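For example, a tab-separated upload file for a dataset whose third column has been set to allow QC markers might look like the following (the column names and values here are hypothetical):

    ParticipantId    SequenceNum    ViralLoad
    P001             1              12000
    P002             1              Q
    P003             1              N

Here the ViralLoad value for P002 is flagged with a QC marker for review, while the value for P003 is recorded as not provided.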

Assay/Study Data Integration

  • Display of assay status. Assay working folders now clearly display how many samples/runs have been processed for each study.
  • Improved study integration. Study folders provide links to view source assay data and designs, as well as links to directly upload data via appropriate assay pipelines.
  • Hiding of unnecessary "General Purpose" assay details. Previously, data for this type of assay had a [details] link displayed in the copied dataset. This link is now suppressed because no additional information is available in this case.
  • Easier data upload. Previously, in order to add data to an assay, a user needed to know the destination folder. Now users are presented with a list of appropriate folders directly from the upload button either in the assay runs list or from the dataset.
  • Improved copy to study process. It is now easier to find and fix incorrect run data when copying data to a study. Improvements:
    • Bad runs can now be skipped.
    • The run details page now provides a link so that run data can be examined.
    • There is now an option to re-run an assay run, pre-populating all fields, including the data file, with the values from the previous run. On successful import, the previous run will be deleted.

Proteomics and Microarrays

  • Protein Search Allows Peptide Filtering. When performing a protein search, you can now filter to show only protein groups that contain a peptide meeting a PeptideProphet probability cutoff, or specify an arbitrarily complex peptide filter.
  • Auto-derivation of samples during sample set import. Automated creation of derivation history for newly imported samples eases tracking of sample associations and history. Sample sets now support an optional column that provides parent sample information. At import time, the parent samples listed in that column are identified within LabKey Server and associations between samples are created automatically. See the example after this list.
  • Microarray bulk upload.
    • When importing MageML files into LabKey Server, users can now include a TSV file that supplies run-level metadata about the runs that produced the files. This allows users to reuse the TSV metadata instead of manually re-entering it.
    • The upload process leverages the Data Pipeline to operate on a single directory at a time, which may contain many different MageML files. LabKey Server automatically matches MageML files to the correct metadata based on barcode value.
    • An Excel template is provided for each assay design to make it easier to fill out the necessary information.
  • Microarray copy-to-study. Microarray assay data can now be copied to studies, where it will appear as an assay-backed dataset.
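For example, a sample set upload might include a parent column along these lines (the column and sample names are hypothetical; the actual parent column is designated when the sample set is configured):

    Name      Volume    ParentSample
    Vial-A    10
    Vial-B    5         Vial-A
    Vial-C    5         Vial-A

On import, Vial-B and Vial-C would automatically be linked to Vial-A as derived samples, giving them a derivation history without any manual steps.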

Assays

  • Support for saving state within an assay batch/run upload. Previously, once you started uploading assay data, you had to finish in a single session. Now you can start by uploading an assay batch, then upload the run data later.
  • NAb improvements:
    • Auto-complete during NAb upload. This is available for specimen, visit, and participant IDs.
    • Re-run of NAb runs. After you have uploaded a NAb run and you wish to make an edit, you can redo the upload process with all the information already pre-filled, ready for editing.

Specimen Tracking

  • Specimen shopping cart. When compiling a specimen request, you can now perform a specimen search once, then build a specimen request from items listed in that search. You can add individual vials one-at-a-time using the "shopping cart" icon next to each vial. Alternatively, you can add several vials at once using the checkboxes next to each vial and the actions provided by the "Request Options" drop-down menu. After adding vials to a request of your choice, you return to your specimen search so that you can add more.
  • Auditing for specimen comments. Specimen comments are now logged, so they can be audited.
  • Specimen reports can now be based on filtered vial views. This increases the power of reporting features.

Views

  • Enhanced interface for managing views. The same interface is now used to manage views within a study and outside of a study.
  • Container filters for grid views. You can now choose whether the list of "Views" for a data grid includes views created within the current folder or both the current folder and subfolders.
  • Ability to clear individual columns from sorts and filters for grid views. The "Clear Sort" and "Clear Filter" menu items are available in the drop-down menu that appears when you click on a grid view column header. For example, the "Clear Sort" menu item is enabled when the given column is included in the current sort. Selecting that item removes just that column from the list of sorted columns, leaving the others intact.
  • More detailed information for the "Remember current filter" choice on the Customize View page. When you customize a grid view that already contains sorts and filters, these sorts and filters can be retained with that custom view, along with any sorts and filters added during customization. The UI now explicitly lists the pre-existing sorts and filters that can be retained.
  • Stand-alone R views. You do not need to associate every R view with a particular grid view. R views can be created independently of a particular dataset through the "Manage Views" page.
  • Improved identification of views displayed in the Reports web part. The Reports web part can now accept a string-based report ID (in addition to the normal integer report ID), so that you can refer to a report defined within a module.

Flow Cytometry

  • Ability to download a single FCS file. A download link is now available on the FCS File Details page.
  • New Documentation: Demo, Tutorial and additional Documentation
  • Richer filter UI for "background column and value." Available in the ICS Metadata editor. This provides support for "IN" and multiple clauses. Example: Stim IN ('Neg Cont', 'negctrl') AND CD4_Count > 10000 AND CD8_Count > 10000
  • Performance improvements. Larger FlowJo workspaces can now be loaded than was previously possible.
  • UI improvements for FlowJo import. Repeated uploading of FlowJo workspaces has been simplified.

Development: Client API

  • New SAS Client API. The LabKey Client API Library for SAS makes it easy for SAS users to load live data from a LabKey Server into a native SAS dataset for analysis, provided they have permission to read those data. It also enables SAS users to insert, update, and delete records stored on a LabKey Server, provided they have appropriate permissions to do so. All requests to the LabKey Server are performed under the user's account profile, with all proper security enforced on the server. User credentials are obtained from a location separate from the running SAS program, so SAS programs can be shared without compromising security.
  • Additions to the Java, JavaScript, R and SAS client libraries, including the following additions to the JavaScript API:
    • Callback to indicate that a web part has loaded. Provides a callback after a LABKEY.WebPart has finished rendering.
    • Information on the current user (LABKEY.user). The LABKEY.Security.currentUser API exposes limited information on the current user. See the first sketch after this list.
    • API/Ext-based management of specimen requests. See: LABKEY.Specimen.
    • Sorting and filtering for NAb run data retrieved via the LabKey Client APIs. For further information, see: LABKEY.Assay#getNAbRuns and the second sketch after this list.
    • Ability to export tables generated through the client API to Excel. This API takes a JavaScript object in the same format as that returned from the Excel->JSON call and pops up a download dialog on the client. See LABKEY.Utils#convertToExcel and the third sketch after this list.
    • Improvements to the Ext grid.
      • Quality control information available.
      • Performance improvements for lookup columns.
  • Documentation for R Client API. Available here on CRAN.
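As a minimal sketch, a wiki or HTML page that includes the JavaScript client library could read the current user's information like this (the properties shown are a subset; consult the LABKEY.Security.currentUser documentation for the full list):

    // Read limited information about the current user.
    var user = LABKEY.Security.currentUser;
    document.write('Signed in as ' + user.displayName + ' (' + user.email + ')');

    // Branch page behavior on the user's permissions in the current folder.
    if (user.canUpdate) {
        // ... show editing controls ...
    }
    if (user.isAdmin) {
        // ... show admin-only links ...
    }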
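A second sketch retrieves NAb run data with a server-side filter and sort applied (the assay design name and filtered column are hypothetical):

    // Fetch NAb runs, filtered and sorted on the server before download.
    LABKEY.Assay.getNAbRuns({
        assayName: 'NAb Screening',          // hypothetical assay design name
        filterArray: [
            LABKEY.Filter.create('Name', 'Plate 1', LABKEY.Filter.Types.EQUAL)
        ],
        sort: '-Name',                       // descending sort on the run name
        successCallback: function (runs) {
            // 'runs' is an array of run objects, each carrying its sample data
            alert('Retrieved ' + runs.length + ' NAb runs');
        },
        failureCallback: function (error) {
            alert('Request failed: ' + error.exception);
        }
    });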
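A third sketch generates an Excel download from an in-memory table (the file, sheet, and column names are hypothetical):

    // Build a small spreadsheet object and prompt the browser to download it as Excel.
    LABKEY.Utils.convertToExcel({
        fileName: 'GeneratedTable.xls',
        sheets: [{
            name: 'Sheet1',
            data: [
                ['SampleId', 'Result'],      // header row
                ['S-100', 42],
                ['S-101', 17]
            ]
        }]
    });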

Development: Modules

  • File-based modules. File-based modules provide a simplified way to include R reports, custom queries, custom query views, HTML views, and web parts in your modules. You can now specify a custom query view definition in a file in a module and it will appear alongside the other grid views for the given schema/query. These resources can be included either in a simple module with no Java code whatsoever, or in Java-based modules. They can be delivered as a unit that can be easily added to an existing LabKey Server installation. Documentation: Overview of Simplified Modules and Queries, Views and Reports in Modules.
  • File-based assays. A developer can now create a new assay type with a custom schema and custom views without having to be a Java developer. A file-based assay consists of an assay config file, a set of domain descriptions, and view HTML files. The assay is added to a module by placing it in an assay directory at the top level of the module. For information on the applicable API, see: LABKEY.Experiment#saveBatch and the sketch below.
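Below is a rough sketch of saving assay results from a custom upload page via saveBatch; the assay id, property names, and result columns are hypothetical, and the exact run/batch object structure should be checked against the LABKEY.Experiment#saveBatch documentation:

    // Assemble one run with run-level properties and result rows, then save it as a batch.
    var run = new LABKEY.Exp.Run();
    run.name = 'Example run';
    run.properties = { Instrument: 'Reader-01' };      // run field defined in the assay design
    run.dataRows = [
        { SpecimenId: 'S-100', Result: 42 },
        { SpecimenId: 'S-101', Result: 17 }
    ];

    LABKEY.Experiment.saveBatch({
        assayId: 4,                                     // row id of the target assay design
        batch: new LABKEY.Exp.RunGroup({ runs: [run] }),
        successCallback: function (batch) {
            alert('Saved batch with ' + batch.runs.length + ' run(s)');
        },
        failureCallback: function (error) {
            alert('Save failed: ' + error.exception);
        }
    });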

Development: Custom SQL Queries

  • Support for additional SQL functions:
    • UNION and UNION ALL
    • BETWEEN
    • TIMESTAMPDIFF
  • Cross-container queries. You can identify the folder containing the data of interest when specifying the schema. Example: Project."studies/001/".study.demographics. See the sketch after this list.
  • Query renaming. You can now change the name of a query from the schema listing page via the “Edit Properties” link.
  • Comments. Comments that use the standard SQL syntax ("--") can be included in queries.
  • Metadata editor for built-in tables. This editor allows customization of the pre-defined tables and queries provided by LabKey Server. Users can change number or date formats, add lookups to join to other data (or query results), and change the names and descriptions of columns. The metadata editor shows the metadata associated with a table of interest and allows users to override default values. Edits are saved in the same XML format used to describe custom queries.
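As an illustration, several of these features can be combined in a single query issued through the JavaScript client library (the folder path, table, and columns are hypothetical, and the availability of LABKEY.Query.executeSql should be confirmed for your client library version):

    // A cross-container query with a SQL comment and a BETWEEN clause,
    // reading the demographics dataset from another study folder.
    LABKEY.Query.executeSql({
        schemaName: 'study',
        sql: "-- participants aged 18 to 65, pulled from a sibling study folder\n" +
             "SELECT d.ParticipantId, d.Age\n" +
             "FROM Project.\"studies/001/\".study.demographics d\n" +
             "WHERE d.Age BETWEEN 18 AND 65",
        successCallback: function (data) {
            alert('Returned ' + data.rows.length + ' rows');
        }
    });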

Collaboration

  • Version comparison tool for wiki pages. Differences between older and newer versions of wiki pages can now be easily visualized through the "History"->"Compare Versioned Content"->"Compare With" pathway.
  • Attachments can now be downloaded from the "Edit" page. Also, if an attachment is an image, clicking on it displays it in a new browser tab.

Administration

  • Tomcat 5.5.27 is now supported.
  • Upgrade to PostgreSQL 8.3 is now strongly encouraged. If you are running PostgreSQL 8.2.x or earlier, you will now see a yellow warning message in the header when logged in as a system administrator. Upgrade to PostgreSQL 8.3 to eliminate the message. The message can also be hidden. Upgrade documentation.

