SWL Montage

The Smart Wet Lab Assistant

 

The Problem

One of the largest challenges facing science is the documentation and replication of experiments. A 2012 study by Amgen researcher Glenn Begley, for example, showed that 47 of 53 landmark cancer biology results could not be replicated, in some cases even with the help of the original researchers. Part of what makes documentation and replication difficult is the large number and subtlety of steps performed during a lab experiment. Another factor is that documentation is often performed after the fact, since both hands are needed to perform the experiment, and lab technicians often wear gloves that make writing or typing difficult. A final factor is that in many cases, scientists are not aware of which specific experimental details are important to capture.

Given the difficulty and potential high value of machine understanding in this environment, we’ve chosen to use a “Smart Wet Lab Assistant” as a capstone challenge for our research center. Our aim is to develop a system employing cameras, sensors, machine learning, and detailed formal task definitions that can observe a lab technician performing a scientific protocol, recognize what they are doing, document their progress, and even notice errors and suggest corrections. We see our system as a potential “triple threat”, able to teach novices, double-check the work of experts, and automatically document for all levels of expertise.

Key Research Challenges

The Smart Wet Lab Assistant poses unique problems for our understanding and assistance research themes. The key challenges in realizing our goal are:

  • Formally representing lab protocols: For a machine to correctly recognize and follow an experiment or protocol, the specific steps need to be formally captured. In addition, the technicians executing the experiments need to commit to using these versions of the protocols. This is as much an organizational challenge as a technical one. The Smart Wet Lab Assistant project is being developed jointly with the Klavins Synthetic Biology Lab at the University of Washington. The biology students and researchers in that group have developed a system called Aquarium that includes a protocol description language as well as a touchscreen-based system that allows a technician to step through a documented protocol. By adding identity, authentication, job control, and inventory to their system, they are able to create an abstraction layer that allows an experiment to be authored by one researcher and then submitted and executed by another. This allows the biology experiments in their lab to serve as rich data sets for the ISTC’s sensing, perception, and task assistance research. (A hypothetical sketch of a machine-readable protocol step appears after this list.)
  • Recognizing lab equipment and materials: This includes small, disposable items such as pipette tips and Eppendorf tubes. We are initially employing a hybrid approach to this task. Non-disposable items (which are not autoclaved) are being RFID tagged so that we can recognize when they enter the work space. Depth cameras aimed at the work space run state-of-the-art object recognition (using pre-trained models) to try to identify objects. Finally, we are instrumenting lab equipment such as scales and other measurement devices to report the mass or volume they observe. By fusing these streams together we hope to reliably recognize specific tools as well as amounts of specific materials. (A minimal fusion sketch follows this list.)
  • Inferring actions by the technician: While not a fully unconstrained environment, there are many actions a lab tech may perform, including pouring, shaking, stirring, screwing, and unscrewing, as well as many tool-specific manipulations. Our strategy again is to use cameras as the primary means of recognizing these actions. (The video below shows some early results with our articulated rigid-body tracker.) We are also augmenting RFID tags with location and acceleration tracking. (See our RFID 2013 publication on our ultrasonic RFID tag, and the toy accelerometer sketch after this list.)

 

We are using a general articulated rigid-body model for action tracking. The algorithm was published at RSS 2014.

  • Recognizing and following protocols: From our sensor data and base observations of objects and actions, we would like to be able to recognize which particular protocol is being executed, follow the technician’s progress, and watch for errors. We are experimenting with a variety of probabilistic frameworks and task representations to see which best fits our recognition needs. One strong candidate for an underlying technique is the dynamic Bayesian network as implemented in the Graphical Models Toolkit (GMTK). (A simplified forward-filtering sketch follows below.)
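
To make the protocol-representation challenge concrete, here is a minimal sketch of how a single protocol step might be captured as structured data. The ProtocolStep class and its field names are hypothetical illustrations for this write-up; they are not the actual Aquarium protocol description language.

    # Hypothetical sketch of a machine-readable protocol step (NOT the Aquarium schema).
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class ProtocolStep:
        step_id: int
        action: str                # e.g. "pipette", "incubate", "vortex"
        inputs: List[str]          # items expected in the workspace
        outputs: List[str]         # items produced by the step
        parameters: Dict[str, float] = field(default_factory=dict)  # amounts, durations

    # Fragment of an imagined transformation protocol, written as ordered steps.
    protocol = [
        ProtocolStep(1, "pipette", ["competent cells", "pipette tip"],
                     ["cell aliquot"], {"volume_ul": 50}),
        ProtocolStep(2, "incubate", ["cell aliquot"], ["cell aliquot"],
                     {"temperature_c": 4, "duration_min": 30}),
    ]

    for step in protocol:
        print(f"Step {step.step_id}: {step.action} {step.parameters}")

A representation along these lines is what would let a recognizer compare what it sees on the bench against what the protocol says should happen.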
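
For equipment and material recognition, the sketch below shows one simple way the three evidence streams (RFID reads, depth-camera detections, and instrumented scales) could be fused: each stream contributes a weighted vote for an object's presence. The stream names, weights, and object labels are assumptions for illustration, not our deployed pipeline.

    # Minimal sketch: fuse per-object confidence from three sensing streams.
    # Stream weights and names are illustrative assumptions.
    from collections import defaultdict

    STREAM_WEIGHTS = {"rfid": 0.5, "camera": 0.3, "scale": 0.2}

    def fuse(detections):
        """detections: list of (stream, object_name, confidence in [0, 1])."""
        scores = defaultdict(float)
        for stream, obj, conf in detections:
            scores[obj] += STREAM_WEIGHTS.get(stream, 0.0) * conf
        return dict(scores)

    observations = [
        ("rfid",   "50ml_conical_tube", 1.0),  # tag read by the workspace antenna
        ("camera", "50ml_conical_tube", 0.7),  # object-detector score
        ("camera", "eppendorf_tube",    0.4),  # disposable item, so no RFID evidence
        ("scale",  "50ml_conical_tube", 0.9),  # mass consistent with a filled tube
    ]

    print(fuse(observations))  # roughly {'50ml_conical_tube': 0.89, 'eppendorf_tube': 0.12}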
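
For action inference, the acceleration-tracking tags suggest an obvious toy baseline: window the accelerometer magnitude stream and label each window by how much it varies. The thresholds and action labels below are invented for illustration and are far cruder than the camera-based rigid-body tracker described above.

    # Toy baseline: label accelerometer windows as "still", "stir", or "shake"
    # by the spread of acceleration magnitude. Thresholds are illustrative guesses.
    import statistics

    def label_window(magnitudes, stir_thresh=0.2, shake_thresh=1.5):
        spread = statistics.pstdev(magnitudes)
        if spread < stir_thresh:
            return "still"
        return "stir" if spread < shake_thresh else "shake"

    def segment(stream, window=8):
        return [label_window(stream[i:i + window])
                for i in range(0, len(stream) - window + 1, window)]

    # Simulated magnitudes from a tagged tube: at rest, gentle motion, vigorous motion.
    stream = ([9.8] * 8 +
              [9.8, 10.3, 9.4, 10.1, 9.6, 10.2, 9.5, 10.0] +
              [6.0, 14.0, 5.5, 15.0, 6.5, 13.5, 5.0, 14.5])
    print(segment(stream))  # ['still', 'stir', 'shake']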
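
Finally, for protocol following, one way to picture the probabilistic frameworks we are evaluating is a simple left-to-right chain over protocol steps updated with the HMM forward algorithm: the hidden state is the current step, transitions encode "stay or advance", and noisy action detections update a belief over where the technician is. This is a deliberately simplified sketch, not GMTK or our final model, and all probabilities are illustrative assumptions.

    # Simplified protocol follower: HMM forward updates over a left-to-right
    # chain of steps. All probabilities are illustrative assumptions.
    STEPS = ["pipette", "incubate", "vortex"]   # expected action at each step
    P_STAY, P_ADVANCE = 0.6, 0.4                # stay on a step or move to the next
    P_CORRECT = 0.8                             # assumed action-detector accuracy

    def emission(step_idx, detected_action):
        if detected_action == STEPS[step_idx]:
            return P_CORRECT
        return (1.0 - P_CORRECT) / (len(STEPS) - 1)

    def update(belief, detected_action):
        # Predict: each step either repeats or hands off to the next one.
        predicted = [0.0] * len(belief)
        for i, b in enumerate(belief):
            predicted[i] += b * P_STAY
            if i + 1 < len(belief):
                predicted[i + 1] += b * P_ADVANCE
            else:
                predicted[i] += b * P_ADVANCE   # final step absorbs
        # Correct with the observation, then renormalize.
        posterior = [p * emission(i, detected_action) for i, p in enumerate(predicted)]
        total = sum(posterior)
        return [p / total for p in posterior]

    belief = [1.0, 0.0, 0.0]                    # technician starts on step 1
    for observed in ["pipette", "pipette", "incubate", "vortex"]:
        belief = update(belief, observed)
        print(observed, [round(b, 2) for b in belief])

In a fuller model, a belief that stalls or contradicts the expected step is the kind of signal that could trigger an error warning.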

Beyond the Wet Lab

We chose the wet lab as a test environment because of its complex, multistep tasks and rich human/object interactions. It is not the only such environment, however, and we see valuable applications for our results beyond the wet science lab. Other formal environments such as factories or medical facilities could benefit from the tutoring, self-documenting, and error-checking capabilities. Outside the work environment, our technology could easily be applied to informal education and DIY tasks such as cooking or auto maintenance.

Key Participants

A large number of faculty and students in our center are participating in the Smart Wet Lab Assistant project. In particular, Eric Klavins and the Klavins Lab are providing the physical environment, biology domain knowledge, and lots of enthusiastic collaboration. Novel sensing artifacts are courtesy of Joshua Smith and the Sensor Systems Lab. At the University of Washington, machine learning and vision expertise comes from Jeff Bilmes and the Signal, Speech and Language Interpretation (SSLI) Lab, and from Dieter Fox’s Robotics and State Estimation Lab. Our work in natural language understanding and integration is being led by Luke Zettlemoyer. At Rochester we work with AI faculty member Henry Kautz in the Institute for Data Science, at Georgia Tech with Jim Rehg and the Computational Perception Lab, and at Stanford with Fei-Fei Li and her Vision Lab.

A quick and campy montage of data captured during our research