EP3259069B1 - Classification of barcode tag conditions from top view sample tube images for laboratory automation - Google Patents
Classification of barcode tag conditions from top view sample tube images for laboratory automation Download PDFInfo
- Publication number
- EP3259069B1 EP3259069B1 EP16752913.0A EP16752913A EP3259069B1 EP 3259069 B1 EP3259069 B1 EP 3259069B1 EP 16752913 A EP16752913 A EP 16752913A EP 3259069 B1 EP3259069 B1 EP 3259069B1
- Authority
- EP
- European Patent Office
- Prior art keywords
- roi
- tube
- barcode tag
- sample tube
- tray
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B01—PHYSICAL OR CHEMICAL PROCESSES OR APPARATUS IN GENERAL
- B01L—CHEMICAL OR PHYSICAL LABORATORY APPARATUS FOR GENERAL USE
- B01L3/00—Containers or dishes for laboratory use, e.g. laboratory glassware; Droppers
- B01L3/54—Labware with identification means
- B01L3/545—Labware with identification means for laboratory containers
- B01L3/5453—Labware with identification means for laboratory containers for test tubes
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B01—PHYSICAL OR CHEMICAL PROCESSES OR APPARATUS IN GENERAL
- B01L—CHEMICAL OR PHYSICAL LABORATORY APPARATUS FOR GENERAL USE
- B01L9/00—Supporting devices; Holding devices
- B01L9/06—Test-tube stands; Test-tube holders
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N35/00—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
- G01N35/00584—Control arrangements for automatic analysers
- G01N35/00594—Quality control, including calibration or testing of components of the analyser
- G01N35/00613—Quality control
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N35/00—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
- G01N35/00584—Control arrangements for automatic analysers
- G01N35/00722—Communications; Identification
- G01N35/00732—Identification of carriers, materials or components in automatic analysers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/14—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
- G06K7/1404—Methods for optical code recognition
- G06K7/1408—Methods for optical code recognition the method being specifically adapted for the type of code
- G06K7/1413—1D bar codes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B01—PHYSICAL OR CHEMICAL PROCESSES OR APPARATUS IN GENERAL
- B01L—CHEMICAL OR PHYSICAL LABORATORY APPARATUS FOR GENERAL USE
- B01L2200/00—Solutions for specific problems relating to chemical or physical laboratory apparatus
- B01L2200/14—Process control and prevention of errors
- B01L2200/143—Quality control, feedback systems
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B01—PHYSICAL OR CHEMICAL PROCESSES OR APPARATUS IN GENERAL
- B01L—CHEMICAL OR PHYSICAL LABORATORY APPARATUS FOR GENERAL USE
- B01L2300/00—Additional constructional details
- B01L2300/02—Identification, exchange or storage of information
- B01L2300/021—Identification, e.g. bar codes
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B01—PHYSICAL OR CHEMICAL PROCESSES OR APPARATUS IN GENERAL
- B01L—CHEMICAL OR PHYSICAL LABORATORY APPARATUS FOR GENERAL USE
- B01L2300/00—Additional constructional details
- B01L2300/08—Geometry, shape and general structure
- B01L2300/0809—Geometry, shape and general structure rectangular shaped
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N35/00—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
- G01N35/00584—Control arrangements for automatic analysers
- G01N35/00722—Communications; Identification
- G01N35/00732—Identification of carriers, materials or components in automatic analysers
- G01N2035/00742—Type of codes
- G01N2035/00752—Type of codes bar codes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30204—Marker
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
- G06V2201/034—Recognition of patterns in medical or anatomical images of medical instruments
Definitions
- the present invention relates generally to detection of conditions of barcode tags, and more particularly to utilizing top-view sample tube images to classify conditions of barcode tags on sample tubes.
- Barcode tags are frequently used on sample tubes in clinical laboratory automation systems to uniquely identify and track the sample tubes, and are often the only means of associating a patient with the sample inside a particular sample tube. Through normal, everyday use, the condition of the barcode tags may deteriorate through tearing, peeling, discoloring, and other deformations. Such deterioration hinders lab automation systems from streamlining sample tube processing.
- US 2013/0129166 A1 discloses a modular laboratory automation system that comprises different laboratory units and transport systems.
- Patient samples associated with a unique laboratory identifier, such as a barcode, may be loaded into a rack that is moved into one of several lanes of an input module.
- a gripper transports individual sample tubes to a fixed matrix called the distribution area.
- the laboratory automation system can also utilize a circular barcode identification device on top of a sample tube in order to identify a sample tube before it is handled by the gripper of the input module.
- Embodiments are directed to classifying barcode tag conditions on sample tubes from top view images to streamline sample tube handling in advanced clinical laboratory automation systems.
- the classification of barcode tag conditions advantageously leads to the automatic detection of problematic barcode tags, allowing for the system, or a user, to take necessary steps to fix the problematic barcode tags.
- the identified sample tubes with problematic barcode tags may be dispatched to a separate workflow apart from the normal tube handling procedures to rectify the problematic barcode tags.
- a vision system is utilized to perform an automatic classification of barcode tag conditions on sample tubes from top view images.
- An exemplary vision system may comprise a drawer for loading and unloading tube trays on which sample tubes are contained. Each tube tray includes a plurality of tube slots, each configured to hold a sample tube.
- the vision system further comprises one or more cameras mounted above an entrance area of the drawer, allowing for acquisition of images of the sample tubes as the drawer is being inserted. According to an embodiment, each sample tube is captured in multiple top-view images from varying perspectives.
- FIG. 1 is a representation of an exemplary drawer vision system 100 in which tube trays 120 and sample tubes 130 contained thereon are characterized by obtaining and analyzing images thereof, according to an embodiment.
- One or more drawers 110 are movable between an open and a closed position and are provided in a work envelope 105 for a sample handler.
- One or more tube trays 120 may be loaded into a drawer 110 or may be a permanent feature of the drawer 110.
- Each tube tray 120 has an array of rows and columns of slots (as depicted in exemplary tray 121) in which tubes 130 may be held.
- images are taken of a tube tray 120; the images are analyzed to classify the barcode tag conditions of the sample tubes 130.
- a moving-tray/fixed camera approach is used, according to embodiments provided herein, to capture the images for analysis thereof.
- an image capture system 140 is used to take images of the tube tray 120 and the tubes 130 contained thereon.
- the image capture system 140 includes one or more cameras positioned at or near the entrance to the work envelope 105. The one or more cameras may be positioned above the surface of the tube tray 120.
- the cameras may be placed 7.62-15.24 cm (three to six inches) above the surface to capture a high resolution image of the tube tray 120. Other distances and/or positioning may also be used depending on the features of the cameras and the desired perspective and image quality.
- the image capture system 140 may include one or more lighting sources, such as an LED flash. As the tube tray 120 is already required to be slid into the work envelope 105, adding the fixed image capture system 140 does not add an excess of cost or complexity to the work envelope 105.
- the image capture system 140 also includes one or more processors to perform the image capture algorithms and subsequent classification analysis, as further described below.
- the image capture system 140 captures an image each time a row of the tube tray 120 is moved into a center position or a position substantially centered under the one or more cameras. More than one row of the tubes 130 can be captured in this image, with one row being centered or substantially centered beneath the image capture system 140, while adjacent rows are captured from an oblique angle in the same image. By capturing more than one row at a time, the rows of tubes 130 are captured from multiple perspectives, providing for depth and perspective information to be captured in the images for each tube 130.
- a tri-scopic perspective of a row of tubes 130 is obtained as the row of tubes 130 is captured in multiple images.
- a single row may appear in the bottom portion of an image (from an oblique perspective) when the subsequent row is centered or substantially centered beneath the image capture system 140; that single row may then appear substantially centered in an image (from a substantially top-down perspective) when the row of tubes 130 itself is centered or substantially centered beneath the image capture system 140; and that single row may appear in the top portion of an image (from another oblique perspective) when the preceding row of tubes 130 is centered or substantially centered beneath the image capture system 140.
- a stereoscopic perspective of a row of tubes 130 may be captured as images are taken when the image capture system 140 is centered or substantially centered above a point between two adjacent rows (allowing each row to appear in two images at two oblique perspectives).
- rows may appear in more than three images, in more than three perspectives, allowing more three-dimensional information about each tube to be gleaned from a plurality of images.
- the invention is not limited to tri-scopic and stereoscopic perspectives of the row of tubes 130; instead, depending on features of the cameras and the positioning of the image capture system 140 with respect to the work envelope 105, additional perspectives may be obtained.
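Under the one-image-per-centered-row scheme described above, the set of images in which a given row appears is easy to enumerate. The sketch below assumes image k is taken when row k sits centered under the cameras (an illustrative indexing convention, not one fixed by the patent):

```python
def images_containing_row(row, num_rows):
    """Indices of the captured images in which a given tray row is expected
    to appear: obliquely in images row-1 and row+1 (top and bottom portions)
    and top-down in image row, clipped to the valid image range."""
    return [k for k in (row - 1, row, row + 1) if 0 <= k < num_rows]
```

For interior rows this yields the tri-scopic triple of perspectives; rows at either end of the tray appear in only two images.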
- the exemplary drawer vision system 100 described with respect to FIG. 1 is one type of configuration in which sample tubes may be arranged for the classification of barcode tag conditions on sample tubes from top view images, as provided by embodiments described herein.
- the invention is not limited to the drawer configuration and other configurations may instead be utilized.
- a flat surface with guide rails may be provided. This configuration allows for an operator or a system to align keying features on the trays to the rails and push the trays to a working area.
- the classification of barcode tag conditions on sample tubes from top view images is based on the following factors: (1) a region-of-interest (ROI) extraction and rectification method based on sample tube detection; (2) a barcode tag condition classification method based on holistic features uniformly sampled from the rectified ROI; and (3) a problematic barcode tag area localization method based on pixel-based feature extraction.
- ROI: region-of-interest
- barcode tag conditions are grouped into three main categories: good, warning, and error. Subcategories are further derived within each of the main categories such as deformation, peeling, folding, tear, label too high, etc.
- FIG. 2 illustrates a flow diagram of a method of classifying barcode tag conditions on sample tubes, according to an embodiment.
- top view image sequences of the tube tray are acquired.
- the acquisition of images may comprise an input image sequence containing images obtained during insertion of a drawer, for example.
- ROI extraction and rectification of the sample tubes from each input image is performed.
- the rectification may include rectifying to a canonical orientation.
- features are extracted and, at 240, inputted into a classifier to determine a barcode tag condition for a sample tube.
- the determination of the barcode tag condition is based on the barcode tag condition category, provided at 250.
- if, at 260, a problematic barcode tag is identified, a pixel-based classifier is applied to localize the problematic area (270); if a problematic barcode tag is not identified, the process ends.
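The flow of FIG. 2 can be sketched as a small pipeline; every function name below is a hypothetical placeholder for the corresponding stage (ROI extraction and rectification, feature extraction, classification, and problem-area localization), not an API from the patent:

```python
def classify_tube(images, extract_roi, extract_features, classify, localize):
    """Per-tube flow: rectify an ROI from each input image, concatenate the
    features extracted from each ROI, predict a condition category, and run
    the pixel-based localization step only for problematic tags."""
    rois = [extract_roi(img) for img in images]
    features = [f for roi in rois for f in extract_features(roi)]
    condition = classify(features)
    problem_area = localize(rois) if condition != "good" else None
    return condition, problem_area
```

With stub stages plugged in, the skeleton makes the control flow concrete: localization runs only when the classifier reports a non-good condition.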
- the ROI of the sample tube 130 is defined as the region containing the sample tube from its top down to the tray surface, plus the regions extending out from the tube that may contain deformed or folded barcode tags.
- since a sample tube can only sit in a tube slot and its height and diameter are within a certain range, its plausible two-dimensional projection can be determined with knowledge of the camera's intrinsic calibration and its extrinsic pose with respect to the tray surface.
- the tube top circle is detected based on known robust detection methods to determine the exact sample tube location in the image. This region is further enlarged at both sides of the tube and then rectified into a canonical orientation.
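As an illustration of the extraction step, a plausible ROI can be derived from a detected tube-top circle. The side-margin and height ratios below are invented for this sketch; in practice they would follow from the camera calibration and the known tube geometry:

```python
def tube_roi(cx, cy, radius, img_w, img_h, side_margin=0.5, height_ratio=3.0):
    """Axis-aligned ROI (x0, y0, x1, y1) from a detected tube-top circle
    (cx, cy, radius): the box spans the tube plus a margin on both sides
    (to catch folded or peeling tags) and extends down toward the tray
    surface, clamped to the image bounds."""
    half_w = radius * (1.0 + side_margin)
    x0 = max(0, int(cx - half_w))
    x1 = min(img_w, int(cx + half_w))
    y0 = max(0, int(cy - radius))
    y1 = min(img_h, int(cy + radius * height_ratio))
    return x0, y0, x1, y1
```

The rectification into a canonical orientation would then warp this region so the tube axis is vertical; that step is omitted here.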
- various features are extracted to represent the characteristics of the sample tube appearance. For example, histogram of oriented gradients (HOG) and Sigma points have been observed to represent well the underlying gradient and color characteristics of the sample tube appearance.
- HOG: histogram of oriented gradients
- the rectified ROI is divided into non-overlapping cells for feature extraction.
- These local features are sequentially concatenated to represent the features of the rectified ROI.
- each sample tube can be observed from three consecutive images from the acquired image sequence. Each image provides a specific perspective of the sample tube. Features extracted from these three images are further concatenated to represent the final feature vector of the sample tube.
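A minimal sketch of this cell-based feature layout, with `cell_descriptor` standing in for the HOG or Sigma-point descriptor (the real descriptors are far richer than a per-cell statistic):

```python
def cell_features(roi, cells_x, cells_y, cell_descriptor):
    """Divide a rectified ROI (2-D list of pixel values) into a
    non-overlapping cells_y x cells_x grid and concatenate the per-cell
    descriptors in raster order."""
    h, w = len(roi), len(roi[0])
    ch, cw = h // cells_y, w // cells_x
    feats = []
    for gy in range(cells_y):
        for gx in range(cells_x):
            cell = [row[gx * cw:(gx + 1) * cw] for row in roi[gy * ch:(gy + 1) * ch]]
            feats.extend(cell_descriptor(cell))
    return feats

def tube_feature_vector(rois, cells_x, cells_y, cell_descriptor):
    """Concatenate the features of the (typically three) per-perspective
    ROIs of one tube into its final feature vector."""
    vec = []
    for roi in rois:
        vec.extend(cell_features(roi, cells_x, cells_y, cell_descriptor))
    return vec
```

Concatenating the three per-view feature blocks keeps the perspective information separable for the downstream classifier.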
- FIG. 3 shows sample results on the ROI extraction and rectification as well as the visualization of extracted features.
- (a), (d), and (h) illustrate the plausible region of the sample tube from different columns of the tube tray viewed from three different perspectives;
- (b), (e), and (i) show the corresponding rectified ROI of the sample tube;
- (c), (f), and (j) show the feature visualization for each rectified ROI.
- Based on the extracted feature vectors, various types of classifiers can be applied for the classification task.
- SVM: Support Vector Machines
- a linear SVM is utilized due to its simplicity; more sophisticated kernels may also be used.
- Other classifiers such as random decision trees (e.g., Random Forests), decision trees, and Probabilistic Boosting Trees, among others, can also be applied for the classification task.
- For the classification of barcode tag conditions, the conditions may be grouped into three main categories (good, warning, and error), or they may be grouped into different forms of deformation such as peeling, tearing, folding, etc.
- FIG. 4 illustrates the classification result on the three main categories
- FIG. 5 illustrates the classification result obtained for ten subcategories.
- FIG. 6 is a flowchart illustrating the segmentation process, according to an embodiment.
- the classification task can be performed with efficient feature types which can handle and discriminate the visual characteristics of deformations.
- Sigma points have shown reliable performance in this task since various filter responses and colors can be tightly integrated within a compact feature representation.
- the classification can be performed quickly by using integral structures.
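The "integral structures" referred to here are summed-area tables: built once per image, they make any rectangular region sum an O(1) lookup, which is what keeps per-pixel classification fast. A minimal sketch:

```python
def integral_image(img):
    """Summed-area table with a zero border: ii[y][x] is the sum of img
    over the half-open region [0, y) x [0, x)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of the original image over [x0, x1) x [y0, y1) in O(1)."""
    return ii[y1][x1] - ii[y0][x1] - ii[y1][x0] + ii[y0][x0]
```

The same trick extends to per-channel filter responses, which is how Sigma-point-style features stay cheap to evaluate at every pixel.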
- image sequences of the tube tray are acquired; at 620, a ROI of each sample tube is extracted from the input image; and at 630, from the extracted ROI, features are extracted.
- the pixel-based classification task is performed on each pixel in the ROI to determine how likely it is that the pixel belongs to the problematic area for this specific condition (640).
- the likelihood is further refined in a Conditional Random Field (CRF) framework, or the like, to incorporate smoothness constraints on the output such that nearby fragmented responses can be merged and noisy outliers can be removed (650).
- CRF: Conditional Random Field
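Full CRF inference is beyond a short sketch, but its qualitative effect on the per-pixel likelihood map (merging nearby fragmented responses, removing isolated outliers) can be illustrated with a simple majority filter over each pixel's 4-neighborhood; this is only a cheap stand-in for the smoothness term, not an actual CRF:

```python
def smooth_labels(mask, rounds=1):
    """Majority-vote smoothing of a binary label map: each pixel takes the
    majority of its 4-neighborhood plus itself, so isolated responses are
    removed and single-pixel holes inside a region are filled."""
    h, w = len(mask), len(mask[0])
    for _ in range(rounds):
        out = [row[:] for row in mask]
        for y in range(h):
            for x in range(w):
                votes = [mask[y][x]]
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        votes.append(mask[ny][nx])
                out[y][x] = 1 if sum(votes) * 2 > len(votes) else 0
        mask = out
    return mask
```

A real CRF would weigh the classifier's per-pixel likelihoods against pairwise smoothness costs instead of taking a hard vote.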
- FIG. 7 shows sample results on the problematic area localization. This information is used to report problematic image regions for further decision making or visualization.
- a controller is provided for managing the image analysis of the images taken by the cameras for classifying barcode tag conditions on sample tubes from top view images.
- the controller may be, according to an embodiment, part of a sample handler that is used in an in vitro diagnostics (IVD) environment to handle and move the tube trays and the tubes between storage locations, such as the work envelope, to analyzers.
- IVD: in vitro diagnostics
- One or more memory devices may be associated with the controller.
- the one or more memory devices may be internal or external to the controller.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Chemical & Material Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Data Mining & Analysis (AREA)
- Analytical Chemistry (AREA)
- Evolutionary Computation (AREA)
- Chemical Kinetics & Catalysis (AREA)
- Clinical Laboratory Science (AREA)
- Multimedia (AREA)
- Quality & Reliability (AREA)
- Biochemistry (AREA)
- Immunology (AREA)
- Pathology (AREA)
- Medical Informatics (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Databases & Information Systems (AREA)
- Electromagnetism (AREA)
- Toxicology (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Probability & Statistics with Applications (AREA)
- Investigating Or Analysing Biological Materials (AREA)
- Automatic Analysis And Handling Materials Therefor (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Description
- The present invention relates generally to detection of conditions of barcode tags, and more particularly to utilizing top-view sample tube images to classify conditions of barcode tags on sample tubes.
- Barcode tags are frequently used on sample tubes in clinical laboratory automation systems to uniquely identify and track the sample tubes, and are often the only means of associating a patient with the sample inside a particular sample tube. Through normal, everyday use, the condition of the barcode tags may deteriorate through tearing, peeling, discoloring, and other deformations. Such deterioration hinders lab automation systems from streamlining sample tube processing.
-
US 2013/0129166 A1 discloses a modular laboratory automation system that comprises different laboratory units and transport systems. Patient samples, associated with a unique laboratory identifier, such as a barcode, may be loaded into a rack that is moved into one of several lanes of an input module. From the lanes within the input module a gripper transports individual sample tubes to a fixed matrix called the distribution area. During the transfer, the levels of the constituent components are measured and top view photographs of the sample tube are taken and analyzed. The laboratory automation system can also utilize a circular barcode identification device on top of a sample tube in order to identify a sample tube before it is handled by the gripper of the input module. - Thus, there is a need for classifying barcode tag conditions on sample tubes to streamline sample tube handling in advanced clinical laboratory automation systems. There is also a need for such classification to be automatic, efficient, and unobtrusive.
- Embodiments are directed to classifying barcode tag conditions on sample tubes from top view images to streamline sample tube handling in advanced clinical laboratory automation systems.
- The foregoing and other aspects of the present invention are best understood from the following detailed description when read in connection with the accompanying drawings. For the purpose of illustrating the invention, there is shown in the drawings embodiments that are presently preferred, it being understood, however, that the invention is not limited to the specific instrumentalities disclosed. Included in the drawings are the following Figures:
-
FIG. 1 is a representation of an exemplary drawer vision system in which sample tubes are contained, for classifying barcode tag conditions on sample tubes from top view images, according to an embodiment; -
FIG. 2 illustrates a flow diagram of a method of classifying barcode tag conditions on sample tubes from top view images, according to an embodiment; -
FIG. 3 illustrates sample results on region of interest (ROI) extraction, rectification, and visualization of extracted features of sample tubes from top view images, according to an embodiment; -
FIG. 4 illustrates a classification result on three main categories for classifying barcode tag conditions on sample tubes from top view images, according to an embodiment; -
FIG. 5 illustrates a classification result obtained for ten subcategories for classifying barcode tag conditions on sample tubes from top view images, according to an embodiment; -
FIG. 6 is a flowchart illustrating a segmentation process for classifying barcode tag conditions on sample tubes from top view images, according to an embodiment; and -
FIG. 7 shows sample results on the problematic area localization of sample tubes from top view images, according to an embodiment. - Embodiments are directed to classifying barcode tag conditions on sample tubes from top view images to streamline sample tube handling in advanced clinical laboratory automation systems. The classification of barcode tag conditions, according to embodiments provided herein, advantageously leads to the automatic detection of problematic barcode tags, allowing for the system, or a user, to take necessary steps to fix the problematic barcode tags. For example, the identified sample tubes with problematic barcode tags may be dispatched to a separate workflow apart from the normal tube handling procedures to rectify the problematic barcode tags.
- A vision system is utilized to perform an automatic classification of barcode tag conditions on sample tubes from top view images. An exemplary vision system may comprise a drawer for loading and unloading tube trays on which sample tubes are contained. Each tube tray includes a plurality of tube slots, each configured to hold a sample tube. The vision system further comprises one or more cameras mounted above an entrance area of the drawer, allowing for acquisition of images of the sample tubes as the drawer is being inserted. According to an embodiment, each sample tube is captured in multiple top-view images from varying perspectives.
-
FIG. 1 is a representation of an exemplarydrawer vision system 100 in whichtube trays 120 andsample tubes 130 contained thereon are characterized by obtaining and analyzing images thereof, according to an embodiment. One ormore drawers 110 are movable between an open and a closed position and are provided in awork envelope 105 for a sample handler. One ormore tube trays 120 may be loaded into adrawer 110 or may be a permanent feature of thedrawer 110. Eachtube tray 120 has an array of rows and columns of slots (as depicted in exemplary tray 121) in whichtubes 130 may be held. - According to embodiments, images are taken of a
tube tray 120; the images are analyzed to classify the barcode tag conditions of thesample tubes 130. A moving-tray/fixed camera approach is used, according to embodiments provided herein, to capture the images for analysis thereof. As thetube tray 120 is moved into thework envelope 105 by, for example, manually or automatically pushing in thedrawer 110, animage capture system 140 is used to take images of thetube tray 120 and thetubes 130 contained thereon. According to an embodiment, theimage capture system 140 includes one or more cameras positioned at or near the entrance to thework envelope 105. The one or more cameras may be positioned above the surface of thetube tray 120. For example, the cameras may be placed 7.62-15.24 cm (three to six inches) above the surface to capture a high resolution image of thetube tray 120. Other distances and/or positioning may also be used depending on the features of the cameras and the desired perspective and image quality. Optionally, theimage capture system 140 may include one or more lighting sources, such as an LED flash. As thetube tray 120 is already required to be slid into thework envelope 105, adding the fixedimage capture system 140 does not add an excess of cost or complexity to thework envelope 105. Theimage capture system 140 also includes one or more processors to perform the image capture algorithms and subsequent classification analysis, as further described below. - The
image capture system 140 captures an image each time a row of thetube tray 120 is moved into a center position or a position substantially centered under the one or more cameras. More than one row of thetubes 130 can be captured in this image, with one row being centered or substantially centered beneath theimage capture system 140, while adjacent rows are captured from an oblique angle in the same image. By capturing more than one row at a time, the rows oftubes 130 are captured from multiple perspectives, providing for depth and perspective information to be captured in the images for eachtube 130. - According to an embodiment, a tri-scopic perspective of a row of
tubes 130 is captured as the row oftubes 130 are captured in multiple images. For example, a single row may appear in the bottom portion of an image (from an oblique perspective) when the subsequent row is centered or substantially centered beneath theimage capture system 140; that single row may then appear substantially centered in an image (from a substantially top-down perspective) when the row oftubes 130 itself is centered or substantially centered beneath theimage capture system 140; and that single row may appear in the top portion of an image (from another oblique perspective) when the preceding row oftubes 130 is centered or substantially centered beneath theimage capture system 140. In another embodiment, a stereoscopic perspective of a row oftubes 130 may be captured as images are taken when theimage capture system 140 is centered or substantially centered above a point between two adjacent rows (allowing each row to appear in two images at two oblique perspectives). Similarly, rows may appear in more than three images, in more than three perspectives, allowing more three-dimensional information about each tube to be gleaned from a plurality of images. The invention is not limited to tri-scopic and stereoscopic perspectives of the row oftubes 130; instead, depending on features of the cameras and the positioning of theimage capture system 140 with respect to thework envelope 105, additional perspectives may be obtained. - The exemplary
drawer vision system 100 described with respect to FIG. 1 is one type of configuration in which sample tubes may be arranged for the classification of barcode tag conditions on sample tubes from top view images, as provided by embodiments described herein. The invention is not limited to the drawer configuration, and other configurations may instead be utilized. For example, in another embodiment, a flat surface with guide rails may be provided. This configuration allows an operator or a system to align keying features on the trays to the rails and push the trays to a working area. - The classification of barcode tag conditions on sample tubes from top view images is based on the following factors: (1) a region-of-interest (ROI) extraction and rectification method based on sample tube detection; (2) a barcode tag condition classification method based on holistic features uniformly sampled from the rectified ROI; and (3) a problematic barcode tag area localization method based on pixel-based feature extraction.
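The tri-scopic capture described earlier, in which each row appears in up to three consecutive images (centered on the preceding row, on the row itself, and on the subsequent row), implies a simple row-to-image index mapping. A minimal sketch, assuming one image is captured per centered row and images are numbered by the row they center on (the function name is hypothetical):

```python
def images_containing_row(row: int, num_rows: int) -> list[int]:
    """Return the indices of captured images in which tray row `row`
    is visible, assuming image i is centered on row i.  A row also
    appears obliquely in the images centered on its two neighbors,
    giving up to three perspectives of each tube."""
    return [i for i in (row - 1, row, row + 1) if 0 <= i < num_rows]
```

Edge rows only receive two perspectives, which matches the stereoscopic fallback mentioned in the text.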
- According to embodiments provided herein, barcode tag conditions are grouped into three main categories: good, warning, and error. Subcategories are further derived within each of the main categories, such as deformation, peeling, folding, tear, and label too high.
-
FIG. 2 illustrates a flow diagram of a method of classifying barcode tag conditions on sample tubes, according to an embodiment. At 210, top view image sequences of the tube tray are acquired. The acquired images may form an input image sequence containing, for example, images obtained during insertion of a drawer. - At 220, ROI extraction and rectification of the sample tubes from each input image is performed. The rectification, according to an embodiment, may include rectifying to a canonical orientation.
- At 230, from the rectified ROI, features are extracted and, at 240, inputted into a classifier to determine a barcode tag condition for a sample tube. The determination of the barcode tag condition is based on the barcode tag condition category, provided at 250.
- If, at 260, a problematic barcode tag is identified, according to an embodiment, a pixel-based classifier is applied to localize the problematic area (270). If, at 260, a problematic barcode tag is not identified, the process ends.
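The FIG. 2 flow (steps 210 through 270) can be sketched as a pipeline. This is a minimal illustration, not the patented implementation; the stage callables (`extract_rois`, `extract_features`, `classify`, `localize`) are hypothetical stand-ins passed in as parameters:

```python
def classify_tube_tags(tray_images, extract_rois, extract_features,
                       classify, localize):
    """Sketch of the FIG. 2 flow: for each sample tube ROI, extract
    features, classify the barcode tag condition, and localize the
    problematic area only when the tag is flagged as problematic."""
    results = []
    for image in tray_images:                  # 210: acquired image sequence
        for roi in extract_rois(image):        # 220: ROI extraction/rectification
            feats = extract_features(roi)      # 230: feature extraction
            condition = classify(feats)        # 240/250: condition category
            # 260/270: pixel-based localization only for problematic tags
            area = localize(roi) if condition != "good" else None
            results.append((condition, area))
    return results
```

The conditional at the end mirrors step 260: tubes classified as good skip the more expensive pixel-based localization.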
- The ROI of the
sample tube 130 is defined as the region containing the sample tube from its top to the tray surface area, plus the regions extending out from the tube which may contain deformed or folded barcode tags. As a sample tube can only sit in a tube slot and its height and diameter are within a certain range, its plausible two-dimensional projection can be determined with knowledge of the camera's intrinsic calibration and its extrinsic pose with respect to the tray surface. Within the plausible region, the tube top circle is detected using known robust detection methods to determine the exact sample tube location in the image. This region is further enlarged at both sides of the tube and then rectified into a canonical orientation. - According to an embodiment, within the rectified ROI, various features are extracted to represent the characteristics of the sample tube appearance. For example, histogram of oriented gradients (HOG) and Sigma points have been observed to represent well the underlying gradient and color characteristics of the sample tube appearance. To handle the trade-off between the dimensionality of the feature vectors and their representative power, the rectified ROI is divided into non-overlapping cells for feature extraction. These local features are sequentially concatenated to represent the features of the rectified ROI. According to an embodiment, each sample tube can be observed in three consecutive images from the acquired image sequence, each providing a specific perspective of the sample tube. Features extracted from these three images are further concatenated to form the final feature vector of the sample tube.
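The cell-based feature scheme above can be illustrated with a simplified HOG-style sketch: split the rectified ROI into non-overlapping cells, histogram the gradient orientations within each cell, concatenate the cell histograms, and then concatenate the vectors from the three views. This is an assumption-laden simplification (grayscale input, no block normalization, no Sigma-point color features), not the patented feature extractor:

```python
import numpy as np

def cell_orientation_histograms(roi, cell=8, bins=9):
    """HOG-like holistic features: per-cell magnitude-weighted
    histograms of unsigned gradient orientation, concatenated in
    scan order.  Incomplete border cells are dropped."""
    gy, gx = np.gradient(roi.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation
    h, w = roi.shape
    feats = []
    for y in range(0, h - h % cell, cell):
        for x in range(0, w - w % cell, cell):
            a = ang[y:y + cell, x:x + cell].ravel()
            m = mag[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0.0, np.pi),
                                   weights=m)
            feats.append(hist)
    return np.concatenate(feats)

def tube_feature_vector(views):
    """Concatenate the features from the three consecutive views of
    a tube into the final per-tube feature vector."""
    return np.concatenate([cell_orientation_histograms(v) for v in views])
```

The per-cell dimensionality (here 9 bins) times the number of cells and views governs the trade-off between feature-vector size and representativeness discussed in the text.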
-
FIG. 3 shows sample results of the ROI extraction and rectification as well as the visualization of extracted features. (a), (d), and (h) illustrate the plausible region of the sample tube from different columns of the tube tray viewed from three different perspectives; (b), (e), and (i) show the corresponding rectified ROI of the sample tube; and (c), (f), and (j) show the feature visualization for each rectified ROI. - Based on the extracted feature vectors, various types of classifiers can be applied for the classification task. According to an embodiment, the widely used Support Vector Machine (SVM) is adopted as the classifier, although the invention is not limited to this specific type of classifier. In one embodiment, a linear SVM is utilized due to its simplicity, although more sophisticated kernels may also be used. Other classifiers, such as random decision trees (e.g., Random Forests), decision trees, and Probabilistic Boosting Trees, among others, can also be applied to the classification task. For the classification of barcode tag conditions, the barcode tag conditions may be grouped into three main categories: good, warning, and error; or they may be grouped into different forms of deformation such as peeling, tear, folding, etc.
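At test time, a linear multi-class SVM reduces to scoring the feature vector against one weight vector per category and taking the argmax. A minimal one-vs-rest sketch, assuming the weights `W` and biases `b` come from a prior training stage (e.g., hinge-loss training as in scikit-learn's `LinearSVC`), which is not shown here:

```python
import numpy as np

# The three main categories named in the text; subcategory labels
# could be substituted without changing the decision rule.
CATEGORIES = ["good", "warning", "error"]

def predict_condition(x, W, b):
    """One-vs-rest linear SVM decision: score each category with
    w_k . x + b_k and return the highest-scoring category label.
    W has shape (3, d) and b has shape (3,) for feature dimension d."""
    scores = W @ x + b
    return CATEGORIES[int(np.argmax(scores))]
```

A kernel SVM would replace the inner product with a kernel expansion over support vectors; the decision structure is otherwise the same.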
FIG. 4 illustrates the classification result on the three main categories, and FIG. 5 illustrates the classification result obtained for ten subcategories. - To obtain detailed information on regions with problematic subcategories, a pixel-based classifier is trained to localize and segment the specific area with visible deformation.
FIG. 6 is a flowchart illustrating the segmentation process, according to an embodiment. The classification task can be performed with efficient feature types that can handle and discriminate the visual characteristics of deformations. In particular, Sigma points have shown reliable performance in this task, since various filter responses and colors can be tightly integrated within a compact feature representation. Together with random decision trees, the classification can be performed quickly by using integral structures. - Similar to the preprocessing step in the condition classification (
FIG. 2), at 610, image sequences of the tube tray are acquired; at 620, an ROI of each sample tube is extracted from the input image; and at 630, features are extracted from the extracted ROI. The pixel-based classification task is performed on each pixel in the ROI to determine the likelihood that the pixel belongs to the problematic area for the specific condition (640). The likelihood is further refined in a Conditional Random Field (CRF) framework, or the like, to incorporate smoothness constraints on the output, such that nearby fragmented responses can be merged and noisy outliers can be removed (650). -
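The refinement in steps 640-650 can be illustrated with a deliberately simplified stand-in: a full CRF inference is beyond a short sketch, so iterated 3x3 neighborhood averaging plays the role of the smoothness constraint, merging nearby fragmented responses and suppressing isolated outliers before thresholding. This is an illustrative approximation, not the CRF formulation itself:

```python
import numpy as np

def refine_likelihood(likelihood, iters=2, thresh=0.5):
    """Given a per-pixel likelihood map from the pixel-based
    classifier (step 640), apply iterated 3x3 box smoothing as a
    crude surrogate for CRF refinement (step 650), then threshold
    to a binary problem-area mask."""
    p = likelihood.astype(float)
    for _ in range(iters):
        padded = np.pad(p, 1, mode="edge")
        # average of the 3x3 neighborhood around each pixel
        p = sum(padded[dy:dy + p.shape[0], dx:dx + p.shape[1]]
                for dy in range(3) for dx in range(3)) / 9.0
    return p >= thresh
```

An isolated high-likelihood pixel is averaged away, while a solid deformed region survives the smoothing, which is the qualitative behavior the CRF smoothness term provides.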
FIG. 7 shows sample results of the problematic area localization. This information is used to report problematic image regions for further decision making or visualization. - A controller is provided for managing the analysis of the images taken by the cameras for classifying barcode tag conditions on sample tubes from top view images. The controller may be, according to an embodiment, part of a sample handler that is used in an in vitro diagnostics (IVD) environment to handle and move the tube trays and the tubes between storage locations, such as the work envelope, and analyzers. One or more memory devices may be associated with the controller. The one or more memory devices may be internal or external to the controller.
Claims (15)
- A method of classifying barcode tag conditions on sample tubes held in a tube tray (120) which has an array of rows and columns of slots in which sample tubes (130) may be held, the method comprising:
acquiring, by an image capture system (140) comprised of at least one camera, top view image sequences of the tube tray; and
analyzing, by one or more processors in communication with the image capture system, the top view image sequences, the analyzing comprising, for each sample tube:
rectifying a region of interest (ROI) from each input image of the top view image sequences;
extracting features from the rectified ROI; and
inputting the extracted features from the rectified ROI into a classifier to determine the barcode tag condition, the barcode tag condition based upon a barcode tag condition category stored in the classifier,
characterized in that the image capture system (140) captures an image each time a row of the tube tray (120) is moved into a center position or a position substantially centered under the one or more cameras, and in that the classifier comprises a pixel-based classifier trained to localize and segment the ROI with visible deformation.
- The method of claim 1, wherein the analyzing by the one or more processors further comprises:
if the determined barcode tag condition comprises a problematic identification, localizing a problematic area and issuing a warning via an output device in communication with the one or more processors. - The method of claim 1, wherein the tube tray (120) is configured to fit within a portion of a drawer movable between an open and a closed position, wherein the top view image sequences of the tube tray preferably comprises images of the tube tray at predetermined positions in the drawer.
- The method of claim 1, wherein rectifying a region of interest (ROI) comprises rectifying the ROI to a canonical geometric orientation.
- The method of claim 1, wherein the ROI for a particular sample tube comprises a region including the particular sample tube (130) from a top portion of the particular sample tube to a surface area of the tube tray to a given region extending outward from the particular sample tube, wherein the given region comprises the barcode tag for the particular sample tube.
- The method of claim 1, wherein the barcode tag conditions are grouped into a predetermined number of main categories, each of the main categories comprising a plurality of subcategories.
- The method of claim 1, wherein the localization and segmentation of the ROI is performed on each pixel in the ROI to determine a likelihood that a particular pixel belongs to a problematic area.
- A vision system for an in vitro diagnostics environment for classifying barcode tag conditions on sample tubes (130) held in a tube tray (120), the vision system comprising:
a surface configured to receive the tube tray (120), wherein the tube tray has an array of rows and columns of slots, each configured to receive a sample tube;
at least one camera configured to capture top view image sequences of the tube tray (120) positioned on the surface; and
a processor in communication with the at least one camera, the processor configured to perform the following steps for each sample tube:
rectify a region of interest (ROI) from each input image of the top view image sequences;
extract features from the rectified ROI; and
input the extracted features from the rectified ROI into a classifier to determine the barcode tag condition, the barcode tag condition based upon a barcode tag condition category stored in the classifier,
characterized in that the at least one camera is configured to capture an image each time a row of the tube tray (120) is moved into a center position or a position substantially centered under the one or more cameras, and in that the classifier comprises a pixel-based classifier trained to localize and segment the ROI with visible deformation.
- The system of claim 8, wherein the processor is further configured to:
if the determined barcode tag condition comprises a problematic identification, localize a problematic area and issue a warning via an output device in communication with the processor. - The system of claim 8, wherein the surface comprises a drawer movable between an open and a closed position.
- The system of claim 10, wherein the top view image sequences of the tube tray comprises images of the tray at predetermined positions in the drawer.
- The system of claim 8, wherein rectifying a region of interest (ROI) comprises rectifying the ROI to a canonical geometric orientation.
- The system of claim 8, wherein the ROI for a particular sample tube (130) comprises a region including the particular sample tube from a top portion of the particular sample tube to a surface area of the tube tray to a given region extending outward from the particular sample tube, wherein the given region comprises the barcode tag for the particular sample tube.
- The system of claim 8, wherein the barcode tag conditions are grouped into a predetermined number of main categories, each of the main categories comprising a plurality of subcategories.
- The system of claim 8, wherein the localization and segmentation of the ROI is performed on each pixel in the ROI to determine a likelihood that a particular pixel belongs to a problematic area.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201562117280P | 2015-02-17 | 2015-02-17 | |
PCT/US2016/018096 WO2016133915A1 (en) | 2015-02-17 | 2016-02-16 | Classification of barcode tag conditions from top view sample tube images for laboratory automation |
Publications (3)
Publication Number | Publication Date |
---|---|
EP3259069A1 EP3259069A1 (en) | 2017-12-27 |
EP3259069A4 EP3259069A4 (en) | 2018-03-14 |
EP3259069B1 true EP3259069B1 (en) | 2019-04-24 |
Family
ID=56689314
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP16752913.0A Active EP3259069B1 (en) | 2015-02-17 | 2016-02-16 | Classification of barcode tag conditions from top view sample tube images for laboratory automation |
Country Status (6)
Country | Link |
---|---|
US (1) | US10325182B2 (en) |
EP (1) | EP3259069B1 (en) |
JP (1) | JP6560757B2 (en) |
CN (1) | CN107427835B (en) |
CA (1) | CA2976774C (en) |
WO (1) | WO2016133915A1 (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3259069B1 (en) * | 2015-02-17 | 2019-04-24 | Siemens Healthcare Diagnostics Inc. | Classification of barcode tag conditions from top view sample tube images for laboratory automation |
EP3259068B1 (en) * | 2015-02-17 | 2023-03-29 | Siemens Healthcare Diagnostics Inc. | Detection of barcode tag conditions on sample tubes |
JP7012746B2 (en) | 2017-04-13 | 2022-01-28 | シーメンス・ヘルスケア・ダイアグノスティックス・インコーポレーテッド | Label correction method and equipment during sample evaluation |
US11238318B2 (en) | 2017-04-13 | 2022-02-01 | Siemens Healthcare Diagnostics Inc. | Methods and apparatus for HILN characterization using convolutional neural network |
EP3610269A4 (en) | 2017-04-13 | 2020-04-22 | Siemens Healthcare Diagnostics Inc. | Methods and apparatus for determining label count during specimen characterization |
CN110252034B (en) * | 2019-05-10 | 2021-07-06 | 太原理工大学 | Biological 3D prints toilet's fungus degree control and monitoring system |
CN110119799A (en) * | 2019-05-11 | 2019-08-13 | 安图实验仪器(郑州)有限公司 | Sample rack heparin tube bar code visual identity method |
US11796446B2 (en) * | 2019-10-01 | 2023-10-24 | National Taiwan University | Systems and methods for automated hematological abnormality detection |
EP4080515A1 (en) * | 2021-04-19 | 2022-10-26 | Roche Diagnostics GmbH | A method for classifying an identification tag on a sample tube containing a sample and an automated laboratory system |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5153416A (en) * | 1989-09-20 | 1992-10-06 | Neeley William E | Procedure and assembly for drawing blood |
WO2003023571A2 (en) * | 2001-09-12 | 2003-03-20 | Burstein Technologies, Inc. | Methods for differential cell counts including related apparatus and software for performing same |
JP4196302B2 (en) * | 2006-06-19 | 2008-12-17 | ソニー株式会社 | Information processing apparatus and method, and program |
KR100988960B1 (en) * | 2008-04-23 | 2010-10-20 | 한국도로공사 | Bridge inspecting robot capable of climbing obstacle |
US8321055B2 (en) * | 2009-11-03 | 2012-11-27 | Jadak, Llc | System and method for multiple view machine vision target location |
US20130012916A1 (en) * | 2010-02-11 | 2013-01-10 | Glide Pharmaceutical Technologies Limited | Delivery of immunoglobulin variable domains and constructs thereof |
US8451384B2 (en) * | 2010-07-08 | 2013-05-28 | Spinella Ip Holdings, Inc. | System and method for shot change detection in a video sequence |
US8523075B2 (en) * | 2010-09-30 | 2013-09-03 | Apple Inc. | Barcode recognition using data-driven classifier |
US9310791B2 (en) * | 2011-03-18 | 2016-04-12 | Siemens Healthcare Diagnostics Inc. | Methods, systems, and apparatus for calibration of an orientation between an end effector and an article |
KR20140091033A (en) * | 2011-11-07 | 2014-07-18 | 베크만 컬터, 인코포레이티드 | Specimen container detection |
EP2809444B1 (en) * | 2012-02-03 | 2018-10-31 | Siemens Healthcare Diagnostics Inc. | Barcode reading test tube holder |
US10145857B2 (en) * | 2013-03-14 | 2018-12-04 | Siemens Healthcare Diagnostics Inc. | Tube tray vision system |
US9594983B2 (en) | 2013-08-02 | 2017-03-14 | Digimarc Corporation | Learning systems and methods |
EP3259069B1 (en) * | 2015-02-17 | 2019-04-24 | Siemens Healthcare Diagnostics Inc. | Classification of barcode tag conditions from top view sample tube images for laboratory automation |
-
2016
- 2016-02-16 EP EP16752913.0A patent/EP3259069B1/en active Active
- 2016-02-16 JP JP2017542012A patent/JP6560757B2/en active Active
- 2016-02-16 WO PCT/US2016/018096 patent/WO2016133915A1/en active Application Filing
- 2016-02-16 CN CN201680010510.4A patent/CN107427835B/en active Active
- 2016-02-16 CA CA2976774A patent/CA2976774C/en active Active
- 2016-02-16 US US15/551,566 patent/US10325182B2/en active Active
Non-Patent Citations (1)
Title |
---|
None * |
Also Published As
Publication number | Publication date |
---|---|
CN107427835B (en) | 2020-06-05 |
WO2016133915A1 (en) | 2016-08-25 |
EP3259069A4 (en) | 2018-03-14 |
JP2018512566A (en) | 2018-05-17 |
US20180046883A1 (en) | 2018-02-15 |
CA2976774A1 (en) | 2016-08-25 |
CA2976774C (en) | 2023-02-28 |
CN107427835A (en) | 2017-12-01 |
EP3259069A1 (en) | 2017-12-27 |
US10325182B2 (en) | 2019-06-18 |
JP6560757B2 (en) | 2019-08-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3259069B1 (en) | Classification of barcode tag conditions from top view sample tube images for laboratory automation | |
EP3259068B1 (en) | Detection of barcode tag conditions on sample tubes | |
US11774735B2 (en) | System and method for performing automated analysis of air samples | |
CA2950296C (en) | Drawer vision system | |
US10510038B2 (en) | Computer implemented system and method for recognizing and counting products within images | |
US11226280B2 (en) | Automated slide assessments and tracking in digital microscopy | |
JP2018512567A5 (en) | ||
EP3652679B1 (en) | Methods and systems for learning-based image edge enhancement of sample tube top circles | |
US20070133856A1 (en) | Cross-frame object reconstruction for image-based cytology applications | |
US20220012884A1 (en) | Image analysis system and analysis method | |
CN111161295A (en) | Background stripping method for dish image | |
WO2005024395A1 (en) | System for organizing multiple objects of interest in field of interest | |
US20160334427A1 (en) | Priority indicator for automation system fluid sample | |
CN110622168A (en) | Medical image detection | |
US20220383618A1 (en) | Apparatus and methods of training models of diagnostic analyzers | |
JPH0886736A (en) | Method for analyzing particle image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20170913 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: WU, WEN Inventor name: CHEN, TERRENCE Inventor name: POLLACK, BENJAMIN Inventor name: SOOMRO, KHURRAM Inventor name: KLUCKNER, STEFAN Inventor name: CHANG, YAO-JEN |
|
A4 | Supplementary search report drawn up and despatched |
Effective date: 20180212 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G06K 9/32 20060101ALI20180206BHEP Ipc: G01N 21/01 20060101ALI20180206BHEP Ipc: B01L 3/14 20060101AFI20180206BHEP Ipc: B01L 9/06 20060101ALN20180206BHEP Ipc: G06K 9/78 20060101ALI20180206BHEP Ipc: G01N 35/00 20060101ALI20180206BHEP Ipc: G06K 9/72 20060101ALI20180206BHEP Ipc: G06T 7/60 20170101ALI20180206BHEP Ipc: G06K 9/62 20060101ALI20180206BHEP Ipc: B01L 3/00 20060101ALN20180206BHEP |
|
DAV | Request for validation of the european patent (deleted) | ||
DAX | Request for extension of the european patent (deleted) | ||
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G06K 9/32 20060101ALI20181017BHEP Ipc: G06K 7/14 20060101ALI20181017BHEP Ipc: G06K 9/46 20060101ALI20181017BHEP Ipc: B01L 3/14 20060101AFI20181017BHEP Ipc: G01N 35/00 20060101ALI20181017BHEP Ipc: B01L 3/00 20060101ALN20181017BHEP Ipc: G01N 21/01 20060101ALI20181017BHEP Ipc: G06K 9/72 20060101ALI20181017BHEP Ipc: G06K 9/20 20060101ALI20181017BHEP Ipc: G06T 7/11 20170101ALI20181017BHEP Ipc: G06T 7/00 20170101ALN20181017BHEP Ipc: G06T 7/60 20170101ALI20181017BHEP Ipc: B01L 9/06 20060101ALN20181017BHEP Ipc: G06K 9/78 20060101ALI20181017BHEP Ipc: G06K 9/62 20060101ALI20181017BHEP |
|
INTG | Intention to grant announced |
Effective date: 20181120 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: REF Ref document number: 1123491 Country of ref document: AT Kind code of ref document: T Effective date: 20190515 Ref country code: IE Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602016012936 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20190424 |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG4D |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190424 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190424 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190424 Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190424 Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190724 Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190824 Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190424 Ref country code: AL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190424 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190424 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190424 Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190424 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190424 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190725 Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190724 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 1123491 Country of ref document: AT Kind code of ref document: T Effective date: 20190424 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190824 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602016012936 Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190424 Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190424 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190424 Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190424 Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190424 Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190424 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190424 Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190424 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190424 |
|
26N | No opposition filed |
Effective date: 20200127 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190424 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
REG | Reference to a national code |
Ref country code: BE Ref legal event code: MM Effective date: 20200229 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200216 Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190424 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200229 Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200229 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200216 Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200229 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200229 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190424 Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190424 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190424 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20230308 Year of fee payment: 8 Ref country code: DE Payment date: 20220620 Year of fee payment: 8 |