WO2022117094A1 - Method, device, and computer program product for slides processing - Google Patents

Method, device, and computer program product for slides processing

Info

Publication number
WO2022117094A1
Authority
WO
WIPO (PCT)
Prior art keywords
image, label, sub, information, tray
Application number
PCT/CN2021/135516
Other languages
French (fr)
Inventor
Chang LONG
Xiaojun Tao
Weibin Xing
Original Assignee
Roche Diagnostics (Shanghai) Limited
Application filed by Roche Diagnostics (Shanghai) Limited
Priority to CN202180092893.5A (published as CN116829958A)
Publication of WO2022117094A1

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00 Microscopes
    • G02B21/36 Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G02B21/365 Control or image processing arrangements for digital or video microscopes
    • G02B21/367 Control or image processing arrangements for digital or video microscopes providing an output produced by processing a plurality of individual source images, e.g. image tiling, montage, composite images, depth sectioning, image comparison
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00 Microscopes
    • G02B21/34 Microscope slides, e.g. mounting specimens on microscope slides
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/155 Segmentation; Edge detection involving morphological operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/14 Image acquisition
    • G06V30/1444 Selective acquisition, locating or processing of specific regions, e.g. highlighted text, fiducial marks or predetermined fields
    • G06V30/1448 Selective acquisition, locating or processing of specific regions, e.g. highlighted text, fiducial marks or predetermined fields based on markings or identifiers characterising the document or the area
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N35/00 Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
    • G01N35/00584 Control arrangements for automatic analysers
    • G01N35/00722 Communications; Identification
    • G01N35/00732 Identification of carriers, materials or components in automatic analysers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20132 Image cropping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30024 Cell structures in vitro; Tissue sections in vitro
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30072 Microarray; Biochip, DNA array; Well plate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker

Definitions

  • Embodiments of the present disclosure generally relate to the field of digital pathology and, in particular, to a method, a device, and a computer program product for slides processing.
  • the present disclosure provides a solution for slides processing, where a plurality of slides can be automatically processed at one time, thus improving the efficiency of slides management.
  • Determining the plurality of sub-images from the second image may comprise detecting edges of the plurality of slots in the second image; and determining the plurality of sub-images from the second image based on the edges. Because the sub-images are determined based on edges of the slots, the edges of the slides may be precisely identified for further identifying the information provided by the label, specifically barcode information. In particular, barcode areas may be located for detection.
  • the method may further comprise storing the information in association with the first image.
  • the method may further comprise determining an index of a sub-image corresponding to the label, the index indicating a position of a slot holding the slide on the tray; and the storing of the information may further comprise storing the information in association with the index.
  • the storing of the information may further comprise storing the information in association with the sub-image corresponding to the slide.
  • the information may indicate an identification of the slide and the method may further comprise: receiving an input indicating a target identification of a slide; obtaining a target image of a tray based on the target identification; and presenting the target image as a response to the input.
  • an electronic device comprising one or more processors, one or more memories coupled to the one or more processors and having computer-executable instructions stored thereon.
  • the computer-executable instructions when executed by the one or more processors, cause the device to perform acts as follows: obtaining a first image of a tray comprising a plurality of slots, a slot being capable of holding a slide, the slide comprising a label and a specimen for pathologic analysis; determining a plurality of sub-images from the first image based on a structure of the tray, each of the plurality of sub-images corresponding to one of the plurality of slots; and extracting information from the label of the slide based on at least one of the sub-images.
  • the first image of the tray may be obtained by an image capturing device, the image capturing device comprising a camera, a light source and a housing for enclosing the camera and the light source.
  • the tray may be placed in the housing for image capturing.
  • Extracting information from the label of the slide may comprise: detecting a region associated with the label in the at least one of the sub-images; and extracting the information based on the region associated with the label.
  • Extracting information from the label of the slide may comprise: dividing a sub-image into a plurality of regions, at least one of the plurality of regions having a size corresponding to the label; and extracting the information from the at least one of the plurality of regions.
  • the label may comprise a two-dimensional code
  • extracting the information may comprise: for each of the plurality of regions: extracting the information encoded in the two-dimensional code from the region of the label; in accordance with a failure of extracting the information, obtaining data of a lightness channel corresponding to the region; generating a binary image based on the data of the lightness channel; filtering out a connective region in the binary image based on a shape or a size of the connective region; and extracting the information from the filtered binary image.
  • the acts may further comprise: presenting the first image and a visual element indicating a status of one of the plurality of sub-images, the status being selected from a group consisting of: a first status indicating that a label is contained in a corresponding sub-image and the information indicated by the label is successfully extracted, a second status indicating that no label is contained in a corresponding sub-image, and a third status indicating that a label is contained in a corresponding sub-image but the information indicated by the label fails to be extracted.
  • the information may indicate an identification of the slide and the acts may further comprise: receiving an input indicating a target identification of a slide; obtaining a target image of a tray based on the target identification; and presenting the target image as a response to the input.
  • a computer readable storage medium having computer-executable instructions stored thereon, the computer-executable instructions, when executed by a processor of an apparatus, causing the apparatus to perform the steps of the method in the first aspect described above.
  • a computer program product comprising computer-executable instructions which, when executed by a processor of an apparatus, cause the apparatus to perform the steps of the method in the first aspect described above.
  • a scanning device comprising a housing with an opening for loading a tray with a plurality of slots.
  • the tray is capable of holding at least one slide, which comprises a label and a specimen for pathologic analysis.
  • the scanning device further comprises a camera.
  • the camera may be configured to capture an image of the tray for extracting information from the label.
  • the scanning device may comprise a light source, specifically a light source with a configurable illuminative parameter.
  • the scanning device further comprises a processor configured to perform acts comprising, in response to the tray being loaded into the scanning device, obtaining an image of the tray, specifically by using the camera and the light source, determining a plurality of sub-images from the image of the tray based on a structure of the tray, each of the plurality of sub-images corresponding to one of the plurality of slots; and extracting information from the label of the slide based on at least one of the sub-images. Further, the acts may comprise storing the information in association with the first image.
  • the processor may be configured for performing the method for slide processing as described above or as will further be described below in more detail.
  • specimen as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to an arbitrary element which is removed or collected from a human body or from an animal body for pathologic analysis.
  • the specimen may refer to a piece of tissue or to an entire organ.
  • the specimen may also refer to a bodily fluid such as blood or urine.
  • label as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to an arbitrary identifier which may be attached to an article, such as to a sticker which may be attached or attachable on a surface of another element.
  • the identifier, as an example, may comprise an optical identifier, such as a barcode or a QR code, and/or an electronic identifier, such as an RFID identifier.
  • the label may comprise at least one adhesive surface. Specifically, the label may be configured for attachment on a surface of the slide.
  • the label may be configured for attachment on an area of a surface of the slide neighboring the supporting surface of the slide being configured for receiving the at least one specimen.
  • the label may be configured for tracking the slide during different processing stages.
  • the label may comprise at least one element for indicating information of the slide such as at least one one-dimensional code or at least one two-dimensional code.
  • extract information from the label specifically may refer, without limitation, to a machine-reading of data from a one-dimensional code or from a two-dimensional code via at least one optical reader as well as to electronic processing of the data.
  • slide processing is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to a procedure of applying one or more steps of at least one of slide preparation and slide analysis.
  • the slide processing may comprise a collection of different slide preparation and slide analysis steps in pathologic analysis.
  • the different steps may specifically include sampling, embedding, slicing, staining, reading and/or archiving.
  • the term “tray” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to an arbitrary carrier element, such as a flat element, which is configured for carrying at least one other object.
  • the tray may comprise at least one supporting surface configured for receiving the at least one other object. More specifically, the tray may comprise at least one recess or slot configured for receiving the at least one other object.
  • the term “slot” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to a recess or an opening within an arbitrary element such as within the tray.
  • the slot may have a surrounding frame configured for holding an object in a desired position.
  • the surrounding frame may be configured for preventing a dislocation of the object at least to a large extent when the element is tilted or transported.
  • image is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to data recorded by using a camera, such as a plurality of electronic readings from an imaging device, such as the pixels of the camera chip.
  • the image itself, thus, may comprise pixels, the pixels of the image correlating to pixels of the camera chip. Consequently, when referring to "pixels", reference is either made to the units of image information generated by the single pixels of the camera chip or to the single pixels of the camera chip directly.
  • the image may comprise raw pixel data.
  • the image may comprise data in the RGB space, single color data from one of R, G or B pixels, a Bayer pattern image or the like.
  • sub-image as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to an image which depicts a part or a section of another image. Specifically, as will be outlined in further detail below, the image may be divided into a plurality of sub-images. Thereby, each sub-image may depict another section of the image.
  • the method for slide processing comprises determining a plurality of sub-images from the first image based on a structure of the tray.
  • the term “based on a structure of the tray” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to the circumstance that the plurality of sub-images are obtained depending on an arrangement of the slots, a size of the slots, a shape of the slots, an orientation of the slots relative to each other, and a number of the slots on the tray.
  • other parameters may be considered.
  • the plurality of sub-images from the image may be determined such that one area of interest, such as one single slot, one slide received in the slot, or one area of the slide received in the slot, may be depicted completely, or at least to a large extent, on the sub-image.
  • enhancing an image is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to an arbitrary process which improves a color, a contrast and/or a quality of an image.
  • the enhancing may exemplarily include converting a color image into a grayscale image, converting an RGB image into an image with another color space such as HSV, HLS or the like, sharpening an image and/or the elimination of blurs. Further details will be given below in more detail.
  • Fig. 1 illustrates an environment 100 in which example embodiments of the present disclosure can be implemented
  • Fig. 3 illustrates an example flowchart of a method for slide processing according to an embodiment of the present disclosure.
  • Fig. 4 illustrates an example flowchart of a method for enhancing the image of the tray.
  • Fig. 5 illustrates an example diagram for dividing a tray image for batch scanning according to an embodiment of the present disclosure.
  • Fig. 7 illustrates a schematic block diagram of an example device 700 for implementing embodiments of the present disclosure.
  • references in the present disclosure to "one embodiment," "an embodiment," "an example embodiment," and the like indicate that the embodiment described may include a particular feature, structure, or characteristic, but it is not necessary that every embodiment includes the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an example embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • the terms "first" and "second" etc. may be used herein to describe various elements, but these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the scope of example embodiments.
  • the term “and/or” includes any and all combinations of one or more of the listed terms.
  • slides holding specimens of patients are prepared, read, and then analyzed.
  • Those slides are critical for a patient, and need to be efficiently tracked during different processing stages. For example, when a specimen on a slide is to be stained, an identification of the slide may need to be recorded for preventing incorrectly associating the slide with a different patient.
  • the slides associated with the patient also need to be physically transferred.
  • the original hospital/institute may need to record which of the slides are transferred, and the new hospital/institute may need to record which slides are received.
  • a label on a slide may be utilized for tracking.
  • a barcode contained on the label may be scanned, and the information encoded by the barcode may be extracted and recorded for tracking.
  • the present disclosure provides a solution for slides processing. In this solution, a first image of a tray comprising a plurality of slots is first obtained, wherein a slot is capable of holding a slide, and the slide comprises a label and a specimen for pathologic analysis. A plurality of sub-images can then be determined from the first image based on a structure of the tray, wherein each of the plurality of sub-images corresponds to one of the plurality of slots. Information can then be extracted from the label of the slide based on at least one of the sub-images. Through the solution, information of a plurality of slides can be automatically extracted at one time, which can significantly reduce the manual effort involved in scanning the slides.
  • Fig. 1 illustrates an environment 100 in which example embodiments of the present disclosure can be implemented.
  • the environment 100 may comprise a scanning device 110 which is configured to capture an image of a tray 116.
  • a tray 116 may comprise a plurality of slots for holding the slides.
  • Fig. 2 illustrates an example of the tray 116 in accordance with some embodiments. As shown in Fig. 2, the tray 116 may comprise twenty slots 210, a shape of which is designed to hold a slide 220. In the example of Fig. 2, the twenty slots 210 are arranged in two rows, each row comprising ten slots 210.
  • the shape of the tray 116 and the arrangement of the slots 210 as shown in Fig. 2 are only an example, and any other proper tray structure may be utilized. In one example, a circular tray could be utilized to hold the slides. In another example, the twenty slots 210 may be arranged in four rows, each comprising five slots.
  • one or more slides 220 may be placed in the slots 210.
  • An example slide 220 is shown in Fig. 2 for illustration.
  • the slide 220 may comprise a label 222 and the corresponding specimen 224.
  • the label 222 may comprise at least one element for indicating information of the slide 220.
  • the label 222 may comprise at least one character, which may indicate information associated with the slide 220.
  • the label 222 may comprise text “John” for indicating the name of a patient associated with the slide 220.
  • the at least one character such as the text “John” may be depicted in a first area 221 within the label 222 which is marked with dashed lines.
  • the label 222 may comprise at least a symbol, which may indicate information associated with the slide 220.
  • a logo of a hospital sampling the specimen 224 may be printed on the label 222 for indicating which hospital prepared this slide 220.
  • the at least one logo or at least one character with reference to a hospital such as the text “No. 3 HOSPITAL” may be depicted in a second area 223 within the label 222 which is marked with dashed lines.
  • the label 222 may comprise a graphically encoded representation.
  • the examples of the graphically encoded representation may comprise, but are not limited to: a one-dimensional barcode, a two-dimensional code (e.g., a QR code) and any other graphical representations for encoding information.
  • the structure of the scanning device 110 may be particularly designed.
  • the scanning device 110 may comprise a housing 118 enclosing a camera 114 for capturing an image of the tray 116.
  • the housing 118 may comprise an opening 113 for loading and unloading the tray 116.
  • a light source 112 may be provided within the housing 118. Examples of the light source 112 may comprise, but are not limited to, an incandescent lamp, a halogen lamp, a fluorescent lamp, a mercury lamp, a light-emitting diode (LED) lamp, or the like.
  • the light source may have configurable illuminative parameters, such as luminous flux, color temperature, power, brightness, and the like, which are adjustable by the user.
  • the tray 116 may be put within the housing 118, such that influence of environmental light outside of the housing 118 may be reduced and quality of the captured image may be improved.
  • the scanning device 110 may further comprise a support 117.
  • the tray 116 may be fixed onto the support 117 for image capturing. In this case, different trays may have a relatively stable position in the captured image, thereby facilitating analysis of the captured image. It should be understood that, though only one camera 114 and one light source 112 are shown in the example of Fig. 1, multiple cameras 114 and/or multiple light sources may be included in the scanning device 110.
  • the scanning device 110 may be communicatively coupled to a computing device 120.
  • the scanning device 110 may capture an image 140 of the tray 116 and then send the captured image 140 to the computing device 120 for analysis.
  • a pathologist may press a button (not shown in Fig. 1) on the scanning device 110 to cause the scanning device 110 to capture the image 140 and then transmit the image to the computing device 120.
  • the scanning device 110 may detect whether the tray 116 is ready within the housing 118, and may start capturing the image 140 of the tray 116 a predetermined time period after the tray 116 is determined as ready.
  • the scanning device 110 may receive an instruction from the computing device 120 and then start capturing the image 140 of the tray 116.
  • a user may interact with the computing device 120 through a graphical user interface for causing the scanning device 110 to capture the image 140 of the tray 116.
  • the computing device 120 may extract information from the label(s) of the slide(s) in the captured image 140.
  • the process of extracting information will be discussed in detail with reference to Figs. 3-7 below.
  • the computing device 120 may be coupled with a display 122.
  • the display 122 may present to a user (e.g., a pathologist) a graphical user interface.
  • the user may for example review the analysis results of the tray 116 through the graphical user interface.
  • the graphical user interface may also provide the user with some components for controlling the scanning device 110.
  • the user may interact with the graphical user interface to power on and/or power off the scanning device 110.
  • the user also may configure the parameters for capturing the image 140 of the tray 116 through the graphical user interface.
  • the computing device 120 may be further coupled to a storage device 130.
  • the computing device 120 may store the extracted information in the storage device 130, and may retrieve the stored information from the storage device 130 in response to a future query.
  • although the computing device 120 is shown as a separate entity from the scanning device 110, the computing device 120 may be contained in or integrated with the scanning device 110 as a hardware and/or software component of the scanning device 110. In such cases, the image analysis as will be discussed in detail below may then be implemented by the scanning device 110 itself.
  • the scanning device 110 may have a different structure than the example in Fig. 1.
  • a pathologist may use a cellphone or camera to capture an image 140 of a tray 116 which is placed on a desk.
  • the solution of slides processing as will be discussed in detail below can also be applied to an image of a tray captured by a different scanning device than the example in Fig. 1.
  • Fig. 3 illustrates a flowchart of an example process 300 for slide processing according to an embodiment of the present disclosure.
  • the process 300 may be implemented by the computing device 120 as shown in Fig. 1.
  • the process 300 will be described with reference to Figs. 1-2.
  • the computing device 120 obtains an image 140 (also referred to as the first image 140) of a tray 116 comprising a plurality of slots 210, wherein a slot 210 is capable of holding a slide 220, and the slide 220 comprises a label 222 and a specimen 224 for pathologic analysis.
  • At least one slide 220 may be placed in the slot(s) 210. It should be noted that it is unnecessary to have all of the slots 210 occupied. For example, when the tray 116 comprising twenty slots 210 is used, six slides may be placed in any suitable slots 210 for image capturing, with the other fourteen slots being empty.
  • a scanning device 110 may be used to capture an image 140 of the tray 116.
  • the scanning device 110 may send the captured image 140 of the tray 116 to the computing device 120 via a wired or wireless network.
  • the scanning device 110 may send the captured image 140 along with other captured images at one time, e.g., later in the day.
  • an image 140 of the tray 116 may be stored in a storage device, for example the storage device 130. Further, the computing device 120 may later obtain the image 140 of the tray 116 from the storage device 130 for further analysis.
  • the computing device 120 may for example obtain the image 140 of the tray 116 from an image capturing component (e.g., the camera 114) through internal communication within the scanning device 110.
  • the image of the tray 116 may be captured by any suitable devices, including but not limited to the example scanning device 110 shown in Fig. 1.
  • the computing device 120 determines a plurality of sub-images from the first image 140 based on a structure of the tray 116, wherein each of the plurality of sub-images corresponds to one of the plurality of slots 210.
  • the computing device 120 extracts information from the label 222 of the slide 220 based on at least one of the sub-images.
  • the computing device 120 may first obtain a second image by enhancing the first image 140.
  • Fig. 4 illustrates a flowchart of an example process 400 for enhancing the first image.
  • the computing device 120 may convert the captured first image 140 of the tray 116 into a grayscale image.
  • the first image 140 may be a color image, which may contain richer information for later analysis.
  • the computing device 120 may convert the first image 140 into an 8-bit grayscale image.
  • a grayscale value of a pixel is represented by an 8-bit byte (i.e., a value range of 0-255).
  • a maximum value of the three components of R, G, and B of a pixel in the first image 140 may be calculated as the grayscale value.
  • a weighted average of the three components of R, G, and B of a pixel in the first image 140 may be calculated as the grayscale value.
  • a grayscale image can be obtained by converting the RGB image to another color space (e.g. HSV, HLS, or the like) and then calculating the gray value based on components of the other color space.
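  • As an illustrative sketch of the conversions above (not part of the original disclosure; OpenCV and NumPy are assumed, and the function name to_grayscale is hypothetical), the grayscale value can be computed either as the maximum of the R, G, B components or as their weighted average:

```python
import cv2
import numpy as np

def to_grayscale(bgr: np.ndarray, mode: str = "weighted") -> np.ndarray:
    """Convert a color image (OpenCV loads images as BGR) to 8-bit grayscale."""
    if mode == "max":
        # Grayscale value = maximum of the three R, G, B components per pixel.
        return bgr.max(axis=2).astype(np.uint8)
    # Grayscale value = weighted average of R, G, B (standard luma weights).
    return cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
```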
  • the computing device 120 may obtain a second image by sharpening the grayscale image.
  • the grayscale image may also be sharpened by, for example, a Laplace operator.
  • An example Laplace operator may be, for example, a 4-connection kernel, having a value of 4 for the element in the center, and a value of -1 for the four neighboring elements.
  • the sharpened grayscale image may be further adapted back into an 8-bit grayscale image, handling pixel values that overflow the 8-bit range.
  • By sharpening the grayscale image, more salient edges can be obtained in the image, which may facilitate the image analysis described below.
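  • A minimal sketch of the sharpening step, assuming OpenCV/NumPy: the 4-connection Laplace kernel matches the description above (4 in the center, -1 for the four neighbors), and the filter response is added back to the image with clipping to stay in the 8-bit range.

```python
import cv2
import numpy as np

# 4-connection Laplace kernel: 4 in the center, -1 for the four neighbors.
LAPLACE_4 = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=np.float32)

def sharpen(gray: np.ndarray) -> np.ndarray:
    """Sharpen an 8-bit grayscale image with the Laplace operator."""
    response = cv2.filter2D(gray.astype(np.float32), -1, LAPLACE_4)
    # Clip overflowing pixel values back into the 8-bit range 0-255.
    return np.clip(gray.astype(np.float32) + response, 0, 255).astype(np.uint8)
```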
  • the computing device 120 may determine whether the standard deviation of the second image is greater than a threshold.
  • a standard deviation of the second image may be calculated and compared with a threshold.
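  • The quality check of blocks 430-450 can be sketched in a few lines; the threshold value below is an assumed example, not a value taken from the disclosure.

```python
def is_sharp_enough(image, threshold: float = 30.0) -> bool:
    """Accept the enhanced image only if its standard deviation (a simple
    proxy for contrast and sharpness) exceeds the threshold."""
    return float(image.std()) > threshold
```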
  • the process 400 proceeds to block 440.
  • the computing device may divide the enhanced second image into a plurality of sub-images which may specifically be based on the structure of the tray 116.
  • the process 400 proceeds to block 450.
  • the computing device 120 may prompt the user to capture another image of the tray.
  • the user may configure illuminative parameters of the light source (for example, luminous flux, color temperature, power, brightness, etc.) and/or the parameters of the camera (for example, resolution, white balance, focus, etc.), such that the obtained new image may have a higher quality for recognition.
  • a preview of the image may be presented on the display to assist the user to adjust the parameters.
  • an image that meets the threshold requirement might not be obtained after a number of attempts.
  • the captured image with the highest standard deviation may be selected for the later processing.
  • the threshold used in the block 430 may, for example, be decreased, and an image with relatively higher quality can then be selected.
  • the computing device may further determine the plurality of sub-images from the second image based on the structure of the tray.
  • the computing device 120 may determine a template corresponding to the structure of the tray 116, wherein the template may indicate a layout of the plurality of slots 210.
  • the tray (s) 116 for holding the slides are designed with a single structure.
  • the layouts of the slots 210 on the tray (s) 116 are the same.
  • a template for indicating the layout of the plurality of slots 210 in the tray 116 may be predetermined and maintained.
  • the template may indicate that there are twenty slots arranged in two rows, each row comprising ten slots evenly arranged in space.
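  • For illustration only (the disclosure prescribes no data structure), a template like the one described, twenty slots in two rows of ten evenly spaced slots, can be represented as a simple grid of boxes used to crop the sub-images; the function names below are hypothetical.

```python
from typing import List, Tuple

def grid_template(img_w: int, img_h: int,
                  rows: int = 2, cols: int = 10) -> List[Tuple[int, int, int, int]]:
    """Return one (x, y, w, h) box per slot for an evenly spaced grid layout."""
    slot_w, slot_h = img_w // cols, img_h // rows
    return [(c * slot_w, r * slot_h, slot_w, slot_h)
            for r in range(rows) for c in range(cols)]

def divide_by_template(image, template):
    """Crop one sub-image per slot, in index order 1, 2, ..., rows * cols."""
    return [image[y:y + h, x:x + w] for (x, y, w, h) in template]
```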
  • tray (s) with multiple types of structures may be used.
  • multiple trays with different structures may be provided, and a pathologist may select one from them for holding the slides.
  • multiple templates corresponding to the multiple trays could be predetermined, and then maintained in association with an identification of the tray.
  • a first template indicating a first layout of the slots may be maintained in association with a first identification
  • a second template indicating a second layout of the slots may be maintained in association with a second identification.
  • an identification of the tray 116 may be first obtained.
  • a pathologist may first input, through a graphical user interface, that a tray 116 with a first identification is being used.
  • the computing device 120 may then obtain the maintained first template based on the first identification.
  • the computing device 120 may automatically detect the identification of the tray 116 for example based on the first or second image. For example, a number “1” might be printed on the tray 116 for indicating that the tray 116 has a first identification. The computing device 120 may then obtain the maintained first template based on the first identification.
  • the computing device 120 may then divide the second image into the plurality of sub-images according to the determined template. For example, the computing device 120 may divide the second image into twenty sub-images according to the obtained template indicating that there are twenty slots arranged in two rows, each row comprising ten slots evenly arranged in space.
  • the computing device 120 may detect edges of the plurality of slots 210 in the second image. It should be noted that any proper edge detection algorithm, e.g., the Sobel algorithm or the Canny algorithm, might be applied for detecting the edges of a slot. The present disclosure is not intended to be limited in this aspect.
  • the computing device 120 may then determine the plurality of sub-images from the second image based on the edges. For example, the computing device 120 may determine the plurality of regions bounded by the edges as the plurality of sub-images.
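  • A sketch of the edge-based alternative, assuming OpenCV; Canny is used here, but Sobel would work equally well, and the minimum-area filter is an assumed parameter for discarding spurious contours.

```python
import cv2

def sub_images_from_edges(enhanced, min_area: int = 5000):
    """Detect slot edges and crop the regions bounded by them as sub-images."""
    edges = cv2.Canny(enhanced, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours]
    boxes = [b for b in boxes if b[2] * b[3] > min_area]  # drop tiny regions
    # Sort top-to-bottom, then left-to-right, so indexes follow the layout.
    boxes.sort(key=lambda b: (b[1], b[0]))
    return [enhanced[y:y + h, x:x + w] for (x, y, w, h) in boxes]
```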
  • Fig. 5 illustrates a schematic diagram 500 for slide processing according to an embodiment of the present disclosure.
  • an enhanced second image 510 may be obtained according to the process discussed above.
  • the computing device 120 may further determine twenty sub-images 520 from the second image 510 based on the process discussed above.
  • each of the plurality of sub-images 520 may be assigned with an index, e.g., twenty indexes 1, 2, 3, ..., K for the twenty sub-images.
  • the assigned indexes may help associate the information to be extracted with the corresponding sub-image.
  • the computing device 120 may also determine the plurality of sub-images directly from the first image 140, without enhancing the first image into a second image. Additionally, the plurality of sub-images obtained from the first image 140 may be enhanced for later processing according to the process 400 as discussed with reference to Fig. 4.
  • the computing device 120 may further divide a sub-image 520 into a plurality of regions, wherein at least one of the plurality of regions has a size corresponding to the label 222.
  • a slide 220 may have an elongate shape, where the label portion is positioned at one end of the slide and the specimen portion at the other end. Further, different labels 222 may have a same size.
  • the computing device 120 may divide a sub-image 520 into multiple regions 530. In the example of Fig. 5, a sub-image 520 is divided into three regions 530, and each region 530 has a size corresponding to the label 222. That is, a label 222 always falls within one of these regions 530, and would not span two or more regions 530.
  • the computing device 120 may extract the information from the at least one of the plurality of regions 530, without considering regions for example in the middle of the sub-image.
  • if the direction in which the slide(s) 220 are placed on the tray 116 is always the same and the direction in which the tray 116 is placed in the scanning device 110 is always the same, only one of the plurality of regions 530 may need to be processed. For example, if it can be ensured that a label 222 is always placed at the top of a slide 220 as shown in Fig. 5, the computing device 120 may then only process the top region 530 (e.g., the region with an index "91") of the sub-image 520 (e.g., the sub-image with an index "9"), thereby reducing the calculation cost.
  • if the directions in which the slide(s) 220 are placed on the tray 116 may differ, two or more of the plurality of regions 530 may need to be processed, for example the top region (e.g., the region with an index "91") and the bottom region (e.g., the region with an index "93"), while the middle region (e.g., the region with an index "92") may be skipped; a sketch of such region splitting follows below.
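  • As an illustrative sketch (the helper name split_regions and the parameter label_height, the known label size in pixels, are assumptions, not from the disclosure), the region splitting may look as follows:

```python
def split_regions(sub_image, label_height: int):
    """Divide a slide sub-image into three vertical regions: a top and a
    bottom region sized to the label, and whatever remains in the middle."""
    h = sub_image.shape[0]
    return {
        "top": sub_image[:label_height],                     # e.g. region "91"
        "middle": sub_image[label_height:h - label_height],  # e.g. region "92"
        "bottom": sub_image[h - label_height:],              # e.g. region "93"
    }
```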
  • a label 222 may comprise different types of information, and different recognition solutions may be utilized then.
  • a label may comprise at least one of: a character, a symbol or a graphically encoded representation.
  • for a character or a symbol, Optical Character Recognition (OCR) may be utilized to extract the information.
  • for each of the collected images, a suitable barcode recognition algorithm (for example, for Data Matrix, QR Code, PDF417, etc.) may be utilized to decode the graphically encoded representation.
  • an identification “IM123456789” may be encoded by a QR code on the label 222, and the computing device 120 may extract the identification by decoding the QR code.
  • since each of the sub-images includes at most one label image (one label for one slide), the recognition algorithm need not be applied to all of the images divided from the same sub-image. For example, if some information has been extracted from the image identified as "11", then the image identified as "13" does not need to be processed for recognition and can be skipped, thus accelerating the batch scanning of the present disclosure.
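  • The decode-with-early-exit logic can be sketched with OpenCV's built-in QR detector (the disclosure does not prescribe a particular decoder, so cv2.QRCodeDetector stands in here as one possible choice):

```python
from typing import Optional

import cv2

_detector = cv2.QRCodeDetector()

def decode_regions(regions) -> Optional[str]:
    """Try each candidate region in turn; since a sub-image holds at most
    one label, stop at the first successful decode and skip the rest."""
    for region in regions:
        data, _, _ = _detector.detectAndDecode(region)
        if data:              # e.g. "IM123456789"
            return data
    return None
```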
  • since the label may be stained during slide preparation and barcode recognition algorithms are generally vulnerable to noise, extraction of the information in the barcode, in particular a two-dimensional code, may fail.
  • Fig. 6 illustrates a flowchart of an example process 600 for filtering a label image, e.g., one including a two-dimensional code, so as to make the batch scanning of the present disclosure more robust.
  • the computing device 120 may obtain an RGB image of the label image.
  • the images to which the recognition algorithm has been applied but which failed to yield information are enhanced grayscale images (as in steps 410-420).
  • raw (RGB) images of these label images may now be obtained to recover details which may have been lost during the previous processes.
  • the computing device 120 may convert the RGB image to a grayscale image using the L (lightness) channel of the HLS (hue, lightness, saturation) color space.
  • the RGB image may be converted to an HLS image by calculating the components of the HLS color space, which is known in the art, and the value of the L channel of each pixel is extracted as the grayscale value of the pixel.
  • the computing device 120 may convert the grayscale image to a binary image by, for example, an adaptive threshold.
  • a threshold is applied to the grayscale image to generate a binary image: if the value of the pixel surpasses the adaptive threshold, the pixel is set to white (e.g., a value of 255); otherwise, the pixel is set to black (e.g., a value of 0).
  • the threshold varies according to local statistics around the pixel to which it is applied. For example, if the pixel is located in a region where the pixels around it generally have relatively high gray values, the threshold applied to the pixel may adaptively increase.
  • conversely, if the surrounding pixels generally have relatively low gray values, the threshold applied to the pixel may adaptively decrease.
  • the adaptive threshold approach may prevent image details in overexposure or underexposure regions from being lost when generating the binary image.
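  • A sketch of the lightness-channel binarization, assuming OpenCV; the block size and offset of the adaptive threshold are assumed values.

```python
import cv2

def binarize_label(bgr_region):
    """Convert the raw color region to grayscale via the L channel of the
    HLS color space, then binarize it with a locally adaptive threshold."""
    hls = cv2.cvtColor(bgr_region, cv2.COLOR_BGR2HLS)
    lightness = hls[:, :, 1]  # channel order is H, L, S
    return cv2.adaptiveThreshold(lightness, 255,
                                 cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 31, 5)
```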
  • the computing device 120 may detect edges of the binary image.
  • a Canny operator may be applied to the binary image to generate the edges.
  • the computing device 120 may dilate the detected edges.
  • the kernel of the dilation operation may be a 4-connection kernel (considering the 4 pixels above, left of, below and right of the center pixel) or an 8-connection kernel (considering the 8 pixels around the center pixel). Based on the dilation and the subsequent filtering, isolated pixels, most of which are statistically noise, are effectively removed.
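  • Edge detection and dilation, sketched with OpenCV; the Canny thresholds are assumed values, and the 4-connection structuring element below can be swapped for np.ones((3, 3), np.uint8) to obtain the 8-connection variant.

```python
import cv2
import numpy as np

# 4-connection structuring element: center pixel plus up, left, down, right.
KERNEL_4 = np.array([[0, 1, 0],
                     [1, 1, 1],
                     [0, 1, 0]], dtype=np.uint8)

def edge_and_dilate(binary):
    """Detect edges in the binary image and dilate them so nearby edge
    fragments merge into connective regions."""
    edges = cv2.Canny(binary, 100, 200)
    return cv2.dilate(edges, KERNEL_4, iterations=1)
```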
  • the computing device 120 may further filter pixels in the diluted image to generate an image for recognition by an algorithm for the two dimensional code.
  • the computing device 120 may filter pixels in the dilated image based on connectivity. If a pixel is not an 8-connected pixel, the pixel is an isolated pixel and should be abandoned (e.g., setting its value to 255 as background, which means white).
  • the computing device 120 may filter pixels in the dilated image based on a shape, specifically based on a shape of the connective region of the pixels.
  • only the pixels which are located in an approximately square region can be retained (e.g., setting their values to 0 as foreground, which means black).
  • the ratio of the width to the height of the connective region of the pixel may be calculated. The ratio should fall within a range of predefined values (e.g., 0.7-1.2), which depend on the type of the two-dimensional code. Pixels otherwise should be abandoned.
  • the computing device 120 may filter pixels in the dilated image based on a size, specifically based on the size of the connective region of the pixels.
  • only the pixels which are located in a connective region having a proper size can be retained (e.g., setting their values to 0 as foreground, which means black).
  • the product of the width and height of the connective region of the pixel (a quick approach to calculating the area of the region) should fall within a range of predefined values, which depend on the type of the two-dimensional code. Pixels otherwise should be abandoned.
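  • The shape and size filtering can be implemented with connected-component statistics; the area bounds below are assumed examples, while the 0.7-1.2 ratio range comes from the text above.

```python
import cv2
import numpy as np

def filter_components(dilated, min_area=100, max_area=10000,
                      ratio_range=(0.7, 1.2)):
    """Keep only connective regions that are roughly square and of a proper
    size; everything else becomes white background (value 255)."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(dilated,
                                                           connectivity=8)
    out = np.full(dilated.shape, 255, dtype=np.uint8)  # white background
    for i in range(1, n):                              # label 0 is background
        w = stats[i, cv2.CC_STAT_WIDTH]
        h = stats[i, cv2.CC_STAT_HEIGHT]
        ratio, area = w / h, w * h                     # w * h approximates the area
        if ratio_range[0] <= ratio <= ratio_range[1] and min_area <= area <= max_area:
            out[labels == i] = 0                       # retain as black foreground
    return out
```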
  • the algorithm for decoding two dimensional barcode may be applied again to the results of the method 600 to extract information in the label. If it still fails to recognize and extract information, a Laplace operator may be applied to the images to sharpen or enhance the image, followed by applying the algorithm again to extract information again.
  • the Laplace operator may be, for example, a 3-by-3 kernel, having a value of 2, 4, 6, 8, or 10 for the element in the center, and a value of -1, 0, or 1 for the other 8 surrounding elements.
  • the images may be processed with the Laplace operator repeatedly, before applying the algorithm to extract information.
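  • The retry loop can be sketched generically; decode and sharpen stand for the two-dimensional-code decoder and the Laplace sharpening described above, and the retry count is an assumed parameter.

```python
def decode_with_retries(image, decode, sharpen, max_retries: int = 3):
    """Try to decode; on failure, sharpen with the Laplace operator and
    try again, repeating up to max_retries times."""
    for _ in range(max_retries):
        data = decode(image)
        if data:
            return data
        image = sharpen(image)  # enhance and retry
    return None
```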
  • the embodiments of the present disclosure can extract information from a batch of slides at one time and reliably, thus improving the efficiency of slides management.
  • the computing device 120 may present, for example via the display 122, the captured image 140 (the first image) of the tray 116 and a visual element indicating a status of one of the plurality of sub-images.
  • the computing device 120 may present via the display 122 a first status indicating that a label is contained in a corresponding sub-image and the information indicated by the label is successfully extracted. For example, a block with a green color may be presented for indicating that a corresponding sub-image is successfully recognized, and the information is extracted. Additionally, the extracted information may for example be presented in response to a click on the block.
  • the computing device 120 may present a second status indicating that no label is contained in a corresponding sub-image. For example, a block with a grey color may be presented for indicating that no slide is placed on the corresponding slot.
  • the computing device 120 may present a third status indicating that a label is contained in a corresponding sub-image but the information indicated by the label fails to be extracted. For example, a block with a red color may be presented for indicating that recognition of a corresponding sub-image has failed.
  • the computing device 120 may record the extracted information.
  • the extracted information may comprise an identification of the slide 220.
  • the computing device 120 may update information associated with the identification for indicating a particular process (e.g., a retaining process) has been finished for the slide 220.
  • the computing device 120 may later provide, for example in response to a query based on identification, information about processing stages of a slide based on the stored information. As such, by extracting information from slides in a batch, the efficiency of slides tracking may be improved.
  • the computing device 120 may store the extracted information in association with the image 140 of the tray 116.
  • the captured image 140 may help verify whether the extracted information is correct, and a user may update the information by reviewing the stored image 140. Additionally, by storing the information visually indicated or encoded on the label in association with the captured image 140, a user may retrieve the specimen of the slide for further review or analysis after the slides have been archived.
  • the computing device 120 may receive an input indicating a target identification of a slide, and may then obtain a target image of a tray based on the target identification. For example, the computing device 120 may look up the target image from the storage device 130 using the identification. Further, the computing device 120 may present the target image as a response to the input. In this way, a user may easily retrieve the original captured image of the tray using a particular identification of a slide.
  • the computing device 120 may further determine an index of a sub-image corresponding to the label, wherein the index may indicate a position of a slot holding the slide on the tray. As discussed above, each of the sub-images may be assigned with a respective index. The computing device 120 may further store the extracted information in association with the index.
  • for example, an identification extracted from the label may be stored in association with an index "1". With the index, a user can easily identify from the stored image 140 which one of the plurality of slides corresponds to the identification; an illustrative record layout follows below.
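  • Purely as an illustration of the association described above (the disclosure does not prescribe a storage schema; the record fields are assumptions), a stored record might look like:

```python
def make_record(tray_image_path: str, slot_index: int, slide_id: str) -> dict:
    """Store the extracted identification together with the tray image and
    the slot index, so the slide can later be located in the stored image."""
    return {
        "tray_image": tray_image_path,  # e.g. path of the stored image 140
        "slot_index": slot_index,       # e.g. 1, the position on the tray
        "slide_id": slide_id,           # e.g. "IM123456789"
    }
```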
  • Fig. 7 illustrates a schematic block diagram of an example device 700 for implementing embodiments of the present disclosure.
  • the computing device 120 can be implemented by the device 700.
  • the device 700 includes a central processing unit (CPU) 701, which can execute various suitable actions and processing based on the computer program instructions stored in a read-only memory (ROM) 702 or computer program instructions loaded in a random-access memory (RAM) 703 from a storage unit 708.
  • the RAM 703 may also store all kinds of programs and data required by the operations of the device 700.
  • the CPU 701, ROM 702 and RAM 703 are connected to each other via a bus 704.
  • the input/output (I/O) interface 705 is also connected to the bus 704.
  • a plurality of components in the device 700 are connected to the I/O interface 705, including: an input unit 706, for example, a keyboard, a mouse, and the like; an output unit 707, for example, various kinds of displays and loudspeakers, and the like; a storage unit 708, such as a magnetic disk and an optical disk, and the like; and a communication unit 709, such as a network card, a modem, a wireless transceiver, and the like.
  • the communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the Internet, and/or various telecommunication networks.
  • the above described process and processing can also be performed by the processing unit 701.
  • the process 300 may be implemented as a computer software program being tangibly included in the machine-readable medium, for example, the storage unit 708.
  • the computer program may be partially or fully loaded and/or mounted to the device 700 via the ROM 702 and/or communication unit 709.
  • when the computer program is loaded to the RAM 703 and executed by the CPU 701, one or more steps of the above described methods or processes can be implemented.
  • the present disclosure may be a method, a device, a system and/or a computer program product.
  • the computer program product may include a computer-readable storage medium, on which the computer-readable program instructions for executing various aspects of the present disclosure are loaded.
  • the computer-readable storage medium may be a tangible device that maintains and stores instructions utilized by the instruction executing devices.
  • the computer-readable storage medium may be, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device or any appropriate combination of the above.
  • the computer-readable storage medium includes: a portable computer disk, a hard disk, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash), a static random-access memory (SRAM), a portable compact disk read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanical coding device, a punched card stored with instructions thereon, or a projection in a slot, and any appropriate combination of the above.
  • the computer-readable storage medium utilized herein is not interpreted as transient signals per se, such as radio waves or freely propagated electromagnetic waves, electromagnetic waves propagated via waveguides or other transmission media (such as optical pulses via fiber-optic cables), or electric signals propagated via electric wires.
  • the described computer-readable program instructions may be downloaded from the computer-readable storage medium to each computing/processing device, or to an external computer or external storage via the Internet, a local area network, a wide area network and/or a wireless network.
  • the network may include copper-transmitted cables, optical fiber transmissions, wireless transmissions, routers, firewalls, switches, network gate computers and/or edge servers.
  • the network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in the computer-readable storage medium of each computing/processing device.
  • the computer program instructions for executing operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages, where the programming languages include object-oriented programming languages, e.g., Smalltalk, C++, and so on, and conventional procedural programming languages, such as the "C" language or similar programming languages.
  • the computer-readable program instructions may be implemented fully on a user computer, partially on the user computer, as an independent software package, partially on the user computer and partially on a remote computer, or completely on the remote computer or a server.
  • the remote computer may be connected to the user computer via any type of network, including a local area network (LAN) and a wide area network (WAN), or to the external computer (e.g., connected via the Internet using an Internet service provider).
  • state information of the computer-readable program instructions is used to customize an electronic circuit, e.g., a programmable logic circuit, a field programmable gate array (FPGA) or a programmable logic array (PLA) .
  • the electronic circuit may execute computer-readable program instructions to implement various aspects of the present disclosure.
  • the computer-readable program instructions may be provided to the processing unit of a general-purpose computer, dedicated computer or other programmable data processing devices to manufacture a machine, such that the instructions, when executed by the processing unit of the computer or other programmable data processing apparatuses, generate an apparatus for implementing functions/actions stipulated in one or more blocks in the flow chart and/or block diagram.
  • the computer-readable program instructions may also be stored in the computer-readable storage medium and cause the computer, programmable data processing apparatus and/or other devices to work in a particular manner, such that the computer-readable medium stored with instructions contains an article of manufacture, including instructions for implementing various aspects of the functions/actions stipulated in one or more blocks of the flow chart and/or block diagram.
  • the computer-readable program instructions may also be loaded into a computer, other programmable data processing apparatuses or other devices, so as to execute a series of operation steps on the computer, other programmable data processing apparatuses or other devices to generate a computer-implemented procedure. Therefore, the instructions executed on the computer, other programmable data processing apparatuses or other devices implement functions/actions stipulated in one or more blocks of the flow chart and/or block diagram.
  • each block in the flow chart or block diagram can represent a module, a portion of program segment or code, where the module and the portion of program segment or code include one or more executable instructions for performing stipulated logic functions.
  • the functions indicated in the block may also take place in an order different from the one indicated in the drawings. For example, two successive blocks may be in fact executed in parallel or sometimes in a reverse order depending on the involved functions.
  • each block in the block diagram and/or flow chart and combinations of the blocks in the block diagram and/or flow chart may be implemented by a hardware-based system exclusively for executing stipulated functions or actions, or by a combination of dedicated hardware and computer instructions.
  • Embodiment 1 A method for slide processing comprising:
  • obtaining a first image of a tray comprising a plurality of slots, a slot being capable of holding a slide, the slide comprising a label and a specimen for pathologic analysis; determining a plurality of sub-images from the first image based on a structure of the tray, each of the plurality of sub-images corresponding to one of the plurality of slots; and extracting information from the label of the slide based on at least one of the sub-images.
  • Embodiment 2 The method of embodiment 1, wherein the first image of the tray is obtained by an image capturing device, the image capturing device comprising a camera, a light source and a housing for enclosing the camera and the light source, and wherein the tray is placed in the housing for image capturing.
  • Embodiment 3 The method of embodiment 1, wherein determining the plurality of sub-images comprises: obtaining a second image by enhancing the first image; and determining the plurality of sub-images from the second image based on the structure of the tray.
  • Embodiment 4 The method of embodiment 3, wherein determining the plurality of sub-images from the second image comprises: determining a template corresponding to the structure of the tray, the template indicating a layout of the plurality of slots; and dividing the second image into the plurality of sub-images according to the determined template.
  • Embodiment 5 The method of embodiment 3, wherein determining the plurality of sub-images from the second image comprises: detecting edges of the plurality of slots in the second image; and determining the plurality of sub-images from the second image based on the edges.
  • Embodiment 6 The method of embodiment 1, wherein extracting information from the label of the slide comprises: dividing a sub-image into a plurality of regions, at least one of the plurality of regions having a size corresponding to the label; and extracting the information from the at least one of the plurality of regions.
  • Embodiment 7 The method of embodiment 6, wherein the label comprises a two-dimensional code, and wherein extracting the information comprises, for each of the plurality of regions: extracting the information encoded in the two-dimensional code from the region of the label; in accordance with a failure of extracting the information, obtaining data of a lightness channel corresponding to the region; generating a binary image based on the data of the lightness channel; filtering out a connective region in the binary image based on a shape or a size of the connective region; and extracting the information from the filtered binary image.
  • Embodiment 8 The method of embodiment 1, further comprising: storing the information in association with the first image.
  • Embodiment 9 The method of embodiment 8, further comprising: determining an index of a sub-image corresponding to the label, the index indicating a position of a slot holding the slide on the tray, wherein storing the information further comprises: storing the information in association with the index.
  • Embodiment 10 The method of embodiment 8, wherein storing the information further comprises: storing the information in association with the sub-image corresponding to the slide.
  • Embodiment 11 The method of embodiment 1, wherein the information comprises at least one of: data encoded by a graphically encoded representation on the label, at least one character on the label, or at least one symbol on the label.
  • Embodiment 12 The method of embodiment 1, further comprising: presenting the first image and a visual element indicating a status of one of the plurality of sub-images, the status being selected from a group consisting of: a first status indicating that a label is contained in a corresponding sub-image and the information indicated by the label is successfully extracted, a second status indicating that no label is contained in a corresponding sub-image, and a third status indicating that a label is contained in a corresponding sub-image but the information indicated by the label fails to be extracted.
  • Embodiment 13 The method of embodiment 1, wherein the information indicates an identification of the slide, the method further comprising: receiving an input indicating a target identification of a slide; obtaining a target image of a tray based on the target identification; and presenting the target image as a response to the input.
  • Embodiment 14 An electronic device, comprising:
  • one or more processors; and
  • one or more memories coupled to the one or more processors and having computer-executable instructions stored thereon, the computer-executable instructions, when executed by the one or more processors, causing the device to perform acts comprising:
  • obtaining a first image of a tray comprising a plurality of slots, a slot being capable of holding a slide, the slide comprising a label and a specimen for pathologic analysis; determining a plurality of sub-images from the first image based on a structure of the tray, each of the plurality of sub-images corresponding to one of the plurality of slots; and extracting information from the label of the slide based on at least one of the sub-images.
  • Embodiment 15 The device of embodiment 14, wherein the first image of the tray is obtained by an image capturing device, the image capturing device comprising a camera, a light source and a housing for enclosing the camera and the light source, and wherein the tray is placed in the housing for image capturing.
  • Embodiment 16 The device of embodiment 14, wherein determining the plurality of sub-images comprises: obtaining a second image by enhancing the first image; and determining the plurality of sub-images from the second image based on the structure of the tray.
  • Embodiment 17 The device of embodiment 16, wherein determining the plurality of sub-images from the second image comprises: determining a template corresponding to the structure of the tray, the template indicating a layout of the plurality of slots; and dividing the second image into the plurality of sub-images according to the determined template.
  • Embodiment 18 The device of embodiment 14, wherein extracting information from the label of the slide comprises: detecting a region associated with the label in the at least one of the sub-images; and extracting the information based on the region associated with the label.
  • Embodiment 19 The device of embodiment 14, wherein extracting information from the label of the slide comprises: dividing a sub-image into a plurality of regions, at least one of the plurality of regions having a size corresponding to the label; and extracting the information from the at least one of the plurality of regions.
  • Embodiment 20 The device of embodiment 19, wherein the label comprises a two-dimensional code, and wherein extracting the information comprises, for each of the plurality of regions: extracting the information encoded in the two-dimensional code from the region of the label; in accordance with a failure of extracting the information, obtaining data of a lightness channel corresponding to the region; generating a binary image based on the data of the lightness channel; filtering out a connective region in the binary image based on a shape or a size of the connective region; and extracting the information from the filtered binary image.
  • Embodiment 21 The device of embodiment 14, the acts further comprising: storing the information in association with the first image.
  • Embodiment 22 The device of embodiment 21, wherein the acts further comprise: determining an index of a sub-image corresponding to the label, the index indicating a position of a slot holding the slide on the tray, and wherein storing the information further comprises: storing the information in association with the index.
  • Embodiment 23 The device of embodiment 21, wherein storing the information further comprises: storing the information in association with the sub-image corresponding to the slide.
  • Embodiment 24 The device of embodiment 14, wherein the information comprises at least one of: data encoded by a graphically encoded representation on the label, at least one character on the label, or at least one symbol on the label.
  • Embodiment 25 The device of embodiment 14, wherein the acts further comprise: presenting the first image and a visual element indicating a status of one of the plurality of sub-images, the status being selected from a group consisting of: a first status indicating that a label is contained in a corresponding sub-image and the information indicated by the label is successfully extracted, a second status indicating that no label is contained in a corresponding sub-image, and a third status indicating that a label is contained in a corresponding sub-image but the information indicated by the label fails to be extracted.
  • Embodiment 26 The device of embodiment 14, wherein the information indicates an identification of the slide, and wherein the acts further comprise: receiving an input indicating a target identification of a slide; obtaining a target image of a tray based on the target identification; and presenting the target image as a response to the input.
  • Embodiment 27 A computer readable storage medium having computer-executable instructions stored thereon, the computer-executable instructions, when executed by a processor of an apparatus, causing the apparatus to perform the steps of the method according to any one of embodiments 1-13.
  • Embodiment 28 A computer program product comprising computer-executable instructions which, when executed by a processor of an apparatus, cause the apparatus to perform the steps of the method according to any one of embodiments 1-13.
  • Embodiment 29 A scanning device, comprising:
  • a housing with an opening for loading a tray with a plurality of slots, the tray being capable of holding at least one slide, a slide comprising a label and a specimen for pathologic analysis;
  • a camera configured to capture an image of the tray for extracting information from the label.
  • Embodiment 30 The device of embodiment 29, further comprising a light source with a configurable illuminative parameter.
  • Embodiment 31 The device of embodiment 29, further comprising:
  • a processor configured to perform acts comprising, in response to the tray being loaded into the scanning device,
  • obtaining a first image of the tray; determining a plurality of sub-images from the first image based on a structure of the tray, each of the plurality of sub-images corresponding to one of the plurality of slots; and extracting information from the label of the slide based on at least one of the sub-images.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Optics & Photonics (AREA)
  • Analytical Chemistry (AREA)
  • Health & Medical Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Medical Informatics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Quality & Reliability (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

The present disclosure relates to a batch scanning solution for pathological slides. The present disclosure relates to a method, an electronic device and an apparatus for slide processing. The method comprises obtaining a first image of a tray comprising a plurality of slots. A slot is capable of holding a slide and the slide comprises a label and a specimen for pathologic analysis. The method further comprises determining a plurality of sub-images from the first image based on a structure of the tray, where each of the plurality of sub-images corresponds to one of the plurality of slots. The method further comprises extracting information from the label of the slide based on at least one of the sub-images. Through the solution, a plurality of slides can be automatically processed at one time, thus improving the efficiency of slide management.

Description

METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR SLIDES PROCESSING
FIELD
Embodiments of the present disclosure generally relate to the field of digital pathology and, in particular, to a method, a device, and a computer program product for slides processing.
BACKGROUND
In clinical medicine, specimens from a patient can greatly help the pathological analysis. For example, a pathologist may obtain a specimen of a patient, and then use a microscope to observe the specimen held by a slide. In a hospital or other pathological analysis institutes, a large number of slides are processed every day. In the different stages of slide processing, such as sampling, embedding, slicing, staining, reading and archiving, a slide for a patient needs to be tracked to prevent losing the slide or incorrectly associating it with a different patient. Considering the number of slides to be processed, there is a demand to further improve the efficiency of tracking.
EP2966493 A1 describes that it is possible to readily observe pathology specimens without spending much time and to readily observe suspected pathology specimens in detail. Described is a specimen observation device including an image capturing unit that acquires a partial image representing at least a part of one of multiple pathology specimens mounted on an accommodating section and a whole image of the multiple pathology specimens mounted on the accommodating section; an input unit for inputting identification information of the accommodating section; a display unit that displays an enlarged version of the partial image acquired by the image capturing unit; an image designating unit for designating the partial image displayed on the display unit; and a storage unit that stores the identification information input via the input unit and a position of the partial image designated via the image designating unit in relation to the whole image such that the position and the identification information are associated with the whole image.
SUMMARY
The present disclosure provides a solution for slides processing, where a plurality of slides can be automatically processed at one time, thus improving the efficiency of slide management.
In a first aspect, there is provided a method for slide processing. The method comprises obtaining a first image of a tray comprising a plurality of slots. Each slot is capable of holding a slide, and the slide comprises a label and a specimen for pathologic analysis. The method further comprises determining a plurality of sub-images from the first image based on a structure of the tray, each of the plurality of sub-images corresponding to one of the plurality of slots. The method further comprises extracting information from the label of the slide based on at least one of the sub-images.
The first image of the tray may be obtained by an image capturing device. The image capturing device may comprise a camera, a light source and a housing for enclosing the camera and the light source. The tray may be placed in the housing for image capturing.
Determining the plurality of sub-images may comprise obtaining a second image by enhancing the first image; and determining the plurality of sub-images from the second image based on the structure of the tray. Commonly, some pixels are difficult to detect for sub-image analysis. Enhancing the first image before determining the sub-images may therefore improve the efficiency of sub-image division for further object detection, specifically for slide detection.
Determining the plurality of sub-images from the second image may comprise determining a template corresponding to the structure of the tray, the template indicating a layout of the plurality of slots; and dividing the second image into the plurality of sub-images according to the determined template. The template may be configured to fit the size of the sub-images and may specifically comprise or depict the shape of the slots and/or of the slides.
Determining the plurality of sub-images from the second image may comprise detecting edges of the plurality of slots in the second image; and determining the plurality of sub-images from the second image based on the edges. By determining the sub-images based on the edges of the slots, the edges of the slides may be identified precisely for further identifying information provided by the label, specifically barcode information. In particular, barcode areas may be located for detection.
Extracting information from the label of the slide may comprise dividing a sub-image into a plurality of regions, at least one of the plurality of regions having a size corresponding to the label; and extracting the information from the at least one of the  plurality of regions. The label may comprise a two-dimensional code, and extracting the information may comprise: for each of the plurality of regions: extracting the information encoded in the two-dimensional code from the region of the label; in accordance with a failure of extracting the information, obtaining data of a lightness channel corresponding to the region; generating a binary image based on the data of the lightness channel; filtering out a connective region in the binary image based on a shape or a size of the connective region; and extracting the information from the filtered binary image.
The method may further comprise storing the information in association with the first image. The method may further comprise determining an index of a sub-image corresponding to the label, the index indicating a position of a slot holding the slide on the tray; and the storing of the information may further comprise storing the information in association with the index. The storing of the information may further comprise storing the information in association with the sub-image corresponding to the slide.
The information may comprise at least one of: data encoded by a graphically encoded representation on the label, at least one character on the label, or at least one symbol on the label.
The method may further comprise: presenting the first image and a visual element indicating a status of one of the plurality of sub-images, the status being selected from a group consisting of: a first status indicating that a label is contained in a corresponding sub-image and the information indicated by the label is successfully extracted, a second status indicating that no label is contained in a corresponding sub-image, and a third status indicating that a label is contained in a corresponding sub-image but the information indicated by the label fails to be extracted.
The information may indicate an identification of the slide and the method may further comprise: receiving an input indicating a target identification of a slide; obtaining a target image of a tray based on the target identification; and presenting the target image as a response to the input.
In a second aspect, there is provided an electronic device, comprising one or more processors, one or more memories coupled to the one or more processors and having computer-executable instructions stored thereon. The computer-executable instructions, when executed by the one or more processors, cause the device to perform acts as follows: obtaining a first image of a tray comprising a plurality of slots, a slot being capable of  holding a slide, the slide comprising a label and a specimen for pathologic analysis; determining a plurality of sub-images from the first image based on a structure of the tray, each of the plurality of sub-images corresponding to one of the plurality of slots; and extracting information from the label of the slide based on at least one of the sub-images.
The electronic device may be configured for performing the method for slide processing as described above or as will further be described below in more detail.
The first image of the tray may be obtained by an image capturing device, the image capturing device comprising a camera, a light source and a housing for enclosing the camera and the light source. The tray may be placed in the housing for image capturing.
Determining of the plurality of sub-images may comprise: obtaining a second image by enhancing the first image; and determining the plurality of sub-images from the second image based on the structure of the tray. Determining of the plurality of sub-images from the second image may comprise: determining a template corresponding to the structure of the tray, the template indicating a layout of the plurality of slots; and dividing the second image into the plurality of sub-images according to the determined template.
Extracting information from the label of the slide may comprise: detecting a region associated with the label in the at least one of the sub-images; and extracting the information based on the region associated with the label.
Extracting information from the label of the slide may comprise: dividing a sub-image into a plurality of regions, at least one of the plurality of regions having a size corresponding to the label; and extracting the information from the at least one of the plurality of regions. The label may comprise a two-dimensional code, and extracting the information may comprise: for each of the plurality of regions: extracting the information encoded in the two-dimensional code from the region of the label; in accordance with a failure of extracting the information, obtaining data of a lightness channel corresponding to the region; generating a binary image based on the data of the lightness channel; filtering out a connective region in the binary image based on a shape or a size of the connective region; and extracting the information from the filtered binary image.
The acts may further comprise storing the information in association with the first image. The acts may further comprise: determining an index of a sub-image corresponding to the label, the index indicating a position of a slot holding the slide on the tray. Moreover, storing of the information may further comprise storing the information in association with  the index. The storing of the information may further comprise storing the information in association with the sub-image corresponding to the slide.
The information may comprise at least one of: data encoded by a graphically encoded representation on the label, at least one character on the label, or at least one symbol on the label.
The acts may further comprise: presenting the first image and a visual element indicating a status of one of the plurality of sub-images, the status being selected from a group consisting of: a first status indicating that a label is contained in a corresponding sub-image and the information indicated by the label is successfully extracted, a second status indicating that no label is contained in a corresponding sub-image, and a third status indicating that a label is contained in a corresponding sub-image but the information indicated by the label fails to be extracted.
The information may indicate an identification of the slide and the acts may further comprise: receiving an input indicating a target identification of a slide; obtaining a target image of a tray based on the target identification; and presenting the target image as a response to the input.
In a third aspect, there is provided a computer readable storage medium having computer-executable instructions stored thereon, the computer-executable instructions, when executed by a processor of an apparatus, causing the apparatus to perform the steps of the method in the first aspect described above.
In a fourth aspect, there is provided a computer program product comprising computer-executable instructions which, when executed by a processor of an apparatus, cause the apparatus to perform the steps of the method in the first aspect described above.
In a fifth aspect, there is provided a scanning device. The scanning device comprises a housing with an opening for loading a tray with a plurality of slots. The tray is capable of holding at least one slide, which comprises a label and a specimen for pathologic analysis. The scanning device further comprises a camera. The camera may be configured to capture an image of the tray for extracting information from the label. Further, the scanning device may comprise a light source, specifically a light source with a configurable illuminative parameter. The scanning device further comprises a processor configured to perform acts comprising, in response to the tray being loaded into the scanning device: obtaining an image of the tray, specifically by using the camera and the light source; determining a plurality of sub-images from the image of the tray based on a structure of the tray, each of the plurality of sub-images corresponding to one of the plurality of slots; and extracting information from the label of the slide based on at least one of the sub-images. Further, the acts may comprise storing the information in association with the image. The processor may be configured for performing the method for slide processing as described above or as will further be described below in more detail.
It is to be understood that the summary section is not intended to identify key or essential features of embodiments of the present disclosure, nor is it intended to be used to limit the scope of the present disclosure. Other features of the present disclosure will become easily comprehensible through the following description.
The term “pathologic analysis” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to processes and/or tests for examining an etiology, a pathogenesis, a course as well as effects of diseases including respective processes in a human or animal body. Specifically, the pathologic analysis may include an assessment of tissues based on macroscopic and microscopic aspects.
The term “specimen” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to an arbitrary element which is removed or collected from a human body or from an animal body for pathologic analysis. Specifically, the specimen may refer to a piece of tissue or to an entire organ. However, the specimen may also refer to a bodily fluid such as blood or urine.
The term “slide” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to an elongate element which is capable of receiving at least one specimen. Specifically, the slide may have at least one supporting surface configured for receiving the at least one specimen. The supporting surface may specifically be or may comprise at least one flat surface. The slide may specifically be made of at least one optically transparent material such as glass. However, also other materials may be feasible.
The term “label” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to an arbitrary identifier which may be attached to an article, such as to a sticker which may be attached or attachable on a surface of another element. The identifier, as an example, may comprise an optical identifier, such as a barcode or a QR code, and/or an electronic identifier, such as an RFID identifier. The label may comprise at least one adhesive surface. Specifically, the label may be configured for attachment on a surface of the slide. More specifically, the label may be configured for attachment on an area of a surface of the slide neighboring the supporting surface of the slide configured for receiving the at least one specimen. The label may be configured for tracking the slide during different processing stages. As will be outlined in further detail below, the label may comprise at least one element for indicating information of the slide, such as at least one one-dimensional code or at least one two-dimensional code. Thus, the term “extracting information from the label” specifically may refer, without limitation, to a machine-reading of data from a one-dimensional code or from a two-dimensional code via at least one optical reader, as well as to electronic processing of the data.
The term “slide processing” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a procedure of applying one or more steps of at least one of slide preparation and slide analysis. Thus, the slide processing may comprise a collection of different slide preparation and slide analysis steps in pathologic analysis. The different steps may specifically include sampling, embedding, slicing, staining, reading and/or archiving.
The term “tray” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to an arbitrary carrier element, such as a flat element, which is configured for carrying at least one other object. Specifically, the tray may comprise at least one supporting surface configured for receiving the at least one other object. More specifically, the tray may comprise at least one recess or slot configured for receiving the at least one other object. The term “slot” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or  customized meaning. The term specifically may refer, without limitation, to a recess or an opening within an arbitrary element such as within the tray. Specifically, the slot may have a surrounding frame configured for holding an object in a desired position. Thus, the surrounding frame may be configured for preventing a dislocation of the object at least to a large extent when the element is tilted or transported.
The term “image” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to data recorded by using a camera, such as a plurality of electronic readings from an imaging device, such as the pixels of the camera chip. The image itself, thus, may comprise pixels, the pixels of the image correlating to pixels of the camera chip. Consequently, when referring to “pixels”, reference is either made to the units of image information generated by the single pixels of the camera chip or to the single pixels of the camera chip directly. The image may comprise raw pixel data. For example, the image may comprise data in the RGB space, single color data from one of R, G or B pixels, a Bayer pattern image or the like. The term “sub-image” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to an image which depicts a part or a section of another image. Specifically, as will be outlined in further detail below, the image may be divided into a plurality of sub-images. Thereby, each sub-image may depict another section of the image.
As outlined above, the method for slide processing comprises determining a plurality of sub-images from the first image based on a structure of the tray. The term “based on a structure of the tray” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to the circumstance that the plurality of sub-images are obtained depending on the arrangement of the slots, the size of the slots, the shape of the slots, the orientation of the slots relative to each other and the number of the slots on the tray. Other parameters may also be considered. Thus, the plurality of sub-images may be determined from the image such that one area of interest, such as one single slot, one slide received in the slot or one area of the slide received in the slot, is depicted completely, or at least to a large extent, on the sub-image.
The term “enhancing an image” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to an arbitrary process which improves an image color, a contrast and/or a quality of an image. The enhancing may exemplarily include converting a color image into a grayscale image, converting an RGB image into an image with another color space such as HSV, HLS or the like, sharpening an image and/or the elimination of blurs. Further details will be given below in more detail.
The term “detecting edges” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to an arbitrary mathematical method which aims at separating two-dimensional areas in a digital image from one another if they differ sufficiently in terms of color or gray value, brightness or texture along straight or curved lines.
BRIEF DESCRIPTION OF THE DRAWINGS
The following detailed description of the embodiments of the present disclosure can be best understood when read in conjunction with the following drawings, where:
Fig. 1 illustrates an environment 100 in which example embodiments of the present disclosure can be implemented;
Fig. 2 illustrates an example tray in accordance with some embodiments;
Fig. 3 illustrates an example flowchart of a method for slide processing according to an embodiment of the present disclosure.
Fig. 4 illustrates an example flowchart of a method for enhancing the image of the tray.
Fig. 5 illustrates an example diagram for dividing a tray image for batch scanning according to an embodiment of the present disclosure.
Fig. 6 illustrates an example flowchart of a method for filtering a label image.
Fig. 7 illustrates a schematic block diagram of an example device 700 for implementing embodiments of the present disclosure.
Throughout the drawings, the same or similar reference numerals represent the  same or similar element.
The following detailed description of the embodiments may refer to the method for slide processing, the electronic device, the computer-readable storage medium, the computer program product and the scanning device.
DETAILED DESCRIPTION
Principle of the present disclosure will now be described with reference to some embodiments. It is to be understood that these embodiments are described only for the purpose of illustration and help those skilled in the art to understand and implement the present disclosure, without suggesting any limitation as to the scope of the disclosure. The disclosure described herein can be implemented in various manners other than the ones described below.
In the following description and claims, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skills in the art to which this disclosure belongs.
References in the present disclosure to “one embodiment, ” “an embodiment, ” “an example embodiment, ” and the like indicate that the embodiment described may include a particular feature, structure, or characteristic, but it is not necessary that every embodiment includes the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an example embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
It shall be understood that although the terms “first” and “second” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the scope of example embodiments. As used herein, the term “and/or” includes any and all combinations of one or more of the listed terms.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used  herein, the singular forms “a” , “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” , “comprising” , “has” , “having” , “includes” and/or “including” , when used herein, specify the presence of stated features, elements, and/or components etc., but do not preclude the presence or addition of one or more other features, elements, components and/or combinations thereof.
As mentioned above, for pathological analysis, slides holding specimens of patients are prepared, read, and then analyzed. Those slides are critical for a patient, and need to be efficiently tracked during different processing stages. For example, when a specimen on a slide is to be stained, an identification of the slide may need to be recorded for preventing incorrectly associating the slide with a different patient.
For another example, when a patient is transferred to a different hospital or institute for further diagnosis or treatment, the slides associated with the patient also need to be physically transferred. In the transferring of slides, the original hospital/institute may need to record which of the slides are transferred, and the new hospital/institute may need to record which slides are received.
According to some traditional solutions, in the various stages discussed above, a label on a slide may be utilized for tracking. For example, a barcode contained on the label may be scanned, and the information encoded by the barcode may be extracted and recorded for tracking.
However, in the daily work of a hospital or other pathological analysis institutes, a large number of slides need to be processed every day. According to the traditional solutions, people need to manually scan the labels on the slides one by one for recording information of the slides. In this case, substantial labor and time are required.
According to example embodiments of the present disclosure, there is proposed a solution for slides processing. In this solution, a first image of a tray comprising a plurality of slots is first obtained, wherein each slot is capable of holding a slide, and the slide comprises a label and a specimen for pathologic analysis. A plurality of sub-images can then be determined from the first image based on a structure of the tray, wherein each of the plurality of sub-images corresponds to one of the plurality of slots. Information can then be extracted from the label of the slide based on at least one of the sub-images. Through the solution, information of a plurality of slides can be automatically extracted at one time, significantly reducing the manual effort spent scanning the slides.
In the following, example embodiments of the present disclosure are described with reference to the drawings. Fig. 1 illustrates an environment 100 in which example embodiments of the present disclosure can be implemented. As shown in Fig. 1, the environment 100 may comprise a scanning device 110 which is configured to capture an image of a tray 116.
In some embodiments, a tray 116 may comprise a plurality of slots for holding the slides. Fig. 2 illustrates an example of the tray 116 in accordance with some embodiments. As shown in Fig. 2, the tray 116 may comprise twenty slots 210, a shape of which is designed to hold a slide 220. In the example of Fig. 2, the twenty slots 210 are arranged in two rows, each row comprising ten slots 210.
It should be understood that the shape of the tray 116 and the arrangement of the slots 210 as shown in Fig. 2 are only examples, and any other proper tray structure may be utilized. In one example, a tray with a circular shape could be utilized to hold the slides. In another example, the twenty slots 210 may be arranged in four rows, each comprising five slots.
When the tray 116 is utilized for the slide tracking, one or more slides 220 may be placed on the slots 210. An example slide 220 is shown in Fig. 2 for illustration. The slide 220 may comprise a label 222 and the corresponding specimen 224. In some embodiments, the label 222 may comprise at least one element for indicating information of the slide 220.
In some embodiments, the label 222 may comprise at least one character, which may indicate information associated with the slide 220. For example, the label 222 may comprise text “John” for indicating the name of a patient associated with the slide 220. Exemplarily, the at least one character such as the text “John” may be depicted in a first area 221 within the label 222 which is marked with dashed lines.
In some other embodiments, the label 222 may comprise at least a symbol, which may indicate information associated with the slide 220. For example, a logo of a hospital sampling the specimen 224 may be printed on the label 222 for indicating which hospital prepared this slide 220. Exemplarily, the at least one logo or at least one character with reference to a hospital such as the text “No. 3 HOSPITAL” may be depicted in a second area 223 within the label 222 which is marked with dashed lines.
In some further embodiments, the label 222 may comprise a graphically encoded representation. The examples of graphically encoded representation may comprise but are not limited to: one dimensional barcode, two dimensional code (e.g., QR code) and any other graphical presentations for encoding information. In the example of Fig. 2, a QR code is contained in the label 222.
Referring back to Fig. 1, for ease of capturing the image of the tray 116, in some embodiments, the structure of the scanning device 110 may be particularly designed. As shown in Fig. 1, the scanning device 110 may comprise a housing 118 enclosing a camera 114 for capturing an image of the tray 116. In some embodiments, the housing 118 may comprise an opening 113 for loading and unloading the tray 116. Besides, a light source 112 may be provided within the housing 118. Examples of the light source 112 may comprise, but are not limited to, an incandescent lamp, a halogen lamp, a fluorescent lamp, a mercury lamp, a light-emitting diode (LED) lamp, or the like. The light source may have configurable illuminative parameters, such as luminous flux, color temperature, power, brightness, and the like, which are adjustable by the user.
While capturing an image of the tray 116, the tray 116 may be placed within the housing 118, such that the influence of environmental light outside the housing 118 is reduced and the quality of the captured image is improved.
In some embodiments, the scanning device 110 may further comprise a support 117. The tray 116 may be fixed onto the support 117 for image capturing. In this case, different trays may have a relatively stable position in the captured image, thereby facilitating analysis of the captured image. It should be understood that, though only one camera 114 and one light source 112 are shown in the example of Fig. 1, multiple cameras 114 and/or multiple light sources may be included in the scanning device 110.
As shown in Fig. 1, the scanning device 110 may be communicatively coupled to a computing device 120. In some embodiments, the scanning device 110 may capture an image 140 of the tray 116 and then send the captured image 140 to the computing device 120 for analysis.
For one example, a pathologist may press a button (not shown in Fig. 1) on the scanning device 110 to cause the scanning device 110 to capture the image 140 and then transmit the image to the computing device 120.
For another example, the scanning device 110 may detect whether the tray 116 is ready within the housing 118, and may start capturing the image 140 of the tray 116 a predetermined time period after the tray 116 is determined to be ready.
For a further example, the scanning device 110 may receive an instruction from the computing device 120 and then start capturing the image 140 of the tray 116. For example, a user may interact with the computing device 120 through a graphical user interface for causing the scanning device 110 to capture the image 140 of the tray 116.
After receiving the image captured by the scanning device 110, the computing device 120 may extract information from the label(s) of the slide(s) in the captured image 140. The process of extracting information will be discussed in detail with reference to Figs. 3-7 below.
In some embodiments, the computing device 120 may be coupled with a display 122. The display 122 may present to a user (e.g., a pathologist) a graphical user interface. The user may, for example, review the analysis results of the tray 116 through the graphical user interface. Additionally, the graphical user interface may also provide the user with some components for controlling the scanning device 110. In one example, the user may interact with the graphical user interface to power on and/or power off the scanning device 110. In another example, the user may also configure the parameters for capturing the image 140 of the tray 116 through the graphical user interface.
In some further embodiments, the computing device 120 may be further coupled to a storage device 130. For example, the computing device 120 may store the extracted information in the storage device 130, and may retrieve the stored information from the storage device 130 in response to a future query.
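For illustration only, a minimal sketch of such a store-and-query flow follows, assuming a simple in-memory mapping from slide identification to tray image and slot index; the disclosure leaves the actual storage backend (e.g., the storage device 130) open, so all names here are hypothetical:

```python
from typing import Optional

# Hypothetical in-memory store; a real system would use the storage device 130.
records: dict = {}

def store(slide_id: str, tray_image_path: str, slot_index: int) -> None:
    """Store extracted label information in association with the tray image
    and the index of the slot holding the slide."""
    records[slide_id] = {"image": tray_image_path, "slot": slot_index}

def find_tray_image(target_id: str) -> Optional[str]:
    """Answer a future query: given a target slide identification, return
    the associated tray image path (or None if unknown)."""
    entry = records.get(target_id)
    return entry["image"] if entry else None
```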
Though the computing device 120 is shown as a separate entity from the scanning device 110, the computing device 120 may be contained in or integrated with the scanning device 110 as a hardware and/or software component of the scanning device 110. In such cases, the image analysis as will be discussed in detail below may then be implemented by the scanning device 110 itself.
It should also be understood that, the scanning device 110 may have a different structure than the example in Fig. 1. For example, a pathologist may use a cellphone or camera to capture an image 140 of a tray 116 which is placed on a desk. The solution of slides processing as will be discussed in detail below can also be applied to an image of a tray captured by a different scanning device than the example in Fig. 1.
Fig. 3 illustrates a flowchart of an example process 300 for slide processing according to an embodiment of the present disclosure. The process 300 may be implemented by the computing device 120 as shown in Fig. 1. For ease of illustration, the process 300 will be described with reference to Figs. 1-2.
At block 310, the computing device 120 obtains an image 140 (also referred to as the first image 140) of a tray 116 comprising a plurality of slots 210, wherein a slot 210 is capable of holding a slide 220, and the slide 220 comprises a label 222 and a specimen 224 for pathologic analysis.
In some embodiments, to capture an image 140 of the tray 116, at least one slide 220 may be placed in the slot(s) 210. It should be noted that it is unnecessary to have all of the slots 210 occupied. For example, when the tray 116 comprising twenty slots 210 is used, six slides may be placed in any suitable slots 210 for image capturing, with the other fourteen slots being empty.
When the tray 116 is prepared, a scanning device 110 may be used to capture an image 140 of the tray 116. In some embodiments, the scanning device 110 may send the captured image 140 of the tray 116 to the computing device 120 via a wired or wireless network. Alternatively, after capturing the image 140 of the tray 116, the scanning device 110 may send the captured image 140 along with other captured images at one time, e.g., later in the day.
In some further embodiments, after being captured, an image 140 of the tray 116 may be stored in a storage device, for example the storage device 130. Further, the computing device 120 may later obtain the image 140 of the tray 116 from the storage device 130 for further analysis.
In some further embodiments, if the computing device 120 is implemented as an internal component of the scanning device 110, the computing device 120 may for example obtain the image 140 of the tray 116 from an image capturing component (e.g., the camera 114) through internal communication within the scanning device 110.
It should be understood that the image of the tray 116 may be captured by any suitable devices, including but not limited to the example scanning device 110 shown in Fig. 1.
At block 320, the computing device 120 determines a plurality of sub-images from the first image 140 based on a structure of the tray 116, wherein each of the plurality of sub-images corresponds to one of the plurality of slots 210.
At block 330, the computing device 120 extracts information from the label 222 of the slide 220 based on at least one of the sub-images.
In some embodiments, to improve the accuracy of image processing, the computing device 120 may first obtain a second image by enhancing the first image 140. Fig. 4 illustrates a flowchart of an example process 400 for enhancing the first image.
As shown in Fig. 4, at block 410, the computing device 120 may convert the captured first image 140 of the tray 116 into a grayscale image. In most cases, the first image 140 may be a color image, which may contain richer information for later analysis. In some embodiments, the computing device 120 may convert the first image 140 into an 8-bit grayscale image. In an 8-bit grayscale image, a grayscale value of a pixel is represented by an 8-bit byte (i.e., a value range of 0-255).
In some embodiments, a maximum value of the three components of R, G, and B of a pixel in the first image 140 may be calculated as the grayscale value. Alternatively, a weighted average of the three components of R, G, and B of a pixel in the first image 140 may be calculated as the grayscale value.
As another example, a grayscale image can be obtained by converting the RGB image to another color space (e.g. HSV, HLS, or the like) and then calculating the gray value based on components of the other color space.
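To make the conversion options concrete, the following is a minimal sketch using OpenCV and NumPy; the function name and the choice of libraries are illustrative rather than part of the disclosure:

```python
import cv2
import numpy as np

def to_grayscale(bgr: np.ndarray, mode: str = "weighted") -> np.ndarray:
    """Convert a BGR tray image to an 8-bit grayscale image."""
    if mode == "max":
        # Maximum of the B, G, R components per pixel.
        return bgr.max(axis=2).astype(np.uint8)
    if mode == "weighted":
        # Weighted average of R, G, B (OpenCV uses 0.299R + 0.587G + 0.114B).
        return cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Alternative: derive the gray value from another color space,
    # e.g. the lightness channel of HLS.
    return cv2.cvtColor(bgr, cv2.COLOR_BGR2HLS)[:, :, 1]
```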
At block 420, the computing device 120 may obtain a second image by sharpening the grayscale image. In some embodiments, in order to eliminate blurs in the grayscale image, the grayscale image may be sharpened by, for example, a Laplace operator. An example Laplace operator is a 4-connection kernel, having a value of 4 for the element in the center and a value of -1 for the four neighboring elements.
In addition, the sharpened image may be converted back into an 8-bit grayscale image, with overflowing pixel values clipped. By sharpening the grayscale image, more salient edges in the image can be obtained, which may facilitate the image analysis described below.
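A sketch of this sharpening step follows, assuming the 4-connection kernel described above and that the edge response is added back to the original before clipping to the 8-bit range; the disclosure does not fix the exact formula, so this is one common variant:

```python
import cv2
import numpy as np

# 4-connection Laplace kernel: 4 in the center, -1 for the four neighbors.
LAPLACE_KERNEL = np.array([[ 0, -1,  0],
                           [-1,  4, -1],
                           [ 0, -1,  0]], dtype=np.float32)

def sharpen(gray: np.ndarray) -> np.ndarray:
    """Sharpen an 8-bit grayscale image with the Laplace operator."""
    # Compute the Laplacian response in a wider type to catch overflow.
    response = cv2.filter2D(gray.astype(np.float32), -1, LAPLACE_KERNEL)
    # Add the edge response back to the original (a common way to sharpen
    # with a Laplace operator), then clip back into the 8-bit range.
    return np.clip(gray.astype(np.float32) + response, 0, 255).astype(np.uint8)
```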
At block 430, the computing device 120 may determine whether the standard deviation of the second image is greater than a threshold. In some embodiments, a standard deviation of the second image may be calculated and compared with a threshold.
If the standard deviation is greater than the threshold, it may indicate that the second image has sufficient edge information and the blurs have been effectively eliminated. In this case, the process 400 proceeds to block 440. At block 440, the computing device may divide the enhanced second image into a plurality of sub-images, specifically based on the structure of the tray 116.
If the standard deviation is determined as being less than or equal to the threshold at block 430, the process 400 proceeds to block 450. At block 450, the computing device 120 may prompt the user to capture another image of the tray. When capturing a new image of the tray, the user may configure illuminative parameters of the light source (for example, luminous flux, color temperature, power, brightness, etc.) and/or the parameters of the camera (for example, resolution, white balance, focus, etc.), such that the newly obtained image may have a higher quality for recognition. In some embodiments, a preview of the image may be presented on the display to assist the user in adjusting the parameters.
In some embodiments, an image that meets the threshold requirement might not be obtained after a number of attempts. In this case, the captured image with the highest standard deviation may be selected for the later processing. Alternatively, the threshold used in block 430 may, for example, be decreased, and an image with a relatively higher quality can then be selected.
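A possible shape of this quality gate, with an illustrative threshold value (the disclosure does not specify one):

```python
def select_image(candidates, threshold=40.0):
    """Quality gate of blocks 430/450: return the first candidate image
    whose standard deviation exceeds the threshold; if none qualifies,
    fall back to the candidate with the highest standard deviation."""
    for image in candidates:
        if float(image.std()) > threshold:
            return image
    return max(candidates, key=lambda img: float(img.std()))
```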
To detect the label (s) included in the second image, the computing device may further determine the plurality of sub-images from the second image based on the structure of the tray. In some embodiments, the computing device 120 may determine a template corresponding to the structure of the tray 116, wherein the template may indicate a layout of the plurality of slots 210.
In some embodiments, the tray (s) 116 for holding the slides are designed with a single structure. In other words, the layouts of the slots 210 on the tray (s) 116 are the same. In this case, a template for indicating the layout of the plurality of slots 210 in the tray 116 may be predetermined and maintained. For example, the template may indicate that there are twenty slots arranged in two rows, each row comprising ten slots evenly arranged in space.
In some further embodiments, trays with multiple types of structures may be used. For example, multiple trays with different structures may be provided, and a pathologist may select one from them for holding the slides. In this case, multiple templates corresponding to the multiple trays could be predetermined, and then maintained in association with an identification of the tray. For example, a first template indicating a first layout of the slots may be maintained in association with a first identification, and a second template indicating a second layout of the slots may be maintained in association with a second identification.
When determining the template corresponding to the tray 116 being used, an identification of the tray 116 may be first obtained. In one example, a pathologist may first input, through a graphical user interface, that a tray 116 with a first identification is being used. The computing device 120 may then obtain the maintained first template based on the first identification.
In another example, the computing device 120 may automatically detect the identification of the tray 116 for example based on the first or second image. For example, a number “1” might be printed on the tray 116 for indicating that the tray 116 has a first identification. The computing device 120 may then obtain the maintained first template based on the first identification.
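To illustrate, such a template registry could be as simple as a mapping from tray identification to layout parameters; the identifications and layouts below are illustrative assumptions only:

```python
# Hypothetical template registry keyed by tray identification.
TEMPLATES = {
    "1": {"rows": 2, "cols": 10},  # first layout: two rows of ten slots
    "2": {"rows": 4, "cols": 5},   # second layout: four rows of five slots
}

def template_for(tray_id: str) -> dict:
    """Look up the maintained template for the tray being used."""
    return TEMPLATES[tray_id]
```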
After obtaining the layout of the slots 210 in the tray 116, the computing device 120 may then divide the second image into the plurality of sub-images according to the determined template. For example, the computing device 120 may divide the second image into twenty sub-images according to the obtained template indicating that there are twenty slots arranged in two rows, each row comprising ten slots evenly arranged in space.
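A minimal sketch of the template-based division, assuming the template reduces to an evenly spaced grid and the tray fills the image frame (a production template might instead carry per-slot pixel coordinates):

```python
import numpy as np

def divide_by_template(image: np.ndarray, rows: int = 2, cols: int = 10) -> list:
    """Divide the enhanced tray image into rows x cols sub-images,
    one per slot, following the layout indicated by the template."""
    height, width = image.shape[:2]
    sub_images = []
    for r in range(rows):
        for c in range(cols):
            top, bottom = r * height // rows, (r + 1) * height // rows
            left, right = c * width // cols, (c + 1) * width // cols
            sub_images.append(image[top:bottom, left:right])
    return sub_images
```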
In some other embodiments, the computing device 120 may detect edges of the plurality of slots 210 in the second image. It should be noted that any proper edge detection algorithm, e.g., the Sobel algorithm or the Canny algorithm, might be applied for detecting the edges of a slot. The present disclosure is not limited in this aspect.
Further, the computing device 120 may then determine the plurality of sub-images from the second image based on the edges. For example, the computing device 120 may determine the plurality of regions bounded by the edges as the plurality of sub-images.
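One possible realization of this edge-based variant, using OpenCV's Canny detector and contour bounding boxes; the thresholds and the minimum-area filter are illustrative assumptions, not values from the disclosure:

```python
import cv2

def sub_images_from_edges(image, min_area: int = 5000) -> list:
    """Detect slot edges in the enhanced image and return the regions
    bounded by those edges as sub-images."""
    edges = cv2.Canny(image, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    regions = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w * h >= min_area:  # discard small spurious contours
            regions.append(image[y:y + h, x:x + w])
    return regions
```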
Fig. 5 illustrates a schematic diagram 500 for slide processing according to an embodiment of the present disclosure. As shown in Fig. 5, an enhanced second image 510 may be obtained according to the process discussed above. The computing device 120 may further determine twenty sub-images 520 from the second image 510 based on the process discussed above.
In some embodiments, during the information extracting, each of the plurality of sub-images 520 may be assigned with an index. In the example of Fig. 5, twenty indexes (e.g. 1, 2, 3…K) are assigned to the sub-images 520. The assigned indexes may help associate the information to be extracted with the corresponding sub-image.
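As a small illustration, a 1-based sub-image index on the example 2 x 10 tray would map to a slot position as follows; the row-major ordering is an assumption for illustration:

```python
def slot_position(index: int, cols: int = 10) -> tuple:
    """Map a 1-based sub-image index to a (row, column) slot position,
    assuming indexes are assigned in row-major order."""
    return (index - 1) // cols, (index - 1) % cols
```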
In some further embodiments, the computing device 120 may also determine the plurality of sub-images directly from the first image 140, without enhancing the first image into a second image. Additionally, the plurality of sub-images obtained from the first image 140 may be enhanced for later processing according to the process 400 as discussed with reference to Fig. 4.
Further, to facilitate extracting information from the label(s), the computing device 120 may further divide a sub-image 520 into a plurality of regions, wherein at least one of the plurality of regions has a size corresponding to the label 222.
Typically, as shown in Fig. 2, a slide 220 may have an elongate shape, where the label portion is positioned at one end of the slide and the specimen portion at the other end. Further, different labels 222 may have the same size. In order to reduce the calculation cost for detecting the label, the computing device 120 may divide a sub-image 520 into multiple regions 530. In the example of Fig. 5, a sub-image 520 is divided into three regions 530, and each region 530 has a size corresponding to the label 222. That is, a label 222 always falls within one of these regions 530, and would not be included in two or more regions 530.
Since the label 222 is typically located at one end of the slide 220, only a region 530 at an end of the sub-image 520 needs to be considered as a potential region 540 including a label 222. Accordingly, the computing device 120 may extract the information from the at least one of the plurality of regions 530, without considering regions in, for example, the middle of the sub-image.
In some embodiments, if the direction in which the slide(s) 220 are placed on the tray 116 is always the same and the direction in which the tray 116 is placed in the scanning device 110 is always the same, only one of the plurality of regions 530 may need to be processed. For example, if it can be ensured that a label 222 is always placed at the top of the slide 220 as shown in Fig. 5, the computing device 120 may then only process the top region 530 (e.g., the region with index “91”) of the sub-image 520 (e.g., the sub-image with index “9”), thereby reducing the calculation cost.
In some further embodiments, if the directions in which the slide(s) 220 are placed on the tray 116 may differ, two or more of the plurality of regions 530 may need to be processed. In the example of Fig. 5, in which a sub-image 520 is divided into three regions 530, the top region (e.g., the region with index “91”) and the bottom region (e.g., the region with index “93”) of the three regions may be processed, and the middle region (e.g., the region with index “92”) may be skipped accordingly. In this way, the manner of placing the slides remains flexible, and the calculation cost can still be reduced.
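A sketch of this region selection, assuming each sub-image is split into three equal stacked regions as in Fig. 5; only the top and bottom thirds are returned as label candidates.

    def candidate_label_regions(sub_image):
        # Divide the sub-image into three stacked regions and keep only
        # the two end regions, where a label may be located.
        third = sub_image.shape[0] // 3
        top = sub_image[:third]        # e.g. the region with index "91"
        bottom = sub_image[-third:]    # e.g. the region with index "93"
        return [top, bottom]           # the middle region is skipped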
Regions determined as potentially containing a label are collected for the subsequent recognition. As discussed above, a label 222 may comprise different types of information, for which different recognition solutions may be utilized. For example, a label may comprise at least one of: a character, a symbol, or a graphically encoded representation.
In some embodiments, for each of the collected images, Optical Character Recognition (OCR) may be applied to recognize the characters or graphical symbols on the label 222. For example, the text “IM123456789” may be extracted from the label 222 through OCR, and may be determined as an identification of the slide 220.
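A sketch of such character recognition, assuming the pytesseract wrapper around the Tesseract OCR engine is available; any OCR engine could be substituted.

    import pytesseract

    def ocr_label(region):
        # Run OCR on a candidate label region and return the raw text,
        # e.g. "IM123456789".
        return pytesseract.image_to_string(region).strip()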
In some other embodiments, for each of the collected images, a suitable barcode recognition algorithm (for example, for Data Matrix, QRCode, PDF417, etc. ) may be utilized to decode the graphically encoded representation. For example, an identification “IM123456789” may be encoded by a QR code on the label 222, and the computing device 120 may extract the identification by decoding the QR code.
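A sketch of the decoding step, assuming the pyzbar library, which handles QR codes among other symbologies; Data Matrix codes would require a different decoder, such as pylibdmtx.

    from pyzbar.pyzbar import decode

    def decode_label(region):
        # Return the first decoded payload as text, or None if no
        # graphically encoded representation is found in the region.
        results = decode(region)
        return results[0].data.decode("utf-8") if results else None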
Additionally, since each of the sub-images includes at most one label image (one label per slide), the recognition algorithm does not necessarily need to be applied to all of the regions divided from the same sub-image. For example, if information has been extracted from the region identified as “11”, then the region identified as “13” does not need to be processed for recognition and can be skipped, thus accelerating the batch scanning of the present disclosure.
However, since the label may be stained during slide preparation and barcode recognition algorithms are generally vulnerable to noise, recognition may fail to extract the information in the barcode, in particular in a two-dimensional code.
Now refer to Fig. 6, which illustrates a flowchart of an example process 600 for filtering a label image, e.g. one including a two-dimensional code, so as to make the batch scanning of the present disclosure more robust. At block 610, specifically for each image from which the recognition algorithm fails to extract information, the computing device 120 may obtain an RGB image of the image. As mentioned above, the images to which the recognition algorithm has been applied but from which it fails to extract information are enhanced grayscale images (as in blocks 410-420). The raw (RGB) versions of these images may now be obtained to recover details that may have been lost during the previous processing.
At block 620, the computing device 120 may convert the RGB image to a grayscale image using the L (lightness) channel of the HLS (Hue, Lightness, Saturation) color space. In some embodiments, the RGB image may be converted to an HLS image by calculating the HLS components, as is known in the art, and the value of the L channel of each pixel is taken as the grayscale value of that pixel.
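A sketch of this conversion with OpenCV, which by convention stores raw images in BGR channel order; in OpenCV's HLS output the channels are ordered H, L, S.

    import cv2

    def lightness_grayscale(bgr_image):
        # Convert to HLS and keep only the lightness channel as grayscale.
        hls = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HLS)
        return hls[:, :, 1]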
At block 630, the computing device 120 may convert the grayscale image to a binary image using, for example, an adaptive threshold. In some embodiments, a threshold is applied to the grayscale image to generate a binary image, where if the value of a pixel surpasses the adaptive threshold, the pixel is set to white (e.g. a value of 255); otherwise the pixel is set to black (e.g. a value of 0). The threshold varies according to the local statistics around the pixel to which it is applied. For example, if the pixel is located in a region where the surrounding pixels generally have relatively high gray values, the threshold applied to the pixel may adaptively increase. Likewise, if the pixel is located in a region where the surrounding pixels generally have relatively low gray values, the threshold applied to the pixel may adaptively decrease. As such, the adaptive threshold approach may prevent image details in overexposed or underexposed regions from being lost when generating the binary image.
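A sketch of block 630 using OpenCV's adaptive threshold; the neighborhood size and the offset constant are assumed values that would be tuned in practice.

    import cv2

    def binarize(gray):
        # Threshold each pixel against the mean of its local neighborhood,
        # so over- or under-exposed regions keep their detail.
        return cv2.adaptiveThreshold(gray, 255,
                                     cv2.ADAPTIVE_THRESH_MEAN_C,
                                     cv2.THRESH_BINARY,
                                     blockSize=31, C=10)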
At block 640, the computing device 120 may detect edges of the binary image. According to some embodiments, a Canny operator may be applied to the binary image to generate the edge map.
At block 650, the computing device 120 may dilate the detected edges. According to some embodiments, the kernel of the dilation operation may be a 4-connection kernel (considering the 4 pixels above, below, and to the left and right of the center pixel) or an 8-connection kernel (considering the 8 pixels around the center pixel). The dilation allows isolated pixels, most of which are statistically noise, to be removed effectively in the subsequent filtering.
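A sketch of blocks 640 and 650, assuming OpenCV; the Canny thresholds are illustrative, and the cross-shaped structuring element corresponds to the 4-connection kernel mentioned above.

    import cv2

    def edges_and_dilate(binary):
        # Detect edges, then dilate them with a 4-connection (cross) kernel.
        edges = cv2.Canny(binary, 100, 200)
        kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
        return cv2.dilate(edges, kernel, iterations=1)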
The computing device 120 may further filter pixels in the dilated image to generate an image suitable for recognition by an algorithm for the two-dimensional code. At block 660, the computing device 120 may filter pixels in the dilated image based on connectivity. If a pixel is not an 8-connected pixel, the pixel is an isolated pixel and should be abandoned (e.g. its value is set to 255 as background, which means white).
At block 670, the computing device 120 may filter pixels in the dilated image based on shape, specifically based on the shape of the connective region of the pixels. In some embodiments, only pixels located in an approximately square region are retained (e.g. their values are set to 0 as foreground, which means black). For example, the ratio of the width to the height of the connective region of a pixel may be calculated. The ratio should fall within a range of predefined values (e.g. 0.7-1.2), which depend on the type of the two-dimensional code. Pixels not satisfying this condition should be abandoned.
At block 680, the computing device 120 may filter pixels in the dilated image based on size, specifically based on the size of the connective region of the pixels. In some embodiments, only pixels located in a connective region having a proper size are retained (e.g. their values are set to 0 as foreground, which means black). For example, the product of the width and height of the connective region of a pixel, a quick approximation of the area of the region, should fall within a range of predefined values, which depend on the type of the two-dimensional code. Pixels not satisfying this condition should be abandoned.
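The three filters of blocks 660-680 may be combined in a single pass over the connected components, as in the following sketch; the area bounds are assumptions, and the 0.7-1.2 aspect-ratio range is the example given above.

    import cv2
    import numpy as np

    def filter_components(dilated, min_area=9, max_area=400):
        # Keep only 8-connected components whose bounding box is roughly
        # square and of plausible size; everything else becomes background.
        n, labels, stats, _ = cv2.connectedComponentsWithStats(
            dilated, connectivity=8)
        out = np.full_like(dilated, 255)      # white background
        for i in range(1, n):                 # label 0 is the background
            w = stats[i, cv2.CC_STAT_WIDTH]
            h = stats[i, cv2.CC_STAT_HEIGHT]
            if 0.7 <= w / h <= 1.2 and min_area <= w * h <= max_area:
                out[labels == i] = 0          # retained pixels as foreground
        return out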
The algorithm for decoding the two-dimensional barcode may be applied again to the results of the process 600 to extract the information in the label. If it still fails to recognize and extract the information, a Laplace operator may be applied to the images to sharpen or enhance them, followed by applying the algorithm again. The Laplace operator may be, for example, a 3-by-3 kernel, having a value of 2, 4, 6, 8, or 10 for the center element, and a value of -1, 0, or 1 for each of the other 8 surrounding elements. In some embodiments, the images may be processed with the Laplace operator repeatedly before applying the algorithm to extract the information.
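A sketch of this fallback sharpening, using one kernel from the family described above (center weight 8, surrounding weights -1); repeated application simply feeds the output back into the same filter.

    import cv2
    import numpy as np

    LAPLACE_KERNEL = np.array([[-1, -1, -1],
                               [-1,  8, -1],
                               [-1, -1, -1]], dtype=np.float32)

    def laplace_enhance(gray, repeats=1):
        # Apply the Laplace operator one or more times to accentuate the
        # high-frequency structure of the two-dimensional code.
        for _ in range(repeats):
            gray = cv2.filter2D(gray, -1, LAPLACE_KERNEL)
        return gray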
Based on the foregoing discussion, the embodiments of the present disclosure can extract information from a batch of slides in one pass and reliably, thus improving the efficiency of slide management.
In some embodiments, the computing device 120 may present, for example via the display 122, the captured image 140 (the first image) of the tray 116 and a visual element indicating a status of one of the plurality of sub-images.
In some embodiments, the computing device 120 may present via the display 122 a first status indicating that a label is contained in a corresponding sub-image and that the information indicated by the label has been successfully extracted. For example, a green block may be presented to indicate that a corresponding sub-image has been successfully recognized and the information extracted. Additionally, the extracted information may, for example, be presented in response to a click on the block.
In some other embodiments, the computing device 120 may present a second status indicating that no label is contained in a corresponding sub-image. For example, a grey block may be presented to indicate that no slide is placed in the corresponding slot.
In some further embodiments, the computing device may present a third status indicating that a label is contained in a corresponding sub-image but the information indicated by the label fails to be extracted. For example, a red block may be presented to indicate that recognition of the corresponding sub-image has failed.
In some embodiments, the computing device 120 may record the extracted information. As discussed above, the extracted information may comprise an identification of the slide 220. In some embodiments, the computing device 120 may update information associated with the identification to indicate that a particular process (e.g., a retaining process) has been finished for the slide 220. In this case, the computing device 120 may later provide, for example in response to a query based on the identification, information about the processing stages of a slide based on the stored information. As such, by extracting information from slides in a batch, the efficiency of slide tracking may be improved.
In some other embodiments, the computing device 120 may store the extracted information in association with the image 140 of the tray 116. In this case, the captured image 140 may help verify whether the extracted information is correct, and a user may update the information after reviewing the stored image 140. Additionally, by storing the information visually indicated or encoded on the label in association with the captured image 140, a user may retrieve the specimen of the slide for further review or analysis after the slides have been archived.
In some embodiments, the computing device 120 may receive an input indicating a target identification of a slide, and may then obtain a target image of a tray based on the target identification. For example, the computing device 120 may look up the target image in the storage device 130 using the identification. Further, the computing device 120 may present the target image as a response to the input. In this way, a user may easily retrieve the originally captured image of the tray using a particular identification of a slide.
In some further embodiments, the computing device 120 may further determine an index of the sub-image corresponding to the label, wherein the index indicates a position of the slot holding the slide on the tray. As discussed above, each of the sub-images may be assigned a respective index. The computing device 120 may further store the extracted information in association with the index.
For example, an identification extracted from the label may be stored in association with the index “1”. In this way, a user can easily identify from the stored image 140 which one of the plurality of slides corresponds to the identification.
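A minimal sketch of this storage-and-retrieval scheme; the in-memory mapping below is a hypothetical stand-in for the storage device 130.

    records = {}  # identification -> (tray image, slot index)

    def store_record(identification, tray_image, slot_index):
        records[identification] = (tray_image, slot_index)

    def retrieve_record(identification):
        # Returns the original tray image and the slot position for the
        # given slide identification, or None if it was never recorded.
        return records.get(identification)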
It should be understood that the extracted information of a slide may be applied for any other proper purpose, details of which are not listed herein.
Fig. 7 illustrates a schematic block diagram of an example device 700 for implementing embodiments of the present disclosure. For example, the computing device 120 according to the embodiment of the present disclosure can be implemented by the device 700. As shown, the device 700 includes a central processing unit (CPU) 701, which can execute various suitable actions and processing based on the computer program instructions stored in a read-only memory (ROM) 702 or computer program instructions loaded in a random-access memory (RAM) 703 from a storage unit 708. The RAM 703 may also store all kinds of programs and data required by the operations of the device 700. The CPU 701, ROM 702 and RAM 703 are connected to each other via a bus 704. The input/output (I/O) interface 705 is also connected to the bus 704.
A plurality of components in the device 700 is connected to the I/O interface 705, including: an input unit 706, for example, a keyboard, a mouse, and the like; an output unit 707, for example, various kinds of displays and loudspeakers, and the like; a storage unit 708, such as a magnetic disk and an optical disk, and the like; and a communication unit 709, such as a network card, a modem, a wireless transceiver, and the like. The communication unit 709 allows the device 700 to exchange information/data with other devices via the computer network, such as Internet, and/or various telecommunication networks.
The above described processes, for example, the processes 300, 400 and 600, can also be performed by the processing unit 701. For example, in some embodiments, the process 300 may be implemented as a computer software program tangibly included in a machine-readable medium, for example, the storage unit 708. In some embodiments, the computer program may be partially or fully loaded and/or installed on the device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the CPU 701, one or more steps of the above described methods or processes can be implemented.
The present disclosure may be a method, a device, a system and/or a computer program product. The computer program product may include a computer-readable storage medium, on which the computer-readable program instructions for executing various aspects of the present disclosure are loaded.
The computer-readable storage medium may be a tangible device that maintains and stores instructions utilized by the instruction executing devices. The computer-readable storage medium may be, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device or any appropriate combination of the above. More concrete examples of the computer-readable storage medium (non-exhaustive list) include: a portable computer disk, a hard disk, a random-access memory (RAM) , a read-only memory (ROM) , an erasable programmable read-only memory (EPROM or flash) , a static random-access memory (SRAM) , a portable compact disk read-only memory (CD-ROM) , a digital versatile disk (DVD) , a memory stick, a floppy disk, a mechanical coding device, a punched card stored with instructions thereon, or a projection in a slot, and any appropriate combination of the above. The computer-readable storage medium utilized herein is not interpreted as transient signals per se, such as radio waves or freely propagated electromagnetic waves, electromagnetic waves propagated via waveguide or other transmission media (such as optical pulses via fiber-optic cables) , or electric signals propagated via electric wires.
The described computer-readable program instructions may be downloaded from the computer-readable storage medium to each computing/processing device, or to an external computer or external storage via Internet, local area network, wide area network and/or wireless network. The network may include copper-transmitted cables, optical fiber transmissions, wireless transmissions, routers, firewalls, switches, network gate computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the  network and forwards the computer-readable program instructions for storage in the computer-readable storage medium of each computing/processing device.
The computer program instructions for executing operations of the present disclosure may be assembly instructions, instructions of instruction set architecture (ISA) , machine instructions, machine-related instructions, microcodes, firmware instructions, state setting data, or source codes or target codes written in any combination of one or more programming languages, where the programming languages consist of object-oriented programming languages, e.g., Smalltalk, C++, and so on, and conventional procedural programming languages, such as “C” language or similar programming languages. The computer-readable program instructions may be implemented fully on a user computer, partially on the user computer, as an independent software package, partially on the user computer and partially on a remote computer, or completely on the remote computer or a server. In the case where a remote computer is involved, the remote computer may be connected to the user computer via any type of network, including a local area network (LAN) and a wide area network (WAN) , or to the external computer (e.g., connected via Internet using an Internet service provider) . In some embodiments, state information of the computer-readable program instructions is used to customize an electronic circuit, e.g., a programmable logic circuit, a field programmable gate array (FPGA) or a programmable logic array (PLA) . The electronic circuit may execute computer-readable program instructions to implement various aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to a flow chart and/or block diagram of method, device (system) and computer program products according to embodiments of the present disclosure. It should be appreciated that each block of the flow chart and/or block diagram and the combination of various blocks in the flow chart and/or block diagram can be implemented by computer-readable program instructions.
The computer-readable program instructions may be provided to the processing unit of a general-purpose computer, dedicated computer or other programmable data processing devices to manufacture a machine, such that the instructions, when executed by the processing unit of the computer or other programmable data processing apparatuses, generate an apparatus for implementing functions/actions stipulated in one or more blocks in the flow chart and/or block diagram. The computer-readable program instructions may also be stored in the computer-readable storage medium and cause the computer,  programmable data processing apparatus and/or other devices to work in a particular manner, such that the computer-readable medium stored with instructions contains an article of manufacture, including instructions for implementing various aspects of the functions/actions stipulated in one or more blocks of the flow chart and/or block diagram.
The computer-readable program instructions may also be loaded into a computer, other programmable data processing apparatuses or other devices, so as to execute a series of operation steps on the computer, other programmable data processing apparatuses or other devices to generate a computer-implemented procedure. Therefore, the instructions executed on the computer, other programmable data processing apparatuses or other devices implement functions/actions stipulated in one or more blocks of the flow chart and/or block diagram.
The flow chart and block diagram in the drawings illustrate system architectures, functions and operations that may be implemented by a system, a method and a computer program product according to multiple implementations of the present disclosure. In this regard, each block in the flow chart or block diagram can represent a module, a portion of program segment or code, where the module and the portion of program segment or code include one or more executable instructions for performing stipulated logic functions. In some alternative implementations, it should be appreciated that the functions indicated in the block may also take place in an order different from the one indicated in the drawings. For example, two successive blocks may be in fact executed in parallel or sometimes in a reverse order depending on the involved functions. It should also be appreciated that each block in the block diagram and/or flow chart and combinations of the blocks in the block diagram and/or flow chart may be implemented by a hardware-based system exclusively for executing stipulated functions or actions, or by a combination of dedicated hardware and computer instructions.
Various implementations of the present disclosure have been described above. The above description is only exemplary, not exhaustive, and is not limited to the disclosed implementations. Many modifications and alterations will be obvious to those skilled in the art without deviating from the scope and spirit of the various implementations explained. The terms used herein were chosen to best explain the principles and practical applications of each implementation, or the technical improvements over technologies available in the market, and to enable others of ordinary skill in the art to understand the implementations of the present disclosure.
LIST OF EMBODIMENTS
Embodiment 1: A method for slide processing comprising:
obtaining a first image of a tray comprising a plurality of slots, a slot being capable of holding a slide, the slide comprising a label and a specimen for pathologic analysis;
determining a plurality of sub-images from the first image based on a structure of the tray, each of the plurality of sub-images corresponding to one of the plurality of slots; and
extracting information from the label of the slide based on at least one of the sub-images.
Embodiment 2: The method of embodiment 1, wherein the first image of the tray is obtained by an image capturing device, the image capturing device comprising a camera, a light source and a housing for enclosing the camera and the light source, and wherein the tray is placed in the housing for image capturing.
Embodiment 3: The method of embodiment 1, wherein determining the plurality of sub-images comprises:
obtaining a second image by enhancing the first image; and
determining the plurality of sub-images from the second image based on the structure of the tray.
Embodiment 4: The method of embodiment 3, wherein determining the plurality of sub-images from the second image comprises:
determining a template corresponding to the structure of the tray, the template indicating a layout of the plurality of slots; and
dividing the second image into the plurality of sub-images according to the determined template.
Embodiment 5: The method of embodiment 3, wherein determining the plurality of sub-images from the second image comprises:
detecting edges of the plurality of slots in the second image; and
determining the plurality of sub-images from the second image based on the edges.
Embodiment 6: The method of embodiment 1, wherein extracting information from the label of the slide comprises:
dividing a sub-image into a plurality of regions, at least one of the plurality of regions having a size corresponding to the label; and
extracting the information from the at least one of the plurality of regions.
Embodiment 7: The method of embodiment 6, wherein the label comprises a two-dimensional code, and wherein extracting the information comprises:
for each of the plurality of regions:
extracting the information encoded in the two-dimensional code from the region of the label;
in accordance with a failure of extracting the information,
obtaining data of a lightness channel corresponding to the region;
generating a binary image based on the data of the lightness channel;
filtering out a connective region in the binary image based on a shape or a size of the connective region; and
extracting the information from the filtered binary image.
Embodiment 8: The method of embodiment 1, further comprising:
storing the information in association with the first image.
Embodiment 9: The method of embodiment 8, further comprising:
determining an index of a sub-image corresponding to the label, the index indicating a position of a slot holding the slide on the tray; and
wherein storing the information further comprises:
storing the information in association with the index.
Embodiment 10: The method of embodiment 8, wherein storing the information  further comprises:
storing the information in association with the sub-image corresponding to the slide.
Embodiment 11: The method of embodiment 1, wherein the information comprises at least one of:
data encoded by a graphically encoded representation on the label,
at least one character on the label, or
at least one symbol on the label.
Embodiment 12: The method of embodiment 1, further comprising:
presenting the first image and a visual element indicating a status of one of the plurality of sub-images, the status being selected from a group consisting of:
a first status indicating that a label is contained in a corresponding sub-image and the information indicated by the label is successfully extracted,
a second status indicating that no label is contained in a corresponding sub-image, and
a third status indicating that a label is contained in a corresponding sub-image but the information indicated by the label fails to be extracted.
Embodiment 13: The method of embodiment 1, wherein the information indicates an identification of the slide, the method further comprising:
receiving an input indicating a target identification of a slide;
obtaining a target image of a tray based on the target identification; and
presenting the target image as a response to the input.
Embodiment 14: An electronic device, comprising:
one or more processors;
one or more memories coupled to the one or more processors and having computer-executable instructions stored thereon, the computer-executable instructions, when executed by the one or more processors, causing the device to perform acts comprising:
obtaining a first image of a tray comprising a plurality of slots, a slot being capable of holding a slide, the slide comprising a label and a specimen for pathologic analysis;
determining a plurality of sub-images from the first image based on a structure of the tray, each of the plurality of sub-images corresponding to one of the plurality of slots; and
extracting information from the label of the slide based on at least one of the sub-images.
Embodiment 15: The device of embodiment 14, wherein the first image of the tray is obtained by an image capturing device, the image capturing device comprising a camera, a light source and a housing for enclosing the camera and the light source, and wherein the tray is placed in the housing for image capturing.
Embodiment 16: The device of embodiment 14, wherein determining the plurality of sub-images comprises:
obtaining a second image by enhancing the first image; and
determining the plurality of sub-images from the second image based on the structure of the tray.
Embodiment 17: The device of embodiment 16, wherein determining the plurality of sub-images from the second image comprises:
determining a template corresponding to the structure of the tray, the template indicating a layout of the plurality of slots; and
dividing the second image into the plurality of sub-images according to the determined template.
Embodiment 18: The device of embodiment 14, wherein extracting information from the label of the slide comprises:
detecting a region associated with the label in the at least one of the sub-images; and extracting the information based on the region associated with the label.
Embodiment 19: The device of embodiment 14, wherein extracting information from the label of the slide comprises:
dividing a sub-image into a plurality of regions, at least one of the plurality of regions having a size corresponding to the label; and
extracting the information from the at least one of the plurality of regions.
Embodiment 20: The device of embodiment 19, wherein the label comprises a two-dimensional code, and wherein extracting the information comprises:
for each of the plurality of regions:
extracting the information encoded in the two-dimensional code from the region of the label;
in accordance with a failure of extracting the information,
obtaining data of a lightness channel corresponding to the region;
generating a binary image based on the data of the lightness channel;
filtering out a connective region in the binary image based on a shape or a size of the connective region; and
extracting the information from the filtered binary image.
Embodiment 21: The device of embodiment 14, the acts further comprising:
storing the information in association with the first image.
Embodiment 22: The device of embodiment 21, wherein the acts further comprise:
determining an index of a sub-image corresponding to the label, the index indicating a position of a slot holding the slide on the tray; and
wherein storing the information further comprises:
storing the information in association with the index.
Embodiment 23: The device of embodiment 21, wherein storing the information further comprises:
storing the information in association with the sub-image corresponding to the slide.
Embodiment 24: The device of embodiment 14, wherein the information comprises at least one of:
data encoded by a graphically encoded representation on the label,
at least one character on the label, or
at least one symbol on the label.
Embodiment 25: The device of embodiment 14, wherein the acts further comprise:
presenting the first image and a visual element indicating a status of one of the plurality of sub-images, the status being selected from a group consisting of:
a first status indicating that a label is contained in a corresponding sub-image and the information indicated by the label is successfully extracted,
a second status indicating that no label is contained in a corresponding sub-image, and
a third status indicating that a label is contained in a corresponding sub-image but the information indicated by the label fails to be extracted.
Embodiment 26: The device of embodiment 14, wherein the information indicates an identification of the slide, and wherein the acts further comprise:
receiving an input indicating a target identification of a slide;
obtaining a target image of a tray based on the target identification; and
presenting the target image as a response to the input.
Embodiment 27: A computer readable storage medium having computer-executable instructions stored thereon, the computer-executable instructions, when executed by a processor of an apparatus, causing the apparatus to perform the steps of any one of the method according to embodiments 1-13.
Embodiment 28: A computer program product comprising computer-executable instructions which, when executed by a processor of an apparatus, cause the apparatus to perform the steps of any one of the method according to embodiments 1-13.
Embodiment 29: A scanning device, comprising:
a housing with an opening for loading a tray with a plurality of slots, the tray being capable of holding at least one slide, a slide comprising a label and a specimen for pathologic analysis; and
a camera configured to capture an image of the tray for extracting information from the label.
Embodiment 30: The device of embodiment 29, further comprising a light source with a configurable illuminative parameter.
Embodiment 31: The device of embodiment 29, further comprising:
a processor configured to perform acts comprising, in response to the tray being loaded into the scanning device,
obtaining the image of the tray;
determining a plurality of sub-images from the image of the tray based on a structure of the tray, each of the plurality of sub-images corresponding to one of the plurality of slots; and
extracting information from the label of the slide based on at least one of the sub-images.
LIST OF REFERENCE NUMBERS
100        environment
110        scanning device
112        light source
113        opening
114        camera
116        tray
117        support
118        housing
120        computing device
122        display
130        storage device
140        first image
210        slot
220        slide
221        first area
222        label
223        second area
224        specimen
300        example process
310        obtain a first image of a tray comprising a plurality of slots, a slot being capable of
           holding a slide, the slide comprising a label and a specimen for pathologic analysis
320        determine a plurality of sub-images from the first image based on a structure of the
           tray, each of the plurality of sub-images corresponding to one of the plurality of slots
330        extract information from the label of the slide based on at least one of the sub-images
400        example process
410        convert the captured first image of the tray into a grayscale image
420        sharpen the grayscale image
430        Is standard deviation greater than a threshold?
440        divide the enhanced second image into sub-images
450        prompt to capture another image of the tray
500        schematic diagram
510        second image
520        sub-image
530        region
600        example process
610        obtain an RGB image
620        convert the RGB image to a grayscale image using L channel of HLS
630        convert the grayscale image to a binary image
640        detect edges of the binary image
650        dilate the detected edges
660        filter pixels based on connectivity
670        filter pixels based on shape
680        filter pixels based on size
700        device
701        CPU
702        ROM
703        RAM
704        bus
705        I/O interface
706        input unit
707        output unit
708        storage unit
709        communication unit

Claims (30)

  1. A method for slide processing comprising:
    obtaining a first image (140) of a tray (116) comprising a plurality of slots (210) , a slot (210) being capable of holding a slide (220) , the slide (220) comprising a label (222) and a specimen (224) for pathologic analysis;
    determining a plurality of sub-images from the first image (140) based on a structure of the tray (116) , each of the plurality of sub-images corresponding to one of the plurality of slots (210) ; and
    extracting information from the label (222) of the slide (220) based on at least one of the sub-images.
  2. The method of claim 1, wherein the first image (140) of the tray (116) is obtained by an image capturing device, the image capturing device comprising a camera (114) , a light source (112) and a housing (118) for enclosing the camera (114) and the light source (112) , and wherein the tray (116) is placed in the housing (118) for image capturing.
  3. The method of claim 1, wherein determining the plurality of sub-images comprises:
    obtaining a second image (510) by enhancing the first image (140) ; and
    determining the plurality of sub-images (520) from the second image (510) based on the structure of the tray (116) .
  4. The method of claim 3, wherein determining the plurality of sub-images (520) from the second image (510) comprises:
    determining a template corresponding to the structure of the tray (116) , the template indicating a layout of the plurality of slots (210) ; and
    dividing the second image (510) into the plurality of sub-images (520) according to the determined template.
  5. The method of claim 3, wherein determining the plurality of sub-images (520) from the second image (510) comprises:
    detecting edges of the plurality of slots (210) in the second image (510) ; and
    determining the plurality of sub-images (520) from the second image (510) based on  the edges.
  6. The method of claim 1, wherein extracting information from the label (222) of the slide (220) comprises:
    dividing a sub-image (520) into a plurality of regions (530) , at least one of the plurality of regions (530) having a size corresponding to the label (222) ; and
    extracting the information from the at least one of the plurality of regions (530) .
  7. The method of claim 6, wherein the label (222) comprises a two-dimensional code, and wherein extracting the information comprises:
    for each of the plurality of regions (530) :
    extracting the information encoded in the two-dimensional code from the region (530) of the label (222) ;
    in accordance with a failure of extracting the information,
    obtaining data of a lightness channel corresponding to the region (530) ;
    generating a binary image based on the data of the lightness channel;
    filtering out a connective region in the binary image based on a shape or a size of the connective region; and
    extracting the information from the filtered binary image.
  8. The method of claim 1, further comprising:
    storing the information in association with the first image (140) .
  9. The method of claim 8, further comprising:
    determining an index of a sub-image corresponding to the label (222) , the index indicating a position of a slot (210) holding the slide (220) on the tray (116) ; and
    wherein storing the information further comprises:
    storing the information in association with the index.
  10. The method of claim 8, wherein storing the information further comprises:
    storing the information in association with the sub-image corresponding to the slide (220) .
  11. The method of claim 1, wherein the information comprises at least one of:
    data encoded by a graphically encoded representation on the label (222) ,
    at least one character on the label (222) , or
    at least one symbol on the label (222) .
  12. The method of claim 1, further comprising:
presenting the first image (140) and a visual element indicating a status of one of the plurality of sub-images, the status being selected from a group consisting of:
    a first status indicating that a label (222) is contained in a corresponding sub-image and the information indicated by the label (222) is successfully extracted,
    a second status indicating that no label (222) is contained in a corresponding sub-image, and
    a third status indicating that a label (222) is contained in a corresponding sub-image but the information indicated by the label (222) fails to be extracted.
  13. The method of claim 1, wherein the information indicates an identification of the slide (220) , the method further comprising:
    receiving an input indicating a target identification of a slide (220) ;
    obtaining a target image of a tray (116) based on the target identification; and
    presenting the target image as a response to the input.
  14. An electronic device, comprising:
    one or more processors;
    one or more memories coupled to the one or more processors and having computer-executable instructions stored thereon, the computer-executable instructions, when executed by the one or more processors, causing the device to perform acts comprising:
    obtaining a first image (140) of a tray (116) comprising a plurality of slots (210) , a slot (210) being capable of holding a slide (220) , the slide (220) comprising a label (222) and a specimen (224) for pathologic analysis;
    determining a plurality of sub-images from the first image (140) based on a structure of the tray (116) , each of the plurality of sub-images corresponding to one of the plurality of slots (210) ; and
    extracting information from the label (222) of the slide (220) based on at least one of the sub-images.
  15. The device of claim 14, wherein the first image (140) of the tray (116) is obtained by an image capturing device, the image capturing device comprising a camera (114) , a light source (112) and a housing (118) for enclosing the camera (114) and the light source (112) , and wherein the tray (116) is placed in the housing (118) for image capturing.
  16. The device of claim 14, wherein determining the plurality of sub-images comprises:
    obtaining a second image (510) by enhancing the first image (140) ; and
    determining the plurality of sub-images (520) from the second image (510) based on the structure of the tray (116) .
  17. The device of claim 16, wherein determining the plurality of sub-images (520) from the second image (510) comprises:
    determining a template corresponding to the structure of the tray (116) , the template indicating a layout of the plurality of slots (210) ; and
    dividing the second image (510) into the plurality of sub-images (520) according to the determined template.
  18. The device of claim 14, wherein extracting information from the label (222) of the slide (220) comprises:
    detecting a region associated with the label (222) in the at least one of the sub-images; and
    extracting the information based on the region associated with the label (222) .
  19. The device of claim 14, wherein extracting information from the label (222) of the slide (220) comprises:
    dividing a sub-image into a plurality of regions, at least one of the plurality of regions having a size corresponding to the label (222) ; and
    extracting the information from the at least one of the plurality of regions.
  20. The device of claim 19, wherein the label (222) comprises a two-dimensional code, and wherein extracting the information comprises:
    for each of the plurality of regions:
    extracting the information encoded in the two-dimensional code from the region of the label (222) ;
    in accordance with a failure of extracting the information,
    obtaining data of a lightness channel corresponding to the region;
    generating a binary image based on the data of the lightness channel;
    filtering out a connective region in the binary image based on a shape or a size of the connective region; and
    extracting the information from the filtered binary image.
  21. The device of claim 14, the acts further comprising:
    storing the information in association with the first image (140) .
  22. The device of claim 21, wherein the acts further comprise:
    determining an index of a sub-image corresponding to the label (222) , the index indicating a position of a slot (210) holding the slide (220) on the tray (116) ; and
    wherein storing the information further comprises:
    storing the information in association with the index.
  23. The device of claim 21, wherein storing the information further comprises:
    storing the information in association with the sub-image corresponding to the slide (220) .
  24. The device of claim 14, wherein the information comprises at least one of:
    data encoded by a graphically encoded representation on the label (222) ,
    at least one character on the label (222) , or
    at least one symbol on the label (222) .
  25. The device of claim 14, wherein the acts further comprise:
presenting the first image (140) and a visual element indicating a status of one of the plurality of sub-images, the status being selected from a group consisting of:
    a first status indicating that a label (222) is contained in a corresponding sub-image and the information indicated by the label (222) is successfully extracted,
    a second status indicating that no label (222) is contained in a corresponding sub-image, and
    a third status indicating that a label (222) is contained in a corresponding sub-image but the information indicated by the label (222) fails to be extracted.
  26. The device of claim 14, wherein the information indicates an identification of the slide (220) , and wherein the acts further comprise:
    receiving an input indicating a target identification of a slide (220) ;
    obtaining a target image of a tray (116) based on the target identification; and
    presenting the target image as a response to the input.
  27. A computer readable storage medium having computer-executable instructions stored thereon, the computer-executable instructions, when executed by a processor of an apparatus, causing the apparatus to perform the steps of any one of the method according to claims 1-13.
  28. A computer program product comprising computer-executable instructions which, when executed by a processor of an apparatus, cause the apparatus to perform the steps of any one of the method according to claims 1-13.
  29. A scanning device (110) , comprising:
    a housing (118) with an opening for loading a tray (116) with a plurality of slots (210) , the tray (116) being capable of holding at least one slide (220) , a slide (220) comprising a label (222) and a specimen (224) for pathologic analysis; and
    a camera (114) configured to capture an image of the tray (116) for extracting information from the label (222) ,
    a processor configured to perform acts comprising, in response to the tray (116) being loaded into the scanning device,
    obtaining the image of the tray (116) ;
    determining a plurality of sub-images from the image of the tray (116) based on a structure of the tray (116) , each of the plurality of sub-images corresponding to one of the plurality of slots (210) ; and
    extracting information from the label (222) of the slide (220) based on at least one of the sub-images.
  30. The device of claim 29, further comprising a light source (112) with a configurable illuminative parameter.
PCT/CN2021/135516 2020-12-04 2021-12-03 Method, device, and computer program product for slides processing WO2022117094A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202180092893.5A CN116829958A (en) 2020-12-04 2021-12-03 Methods, apparatus and computer program products for slide processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2020134048 2020-12-04
CNPCT/CN2020/134048 2020-12-04

Publications (1)

Publication Number Publication Date
WO2022117094A1 true WO2022117094A1 (en) 2022-06-09

Family

ID=79282985

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/135516 WO2022117094A1 (en) 2020-12-04 2021-12-03 Method, device, and computer program product for slides processing

Country Status (2)

Country Link
CN (1) CN116829958A (en)
WO (1) WO2022117094A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2966493A1 (en) 2014-07-09 2016-01-13 Olympus Corporation Specimen observation device
CN106973258A (en) * 2017-02-08 2017-07-21 上海交通大学 Pathological section information quick obtaining device
CN110556172A (en) * 2019-09-18 2019-12-10 杭州智团信息技术有限公司 Slide filing method, device terminal, slide filing system, and readable storage medium
US20200124631A1 (en) * 2018-10-19 2020-04-23 Diagnostic Instruments, Inc. Barcode scanning of bulk sample containers
US20200200531A1 (en) * 2018-12-20 2020-06-25 Carl Zeiss Microscopy Gmbh Distance determination of a sample plane in a microscope system


Also Published As

Publication number Publication date
CN116829958A (en) 2023-09-29


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21839316

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 202180092893.5

Country of ref document: CN

122 Ep: pct application non-entry in european phase

Ref document number: 21839316

Country of ref document: EP

Kind code of ref document: A1