WO2017145172A1 - System and method for extraction and analysis of samples under a microscope - Google Patents

System and method for extraction and analysis of samples under a microscope

Info

Publication number
WO2017145172A1
Authority
WO
WIPO (PCT)
Prior art keywords
cells
structures
computing device
image
smart computing
Prior art date
Application number
PCT/IN2016/000239
Other languages
English (en)
Inventor
Kumar Pandey ROHIT
Anand APURV
Cheluvaraju BHARATH
Rai Dastidar TATHAGATO
Original Assignee
Sigtuple Technologies Private Limited
Priority date
Filing date
Publication date
Application filed by Sigtuple Technologies Private Limited filed Critical Sigtuple Technologies Private Limited
Publication of WO2017145172A1 publication Critical patent/WO2017145172A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/69 Microscopic objects, e.g. biological cells or cellular parts

Definitions

  • the embodiments herein are generally related to classification of different types of cells and structures.
  • the embodiments herein are particularly related to a system and method for an image or video acquisition and classification of cells and structures via a mobile application.
  • the embodiments herein are more particularly related to a system and method for extraction and analysis of samples under a microscope.
  • the classification of cells plays an important role in diagnosing diseases.
  • a classification of different cells in a sample is performed by a visual microscopic examination.
  • the visual microscopic examination is performed for a quantitative and qualitative analysis of blood samples for diagnosing several diseases.
  • the visual microscopic examination performed manually is tedious, time-consuming and susceptible to human error.
  • the primary object of the embodiments herein is to provide a method and system for an automatic classification of several types of cells and structures using microscopic images captured with an application installed on a smart computing device.
  • Another object of the embodiments herein is to provide a system and method for extraction and analysis of samples under a microscope.
  • Yet another object of the embodiments herein is to provide a method and system for an automated image or video acquisition, image preprocessing, image identification process and extraction of features and classification of cells and structures with a smart computing device.
  • Yet another object of the embodiments herein is to provide a system and method to capture image or video manually using an application installed on a smart computing device.
  • Yet another object of the embodiments herein is to provide a system and method to capture the image or video of a slide kept under a microscope automatically by controlling the movement of the stage with a robot using an application installed on a smart computing device.
  • Yet another object of the embodiments herein is to provide a system and method for executing image pre-processing operation including normalization of multiple parameters of the image using an application installed on a smart computing device.
  • Yet another object of the embodiments herein is to provide a system and method for extraction of multiple patches of interest containing different types of cells with an application installed on a smart computing device, thereby ensuring that all the true positives are identified.
  • Yet another object of the embodiments herein is to provide a system and method for the classification and subsequent identification of different types of cells and structures using a smart computing device or a server using machine-learning techniques.
  • Yet another object of the embodiments herein is to provide a system and method for generating reports based on the cell classification results at a high performance server.
  • the various embodiments herein provide a system and method for an extraction and analysis of a plurality of types of cells and structures using microscopic images captured with an application installed on a smart computing device.
  • the smart computing device includes, but is not limited to, smart phones and tablet devices.
  • the system performs the steps of image acquisition, image processing, extraction and classification of cells and structures on the smart computing device and report generation on a server.
  • the image of a specimen kept on a slide under the microscope is captured by the application installed in the smart computing device.
  • the captured image is preprocessed by normalizing the plurality of parameters of the captured image. Further, the patches of the plurality of the cells and structures in the captured image are identified and the plurality of features and attributes of the cells and structures in the captured image are extracted.
  • the plurality of classes of cells is classified based on the features and attributes of the cells in the extracted patches of the image.
  • the classification is performed by using a plurality of pre-trained machine learning models on one of the smart computing devices and the server. Further, the report having the diagnosis information is generated based on the results obtained from the classification module.
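The patent does not specify the model family used by the pre-trained machine learning models; as a hedged illustration of how an extracted feature vector could be mapped to a predefined class, here is a minimal nearest-centroid rule (the feature values and class names are invented for the example):

```python
import math

# Illustrative stand-in for the patent's unspecified pre-trained models.
def classify_cell(features, centroids):
    """Return the predefined class whose centroid lies closest
    (Euclidean distance) to the extracted feature vector."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda cls: distance(features, centroids[cls]))
```

For example, with hypothetical centroids `{"lymphocyte": [10.0, 1.0], "neutrophil": [30.0, 3.0]}`, the feature vector `[28.0, 2.5]` is assigned to `"neutrophil"`. Any realistic deployment would replace this with the trained models the patent describes.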
  • the smart computing device further comprises an application for capturing the plurality of images to digitize the sample observed through the microscope.
  • a system for extraction and analysis of cells and structures in a sample comprises a smart computing device and a server.
  • the smart computing device is configured to extract features and attributes of the cells and the structures of interest in the sample observed through a microscope.
  • the smart computing device is configured to extract the features and the attributes from a plurality of images of samples captured and processed using the smart computing device.
  • the server is configured to analyze the features and the attributes of the cells and the structures of interest extracted for generating reports, wherein the analysis of features and the attributes of the cells and the structures of interest is performed by executing pre-trained machine learning models for classifying the cells and the structures into a plurality of predefined classes.
  • the smart computing device further comprises an image acquisition module, an image processing module, an optional extraction module and an optional classification module.
  • the image acquisition module runs on a processor in the smart computing device and is configured to capture the plurality of images or videos of the sample observed through a microscope. The plurality of images are captured using an in-built camera in the smart computing device.
  • the image-processing module runs on the processor in the smart computing device and is configured to process the plurality of captured images or videos by performing normalization and image quality assessment.
  • the optional extraction module runs on the processor in the smart computing device and is configured to extract the features and the attributes of cells and structures of interest in the sample. The extraction is performed by executing an extraction logic based on the type of the cells and the structures of interest.
  • the optional classification module is run on the processor in the smart computing device and is configured to classify the plurality of the cells and the structures into pre-defined classes.
  • the smart computing device is selected from a group consisting of smart phones and tablet devices.
  • the smart computing device further comprises an application installed for activating the image acquisition module, the image processing module, the optional extraction module and the optional classification module.
  • the image-processing module is configured to perform normalization and image quality assessment of the captured images by standardizing a plurality of parameters of the camera for ensuring the same quality of consecutive images captured by the camera.
  • the plurality of parameters includes but is not limited to auto-focus setting, ISO setting, exposure time, lens aperture, auto white balance settings and colour temperature settings.
  • a plurality of image characteristics of the captured images is adjusted to be in a permissible range for ensuring a desired quality of the plurality of captured images.
  • the plurality of image characteristics includes but is not limited to blur, sharpness and focus of image, density of cells and structures of interest visible in the captured field of view, spacing between the cells and structures of interest in the captured field of view, brightness and contrast of image, colour profile and tone of image.
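As a sketch of such a quality gate (the threshold values below are assumptions, not taken from the patent), a captured field of view could be accepted only when each measured characteristic falls within its permissible range:

```python
def within_permissible_range(stats,
                             brightness_range=(40, 220),
                             min_contrast=20.0,
                             min_focus=50.0):
    """Check basic quality metrics of a captured field of view.
    All thresholds are illustrative placeholders."""
    checks = {
        "brightness": brightness_range[0] <= stats["mean_intensity"] <= brightness_range[1],
        "contrast": stats["intensity_std"] >= min_contrast,
        "focus": stats["blur_score"] >= min_focus,
    }
    # Return an overall verdict plus per-characteristic results, so the
    # application can tell the user why a capture was rejected.
    return all(checks.values()), checks
```

A rejected capture can then be retaken before extraction proceeds.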
  • a plurality of Digital Image Processing (DIP) techniques is applied for normalizing the color scheme and contrast of the captured image.
  • the plurality of DIP techniques includes but is not limited to histogram equalization, blur detection, and similarity detection techniques.
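Two of the named DIP techniques can be sketched in plain Python over 8-bit grayscale pixel values (a production pipeline would more likely use an image-processing library):

```python
def equalize_histogram(pixels):
    """Histogram equalization for a flat list of 8-bit grayscale pixels:
    remaps intensities so their cumulative distribution is roughly uniform."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    cdf, running = [0] * 256, 0
    for i, count in enumerate(hist):
        running += count
        cdf[i] = running
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    if n == cdf_min:           # constant image: nothing to equalize
        return list(pixels)
    lut = [round((c - cdf_min) * 255 / (n - cdf_min)) for c in cdf]
    return [lut[p] for p in pixels]

def blur_score(img):
    """Variance of a 4-neighbour Laplacian over a 2-D pixel grid;
    low scores suggest a blurred field of view."""
    h, w = len(img), len(img[0])
    vals = [img[y - 1][x] + img[y + 1][x] + img[y][x - 1] + img[y][x + 1]
            - 4 * img[y][x]
            for y in range(1, h - 1) for x in range(1, w - 1)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)
```

A sharp, high-detail field scores well above a flat (blurred or empty) one, which is the signal a capture application can use to reject out-of-focus frames.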
  • the features and the attributes extracted by the smart computing device include but are not limited to a density of cells in the image, size and areas of the image under a plurality of cell types, and color attributes of the plurality of types of patches of interest.
  • the smart computing device is configured to classify the cells and the structures based on the type of the cells and structures of interest.
  • the smart computing device is configured to upload the extracted features and attributes of the cells and the structures, extracted patches of cells and structures and the classification of the cells and the structures to the server.
  • the server further comprises an Application Programming interface (API) Gateway for exposing APIs to receive the uploads from the smart computing device.
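A hedged sketch of the device-to-server upload body follows; the patent only states that the API gateway exposes APIs to receive the uploads, so every field name here is hypothetical:

```python
import json

def build_upload_payload(device_id, features, patches, classification=None):
    """Assemble the JSON body a smart computing device could POST to the
    server's API gateway. All field names are illustrative."""
    return json.dumps({
        "device_id": device_id,
        "features": features,             # extracted features and attributes
        "patches": patches,               # metadata for extracted image patches
        "classification": classification  # None when classification is deferred to the server
    })
```

Leaving `classification` as `None` corresponds to the case where the extracted data is sent directly to the server and the server performs the complete classification.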
  • the server further comprises a classification module, an extraction module and a report generation module.
  • the classification module is run on a hardware processor in a computer system and is configured to classify the extracted cells and structures into predefined classes using an artificial intelligence platform.
  • the artificial intelligence platform is configured to analyze the images in real time and batch mode using a list of procedures.
  • the report generation module is run on a hardware processor in a computer system, and is configured to generate the report based on the analysis during classification of extracted features, and attributes using a custom logic.
  • the server is configured to publish the generated report on a webpage or a user interface of the smart computing device using APIs in the API gateway.
  • the report generated by the server comprises a plurality of sections including but not limited to metrics section, charts and graphs, visual section and suggestions.
  • both the optional classification module provided in the smart computing device and a classification module provided in the server are trained in a training phase with machine learning models.
  • the training of the optional classification module and the classification module mentioned herein does not take place during the process described in the course of the present invention; the optional classification module and the classification module are pre-trained using artificial intelligence models.
  • the entire classification is performed in the optional classification module of the smart computing device after the extraction of features and attributes from the image by the optional extraction module; or otherwise, the entire classification is performed with the classification module in the server, when the extracted data is sent directly to the server.
  • when the entire classification is performed in the optional classification module in the smart computing device, the classified data is sent to the server for collating the same to generate the reports using the report generation module.
  • the server is configured to perform the complete classification using the classification module along with report generation for cases when the extracted data is directly sent to the server from the optional extraction module without any classification taking place in the optional classification module.
  • the classification of cells is partially performed in the optional classification module and the result of the partly completed classification is sent to the server for further classification by the classification module.
  • an extraction module is provided in the server. The extraction is performed after performing an image processing operation with the image processing module.
  • the classification is carried out either with the optional classification module in the smart computing device or with the classification module in the server or the classification is partially performed with the optional classification module and the remaining classification process is performed in the classification module, depending on the case.
  • when the extraction is carried out directly in the server, the complete classification is also done in the server using the classification module.
  • the report is generated after the completion of the classification of the extracted data.
  • the extraction process is partially performed in the smart computing device and the remaining extraction operation is performed in the server.
  • a method for extraction and analysis of cells and structures in a sample comprising capturing a plurality of images of the sample observed through a microscope using an application installed in a smart computing device.
  • the plurality of captured images are processed by performing normalization and image quality assessment using the application.
  • the features and attributes of cells and structures of interest are extracted in the sample and image patches containing extracted cells and structures of interest using the application in the smart computing device.
  • the extraction process is performed by executing an extraction logic based on the type of the cells and the structures of interest.
  • the extracted cells and structures are analyzed to identify and classify the cells and structures into pre-defined classes by running a hierarchy of artificial intelligence models in an artificial intelligence platform in a server.
  • the statistical parameters are calculated for suggesting abnormal conditions in the sample based on the output of classification and the extracted features and attributes of the cells and structures.
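One plausible form of such statistical parameters is per-class percentages with out-of-range classes flagged as suggested abnormalities; the reference ranges below are illustrative placeholders, since the patent gives no concrete values:

```python
from collections import Counter

def class_statistics(predicted_labels, reference_ranges):
    """Compute per-class percentages of classified cells and flag classes
    whose percentage falls outside its (illustrative) reference range."""
    counts = Counter(predicted_labels)
    total = sum(counts.values())
    stats, flagged = {}, []
    for cls, (low, high) in reference_ranges.items():
        pct = 100.0 * counts.get(cls, 0) / total
        stats[cls] = round(pct, 1)
        if not low <= pct <= high:
            flagged.append(cls)   # candidate abnormal condition for the report
    return stats, flagged
```

The resulting statistics and flags are the kind of material the report generation module would collate into the metrics and suggestions sections.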
  • the report is generated by collating the statistical parameters and suggested abnormal conditions using a custom logic in the server.
  • the report generated by the server comprises a plurality of sections including but not limited to metrics section, charts and graphs, visual section and suggestions.
  • the method further comprises reviewing the generated report on a webpage or a user interface of the smart computing device.
  • the method further comprises uploading the extracted features and attributes of cells and structures of interest in the sample and the image patches containing extracted cells and structures of interest from the smart computing device to the server.
  • FIG. 1A illustrates block diagram of a system for extraction and analysis of samples under a microscope, according to one embodiment herein.
  • FIG. 1B illustrates a hardware block diagram of a system for extraction and analysis of samples under a microscope, according to one embodiment herein.
  • FIG. 2 illustrates a flowchart explaining a method for an automatic classification of the types of cells and structures using an application installed on a smart computing device, in accordance with one embodiment herein.
  • FIG. 3 illustrates a flowchart explaining a method for extraction and analysis of samples under a microscope, in accordance with one embodiment herein.
  • the various embodiments herein provide a system and method for an extraction and analysis of a plurality of types of cells and structures using microscopic images captured with an application installed on a smart computing device.
  • the smart computing device includes but is not limited to smart phones and tablet devices.
  • the system performs the steps of image acquisition, image processing, extraction and classification of cells and structures on the smart computing device and report generation on a server.
  • the image or a video of a specimen kept on a slide under the microscope is captured by the application installed in the smart computing device.
  • the captured image or a video is preprocessed by normalizing the plurality of parameters of the captured image or video.
  • the patches of the plurality of the cells and structures in the captured image or a video are identified and the plurality of features and attributes of the cells and structures in the captured image are extracted.
  • the plurality of classes of cells is classified based on the features and attributes of the cells in the extracted patches of the image.
  • the classification is performed by using a plurality of pre-trained machine learning models on one of the smart computing devices and the server. Further, the report having the diagnosis information is generated based on the results obtained from the classification module.
  • a system for extraction and analysis of cells and structures in a sample comprises a smart computing device and a server.
  • the smart computing device is configured to extract features and attributes of the cells and the structures of interest in the sample observed through a microscope.
  • the smart computing device is configured to extract the features and the attributes from a plurality of images or videos of samples captured and processed using the smart computing device.
  • the server is configured to analyze the features and the attributes of the cells and the structures of interest extracted for generating reports, wherein the analysis of features and the attributes of the cells and the structures of interest is performed by executing pre-trained machine learning models for classifying the cells and the structures into a plurality of predefined classes.
  • the smart computing device further comprises an application for capturing the plurality of images or videos to digitize the sample observed through the microscope.
  • the smart computing device further comprises an image acquisition module, an image processing module, an optional extraction module and an optional classification module.
  • the image acquisition module runs on a processor in the smart computing device and is configured to capture the plurality of images or videos of the sample observed through a microscope. The plurality of images or videos are captured using an in-built camera in the smart computing device.
  • the image-processing module runs on the processor in the smart computing device and is configured to process the plurality of captured images or videos by performing normalization and image quality assessment.
  • the optional extraction module runs on the processor in the smart computing device and is configured to extract the features and the attributes of cells and structures of interest in the sample. The extraction is performed by executing an extraction logic based on the type of the cells and the structures of interest.
  • the optional classification module is run on the processor in the smart computing device and is configured to classify the plurality of the cells and the structures into pre-defined classes.
  • the features and the attributes extracted by the smart computing device include but are not limited to a density of cells in the image, size and areas of the image under a plurality of cell types and color attributes of the plurality of types of patches of interest.
  • the smart computing device is selected from a group consisting of smart phones and tablet devices.
  • the smart computing device further comprises an application installed for activating the image acquisition module, the image processing module, the optional extraction module and the optional classification module.
  • the smart computing device is configured to classify the cells and the structures based on the type of the cells and structures of interest.
  • the image-processing module is configured to perform normalization and image quality assessment of the captured images and videos by standardizing a plurality of parameters of the camera for ensuring the same quality of consecutive images captured by the camera.
  • the plurality of parameters include but are not limited to auto-focus setting, ISO setting, exposure time, lens aperture, auto white balance settings and colour temperature settings.
  • a plurality of image characteristics of the captured images or videos is adjusted to be in a permissible range for ensuring a desired quality of the plurality of captured images.
  • the plurality of image characteristics includes but is not limited to blur, sharpness and focus of image, density of cells and structures of interest visible in the captured field of view, spacing between the cells and structures of interest in the captured field of view, brightness and contrast of image, color profile and tone of image.
  • a plurality of Digital Image Processing (DIP) techniques is applied for normalizing the color scheme and contrast of the captured image.
  • the plurality of DIP techniques includes but is not limited to histogram equalization, blur detection, and similarity detection techniques.
  • the smart computing device is configured to upload the extracted features and attributes of the cells and the structures, extracted patches of cells and structures and the classification of the cells and the structures to the server.
  • the server further comprises an Application Programming interface (API) Gateway for exposing APIs to receive the uploads from the smart computing device.
  • the server further comprises of an extraction module, a classification module, and a report generation module.
  • the classification module is run on a hardware processor in a computer system and is configured to classify the extracted cells and structures into predefined classes using an artificial intelligence platform.
  • the artificial intelligence platform is configured to analyze images in real time or batch mode using a list of procedures.
  • the report generation module is run on a hardware processor in a computer system, and is configured to generate the report based on the analysis during classification of extracted features, and attributes using a custom logic.
  • the server is configured to publish the generated report on a webpage or a user interface of the smart computing device using APIs in the API gateway.
  • the report generated by the server comprises a plurality of sections including but not limited to metrics section, charts and graphs, visual section and suggestions.
  • both the optional classification module provided in the smart computing device and a classification module provided in the server are trained in a training phase with machine learning models.
  • the training of the optional classification module and the classification module mentioned herein does not take place during the process described in the course of the present invention; the optional classification module and the classification module are pre-trained using artificial intelligence models.
  • the entire classification is performed in the optional classification module of the smart computing device after the extraction of features and attributes from the image by the optional extraction module; otherwise, the entire classification is performed with the classification module in the server, when the extracted data is sent directly to the server.
  • when the entire classification is performed in the optional classification module in the smart computing device, the classified data is sent to the server for collating all the data to generate the reports using the report generation module.
  • the server is configured to perform the complete classification using the classification module along with report generation for cases when the extracted data is directly sent to the server from the optional extraction module without any classification taking place in the optional classification module.
  • the classification of cells is partially performed in the optional classification module and the result of the partly completed classification is sent to the server for further classification by the classification module.
  • an extraction module is provided in the server.
  • the extraction is performed after performing an image processing operation with the image processing module in the smart computing device or the server.
  • the classification is carried out either with the optional classification module in the smart computing device or with the classification module in the server or the classification is partially performed with the optional classification module and the remaining classification process is performed in the classification module, depending on the case.
  • when the extraction is carried out directly in the server, the complete classification is also done in the server using the classification module.
  • the report is generated after the completion of the classification of the extracted data.
  • the extraction process is partially performed in the smart computing device and the remaining extraction operation is performed in the server.
  • a method for extraction and analysis of cells and structures in a sample comprising capturing a plurality of images or videos of the sample observed through a microscope using an application installed in a smart computing device.
  • the plurality of captured images or videos are processed by performing normalization and image quality assessment using the application.
  • the features and attributes of cells and structures of interest are extracted in the sample and image patches containing extracted cells and structures of interest using the application in the smart computing device.
  • the extraction process is performed by executing an extraction logic based on the type of the cells and the structures of interest.
  • the extracted cells and structures are analyzed to identify and classify the cells and structures into pre-defined classes by running a hierarchy of artificial intelligence models in an artificial intelligence platform in a server.
  • the statistical parameters are calculated for suggesting abnormal conditions in the sample based on the output of classification and the extracted features and attributes of the cells and structures.
  • the report is generated by collating the statistical parameters and suggested abnormal conditions using a custom logic in the server.
  • the report generated by the server comprises a plurality of sections including but not limited to metrics section, charts and graphs, visual section and suggestions.
  • the method further comprises reviewing the generated report on a webpage or a user interface of the smart computing device.
  • the method further comprises uploading the extracted features and attributes of cells and structures of interest in the sample and the image patches containing extracted cells and structures of interest from the smart computing device to the server.
  • a system for an automatic classification of the plurality of the types of cells and structures using images or videos captured by the microscope with an application installed on a smart computing device.
  • the system comprises an image acquisition module, an image pre-processing module, an optional extraction module, an optional classification module, an extraction module, a classification module and a report generation module.
  • the image acquisition module is coupled to a camera in-built on the smart computing device.
  • the application installed on the smart computing device is run and configured to initiate the analysis. On activating the application, an image capturing mode is selected.
  • the application is run and configured to capture the image or video through the in-built camera.
  • the image pre-processing module is configured to normalize the captured image or video by standardizing the camera parameters and performing various digital signal-processing techniques on the captured image.
  • the image processing is performed on the smart computing device.
  • the optional extraction module installed in the smart computing device and the extraction module of the server are configured to initially identify the patches containing the several types of cells and structures in the captured image. Further, the optional extraction module and the extraction module are configured to extract the features and attributes of the cells and structures in the captured image.
  • the optional classification module and the classification module are configured to employ a plurality of pre-trained machine learning models to classify the cells and structures based on the extracted features and attributes. Further, when the extracted features of the cells, the extracted patches and the results obtained during classification using the optional classification module are uploaded from the smart computing device to the server, the report generation module of the server generates analytical reports based on the data uploaded by the application installed in the smart computing device.
  • a method for an automatic classification of the types of cells and structures using the enhanced images or videos captured using a microscope with the application installed on a smart computing device.
  • the method involves activating the application installed in the smart computing device by a user.
  • the application communicatively coupled to the smart computing device with a built-in camera is activated to select an image capture mode in the smart computing device.
  • the image capture mode is activated to capture the image or video of a specimen kept on a slide under the microscope.
  • the application is configured to initiate the preprocessing of the captured image.
  • the preprocessing performed by the application includes standardizing a plurality of parameters of the camera and processing the image or video using digital image processing techniques.
  • the patches of the plurality of the cells are identified from the image or video to extract the features and attributes of the cells.
  • the extracted features and attributes are utilized for classifying the cells into a plurality of types using a plurality of pre-trained machine learning models. Further, a report is generated based on the extracted features, attributes and classification of the cells.
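The steps above can be sketched as a pipeline in which each stage is pluggable, reflecting the patent's flexibility over whether extraction and classification run on the device or the server; the stage interfaces below are invented for the example:

```python
def analyze_sample(capture, preprocess, extract, classify, generate_report):
    """Run the capture -> preprocess -> extract -> classify -> report
    sequence. Each stage is injected as a callable so it may execute
    either on the smart computing device or on the server."""
    image = capture()                  # image acquisition module
    image = preprocess(image)          # normalization and quality assessment
    patches, feature_vectors = extract(image)
    labels = [classify(f) for f in feature_vectors]
    return generate_report(feature_vectors, labels)
```

With stub stages, the same orchestration works whether `classify` calls an on-device model or an API exposed by the server's classification module.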
  • FIG. 1A illustrates block diagram of a system for extraction and analysis of samples under a microscope, according to one embodiment herein.
  • FIG. 1B illustrates a hardware block diagram of a system for extraction and analysis of samples under a microscope, according to one embodiment herein.
  • the system comprises a smart computing device 102 and a server 112.
  • the smart computing device captures and digitizes the sample observed through the microscope.
  • the examples of the smart computing device include but are not limited to smart phones and tablet devices.
  • the smart computing device 102 comprises a camera 116, an application 118, a processor 120, a storage memory 122 and an operating system 124.
  • the smart computing device 102 is capable of communicating using short range communication protocols such as Bluetooth.
  • the application 118 is installed in the smart computing device 102.
  • the application 118 enables the digitization of samples observed under a microscope.
  • the processor 120 is configured to execute the steps of extraction and classification.
  • the smart computing device 102 comprises a storage device 122 configured to store instructions, algorithms and software models to be executed by the processor 120.
  • the examples of the operating system 124 include but are not limited to Google Android, Apple iOS etc.
  • the smart computing device 102 comprises an image acquisition module 104, an image processing module 106, an optional extraction module 108a and an optional classification module 110a.
  • the image acquisition module 104 is coupled to the smart computing device 102, which is provided with an inbuilt camera 116.
  • the smart computing device 102 is attached to the eyepiece of the microscope.
  • the smart computing device is attached to the microscope using a smart computing device holder.
  • the smart computing device holder comprises a receptacle capable of holding the smart computing device 102.
  • the smart computing device holder enables the user to align the camera 116 and the eyepiece of the microscope.
  • the receptacle on the smart computing device holder aligns the center of the camera 116 and the center of the eyepiece automatically.
  • the smart computing device holder further enables a user to position the camera 116 at a proper distance away from the eyepiece of the microscope. In order to achieve proper distance, the receptacle on the smart computing device holder is moved forward and backward along a rail running through the smart computing device holder.
  • the user is enabled to activate the application 118 on the smart computing device 102.
  • the application 118 is run on the smart computing device and configured to select or activate an image capture mode.
  • the camera 116 is adjusted to focus on the image of a sample kept on a slide under the microscope.
  • the captured image is displayed as a split screen image on the smart computing device 102.
  • the split screen view comprises a full field view and an enlarged view.
  • the user provides commands based on the full field view through the application 118 to a robot to adjust and move the slide to a particular position.
  • the robot retrofitted to the microscope is configured to adjust a movement of the slide along the X, Y and Z axis, thereby moving the slide to a desired position.
  • the robot receives the commands from the application 118 using a short range communication protocol, wherein said short range communication protocol can be Bluetooth. Further, the voice or gesture commands are provided based on the enlarged view through the application to capture the images or videos.
  • the captured image or video is further processed by the image processing module 106 in the smart computing device 102.
  • the video is sampled into a set of images based on the "frames-per-second" captured by the camera.
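The frame-sampling step described in the bullet above can be illustrated with a short sketch. This is an illustration only: the function name and the default of two samples per second are assumptions, not details disclosed in the specification.

```python
def sample_frame_indices(total_frames, fps, samples_per_second=2):
    """Pick evenly spaced frame indices from a captured video, based on
    the "frames-per-second" reported by the camera."""
    step = max(1, int(round(fps / samples_per_second)))
    return list(range(0, total_frames, step))

# A 10-second clip at 30 fps, sampled twice per second, yields 20 frames.
indices = sample_frame_indices(total_frames=300, fps=30)
```

Each selected index would then be decoded into a still image and passed to the image processing module.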
  • the image processing module 106 is controlled by the application 118 in the smart computing device 102.
  • the image processing module 106 is configured to run on the processor 120.
  • the image processing module 106 is configured to normalize the captured image in order to standardize the image quality and color scheme. The normalization is performed to ensure that the quality of consecutive images is independent of changes in the lighting conditions, slide color scheme, camera settings of the smart computing device camera, etc.
  • the image processing module 106 is configured for performing normalization and image quality assessment based on a plurality of characteristics of the image.
  • the plurality of characteristics of the image includes blur/sharpness/focus of image, density of cells and structures of interest visible in the captured field of view, spacing between the cells and structures of interest in the captured field of view, brightness and contrast of image, color profile and tone of image.
  • the plurality of characteristics is adjusted to be within a permissible range to ensure that the captured image is of desired quality for analysis.
  • the permissible range for the plurality of characteristics depends on the types of cells and structures identified.
  • the application 118 of the smart computing device 102 is configured to download the permissible range from the server 112 periodically, thereby ensuring the quality of the captured images.
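A minimal sketch of the quality check against permissible ranges might look as follows. The characteristic names and the numeric ranges shown are hypothetical placeholders for the values the application would download from the server; they are not the ranges used by the actual system.

```python
# Hypothetical permissible ranges, standing in for server-provided values.
PERMISSIBLE = {
    "sharpness": (120.0, float("inf")),  # e.g. variance-of-Laplacian score
    "brightness": (60, 200),             # mean pixel intensity
    "cell_density": (0.05, 0.60),        # fraction of field occupied by cells
}

def passes_quality_check(measurements, ranges=PERMISSIBLE):
    """Return True only if every measured characteristic lies within
    its permissible range; failing images are excluded from analysis."""
    return all(lo <= measurements[name] <= hi
               for name, (lo, hi) in ranges.items())
```

An image whose brightness falls outside the downloaded range would simply be discarded before extraction.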
  • the preprocessing operation of the image is carried out in two steps. Firstly, a plurality of parameters of the camera 116 is standardized.
  • the plurality of parameters include auto-focus setting, ISO setting, exposure time, lens aperture, auto white balance settings and color temperature settings.
  • the predefined setting of the plurality of parameters helps to ensure the same quality for consecutive images captured by the camera 116. Therefore, the user is enabled to capture the images without any additional processing by the camera chipset of the smart computing device 102.
  • the image processing module 106 is configured to apply a plurality of Digital Image Processing (DIP) techniques for normalizing the color scheme and contrast of the captured image.
  • the plurality of DIP techniques for normalization includes but is not limited to histogram equalization, blur detection, and similarity detection techniques. Histogram equalization techniques are employed to normalize the contrast of the image.
  • Blur detection techniques are applied to identify whether the captured image is of desired sharpness and focus. The captured image is discarded from further processing when the desired sharpness and focus is not met.
  • the similarity detection technique is applied to identify similar images and discard duplicate images.
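The three normalization techniques named above (histogram equalization, blur detection via a sharpness score, and similarity detection via a perceptual hash) can be sketched in a few lines of NumPy. This is an illustrative sketch under assumed 8-bit grayscale input, not the disclosed implementation; the helpers compute raw scores and leave thresholding to the caller.

```python
import numpy as np

def equalize_histogram(gray):
    """Contrast normalization via histogram equalization (8-bit grayscale)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    lut = (cdf - cdf.min()) * 255 / max(cdf.max() - cdf.min(), 1)
    return lut.astype(np.uint8)[gray]

def laplacian_variance(gray):
    """Blur score: variance of a discrete Laplacian; low values indicate
    a blurry frame that should be discarded."""
    g = gray.astype(np.float64)
    lap = (-4 * g[1:-1, 1:-1] + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return lap.var()

def average_hash(gray, size=8):
    """Tiny perceptual hash used to flag near-duplicate fields of view."""
    h, w = gray.shape
    small = gray[:h - h % size, :w - w % size]
    small = small.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return (small > small.mean()).ravel()
```

Two captures whose hashes differ in only a few bits would be treated as duplicates and one of them discarded.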
  • the optional extraction module 108a on the smart computing device 102 is configured to identify the patches containing a plurality of types of cells and structures to extract features and areas of interest from the image.
  • the optional extraction module 108a is configured to run on the processor 120 of the smart computing device 102.
  • the extraction module 108b is configured to run logic that depends on the type of cells and structures of interest in the image. For example, the logic used for extracting blood cells from a peripheral blood smear is different from the logic used to extract sperm cells from a semen slide.
  • the optional extraction module 108a and the extraction module 108b are configured to perform a plurality of steps for extracting features and attributes of the cells and structures. The plurality of steps includes identification of patches, and extraction of patches.
  • the step of identification of the patches includes identifying the cells and structures of interest.
  • the identification of the patches is performed by executing custom logic based on the type of cells and structures of interest to be extracted.
  • the optional extraction module 108a and the extraction module 108b are configured to apply a plurality of image processing techniques for identifying the patches to identify the cells and structures of interest present in each normalized image.
  • the identification of the patches is performed so as to include all the true positives, even when some false positives are received. The false positives are discarded in the subsequent processing steps. Further, the features and attributes are extracted from the image for classifying the cells and generating reports.
  • the features and attributes of the image include but are not limited to density of cells in the image, size and areas of the image under a plurality of cell types, color attributes of the plurality of types of patches of interest etc.
  • the extraction of features and attribute reduces the size of the data under consideration.
  • the cells and structures are extracted from the image as patches, wherein the size of the patches is based on the types of features and objects of interest.
  • the optional extraction module and the extraction module are configured to subtract the background, thereby generating an image with only the extracted cells and structures of interest visible.
  • the system is also configured to extract cells and structures on the server 112 based on the size of the captured and normalized images transferred to the server 112 and the complexity of extraction logic. The extraction logic is selected based on types of cells and structures identified.
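The patch-extraction and background-subtraction steps above can be sketched as follows, assuming a binary foreground mask has already been produced by the identification step. The helper names are illustrative, not part of the disclosed system.

```python
import numpy as np

def bounding_box(mask):
    """Tight bounding box (r0, r1, c0, c1) around the foreground pixels."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.flatnonzero(rows)[[0, -1]]
    c0, c1 = np.flatnonzero(cols)[[0, -1]]
    return r0, r1 + 1, c0, c1 + 1

def extract_patch(image, mask, box):
    """Crop a patch around a detected cell and subtract the background,
    so that only the extracted cell or structure of interest is visible."""
    r0, r1, c0, c1 = box
    patch = image[r0:r1, c0:c1].copy()
    patch[~mask[r0:r1, c0:c1]] = 0  # background subtraction
    return patch
```

Each such patch, being far smaller than the full field of view, reduces the size of the data passed to classification or uploaded to the server.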
  • the extracted features and attributes from the image are utilized by the optional classification module 110a in the smart computing device 102 or the classification module in the server for classifying the plurality of types of cells and structures.
  • the optional classification module 110a is run on the smart computing device 102 and is activated depending upon the type of cells or structures to be identified.
  • the cells are classified to identify and label each extracted cell and structure into any one of a plurality of predefined classes determined by the content of the slide under analysis.
  • the optional classification module 110a and the classification module are operated in two phases.
  • the first phase is a training phase, where the optional classification module and the classification module are trained with machine learning models to identify the cells belonging to the plurality of classes from the annotated images of the cells.
  • the machine learning models are trained to understand the typical attributes of each class in order to differentiate between the plurality of cell types.
  • the training of the optional classification module and the classification module does not take place during the process described herein; the optional classification module and the classification module are pre-trained using artificial intelligence models.
  • the optional classification module and the classification module are operated in the second phase after the cells and the attributes of each class of cells have been identified with the required degree of accuracy.
  • the second phase is an execution phase, where the pre-trained machine learning models are employed on a new set of data to accurately identify different types of cells and structures from the plurality of patches extracted during the extraction process.
  • the machine learning model used is a deep learning model.
  • the deep learning models are organized as a decision-tree-based structure with a plurality of nodes. Each node of the decision tree among the plurality of nodes is treated and configured as a deep learning model.
  • the nodes at the top of the decision tree are configured to segregate the data into broad classes.
  • the lower nodes of the decision tree are configured to classify the broad classes into specific classes corresponding to each type of cells and structures to be identified. The classification is performed in a hierarchical manner to facilitate a differential classification.
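One way to realize the decision-tree arrangement of models described above is sketched below. The tree shape, the class labels, and the toy "models" (simple callables over patch attributes) are hypothetical stand-ins for the pre-trained deep learning models; only the hierarchical dispatch structure mirrors the description.

```python
class Node:
    """One node of the classification tree. Each node wraps a (pre-trained)
    model; top nodes segregate broad classes, lower nodes refine them."""

    def __init__(self, model, children=None):
        self.model = model            # callable: patch -> class label
        self.children = children or {}

    def classify(self, patch):
        label = self.model(patch)
        child = self.children.get(label)
        # Descend until a leaf assigns the final, specific class.
        return child.classify(patch) if child else label

# Illustrative two-level tree: broad RBC/WBC split, then WBC subtypes.
wbc_node = Node(lambda p: "neutrophil" if p["lobes"] >= 2 else "lymphocyte")
root = Node(lambda p: "wbc" if p["size"] > 10 else "rbc",
            children={"wbc": wbc_node})
```

Calling `root.classify(patch)` thus performs the differential classification in a hierarchical manner, exactly one model per level.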
  • in one embodiment, the classification process is not performed in the mobile application.
  • a classification module 110b installed on the server 112 is configured to execute the classification process in the server. Further, the extracted features and attributes of the images, extracted patches of cells and structures and the results obtained during classification are uploaded from the smart computing device 102 to the server 112.
  • the server 112 is a high performance server.
  • the server 112 comprises of the extraction module 108b, classification module 110b and a report generation module 114.
  • the classification module 110b in the server 112 is configured to run on a hardware processor 132.
  • the server 112 comprises an Application Programming Interface (API) Gateway 128 and an artificial intelligence platform 130.
  • the API gateway 128 is a distributed cloud cluster.
  • the API Gateway 128 is configured to expose APIs and to upload the captured images, the extracted features and attributes to the server 112. Further, the API Gateway 128 is configured to access the final report after performing classification.
  • the API Gateway 128 is configured to provide APIs for integration with third party lab information system and other computer systems.
  • the artificial intelligence platform 130 is another distributed cloud cluster.
  • the artificial intelligence platform 130 is configured to perform analysis on images in real time and batch mode.
  • the artificial intelligence platform 130 is configured with a list of procedures for performing analysis of the images. The procedures in the list are interdependent. Therefore, the artificial intelligence platform 130 is configured to ensure that a procedure is run only after receiving the outputs of the procedures it depends on.
  • Each procedure involves the steps of running a statistical or machine learned model on the images, creating report constructs and collating output of different procedures for creating a final report.
  • the report constructs include but are not limited to calculating metrics in the report, creating interactive charts of a plurality of parameters, etc.
  • the report generation module 114 is configured to generate charts, graphs and report based on the analysis during classification of extracted features and attributes.
  • the report generation module 114 is run on the hardware processor 132.
  • the report generation module 114 is configured to generate reports based on the analysis performed by the artificial intelligence platform 130.
  • the outputs of the artificial intelligence platform 130 are collated to render the report.
  • the API gateway 128 is configured to provide the API to display the outputs of the artificial intelligence platform 130 to render the report on a webpage or on the user interface of the smart computing device 102.
  • the report is communicated back to the smart computing device 102 through the application to be viewed by the clinician or technician.
  • the report comprises details including a differential count of each cell or structure of interest from the image, histograms and other charts representing key attributes of all cells/structures of interest from the images, and pertinent parameters of the cells and structures of interest derived from the available image attributes and the outputs of the machine learning models, with some form of regression analysis.
  • the report generation module 114 is configured to generate reports by applying a custom logic based on the cells and structures of interest identified and quantified in the analyzed images.
  • the generated report comprises a plurality of sections including but not limited to metrics section, charts and graphs, visual section and suggestions.
  • the metrics section includes the metrics computed during analysis.
  • the metrics includes at least one of direct properties of individual cells and volumetric quantities.
  • An example of direct properties of individual cells includes count or size of each type of cells and structures of interest.
  • an example of volumetric quantities includes concentration per unit volume.
  • the metrics are calculated either directly based on captured images or derived using statistical models on combination of directly calculated metrics.
  • the volumetric quantities are generally derived using statistical models on count and concentration of cells in each image.
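The two kinds of metrics described above can be sketched as small helpers: a differential count computed directly from the classification labels, and a volumetric concentration derived from per-field counts. The function names and the field-volume parameter are illustrative assumptions, not disclosed formulas.

```python
from collections import Counter

def differential_count(labels):
    """Percentage of each cell type among the classified cells
    (a direct property computed from the captured images)."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {cell: 100.0 * n / total for cell, n in counts.items()}

def concentration(cells_counted, fields, field_volume_ul):
    """Volumetric metric derived from per-image counts: cells per
    microlitre, given an assumed imaged volume per field of view."""
    return cells_counted / (fields * field_volume_ul)
```

In practice the volumetric figure would be smoothed by a statistical model over many fields rather than taken from a single ratio.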
  • both the optional classification module 110a provided in the smart computing device 102 and a classification module 110b provided in the server 112 are trained in a training phase with machine learning models.
  • the entire classification is performed in the optional classification module 110a of the smart computing device after the extraction of features and attributes from the image by the optional extraction module 108a; or otherwise, the entire classification is performed with the classification module 110b in the server, when the extracted data is sent directly to the server 112.
  • when the entire classification is performed in the optional classification module 110a in the smart computing device, the classified data is sent to the server for collation and for generating the reports using the report generation module 114.
  • the server 112 is configured to perform the complete classification using the classification module 110b, along with report generation, in cases when the extracted data is directly sent to the server from the optional extraction module 108a without any classification taking place in the optional classification module 110a.
  • the classification of cells is partially performed in the optional classification module 110a and the result of the partly completed classification is sent to the server for further classification by the classification module 110b.
  • an extraction module 108b is provided in the server 112. The extraction is performed after an image processing operation is carried out by the image processing module 106 in the smart computing device 102 or in the server 112.
  • the classification is carried out either with the optional classification module 110a in the smart computing device or with the classification module 110b in the server or the classification is partially performed with the optional classification module 110a and the remaining classification process is performed in the classification module 110b, depending on the case.
  • when the extraction is carried out directly in the server 112, the complete classification is also done in the server 112 using the classification module 110b.
  • the report is generated after the completion of the classification of the extracted data.
  • the extraction process is partially performed in the smart computing device (102) and the remaining extraction operation is performed in the server (112).
  • the plurality of charts and graphs includes a set of interactive charts and graphs based on calculated attributes of cells and structures of interest.
  • the set of interactive charts and graphs includes histograms, line graphs, bar graphs, scatter plots etc.
  • the set of interactive charts and graphs provides an insight on distribution of the cell properties and attributes across the captured images.
  • the visual/monitor section is configured to display a small patch of the captured image containing the cells and structures of interest identified during analysis.
  • the visual section enables the user to view the identified types of cells and structures of interest visually.
  • the cells and structure of interest are identified and grouped into a plurality of types during classification.
  • the visual chart enables the user of the report to correct an incorrectly assigned label to any cell image.
  • the section of suggestions provides suggestions based on holistic analysis of the metrics, charts and classification performed on cells and structures of interest identified during analysis. For example, when malaria parasite is observed during analysis of a blood smear, then a suggestion is provided in the report for suspected malarial infection.
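A configurable-rule mechanism of the kind described above (e.g. flagging suspected malaria) can be sketched as a list of predicate/suggestion pairs. The specific rules, metric names, and thresholds below are hypothetical examples, not the system's actual rule set.

```python
# Hypothetical configurable rules: predicate over report metrics -> suggestion.
RULES = [
    (lambda m: m.get("malaria_parasites", 0) > 0,
     "Suspected malarial infection; recommend confirmatory testing."),
    (lambda m: m.get("rbc_per_ul", 5e6) < 4e6,
     "Low red blood cell concentration; possible anemia."),
]

def suggestions(metrics, rules=RULES):
    """Build the suggestions section of the report from the metrics
    computed during classification, using configurable rules."""
    return [text for predicate, text in rules if predicate(metrics)]
```

New rules can be added or thresholds adjusted without touching the analysis pipeline, which is the point of keeping them configurable.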
  • FIG. 2 illustrates a flowchart explaining a method for an automatic classification of the types of cells using an application installed on a smart computing device, according to one embodiment herein.
  • the method involves a user activating the application installed in the smart computing device (202).
  • the application is configured to direct the user to select or activate an image capture mode to operate a camera inbuilt on the smart computing device to capture an image of the specimen on the slide.
  • the user is enabled to capture a single image or a plurality of images or videos of a specimen kept on a slide under the microscope using manual or automated methods (204).
  • the user is enabled to capture the images or videos manually also with the application in the smart computing device.
  • the user is enabled to automate the image or video capture process by adjusting the movement of the slide under the microscope using a robot through the mobile application.
  • the user is enabled to capture the image or video using voice- or gesture-activated commands provided through the mobile application.
  • the application is configured to initiate the preprocessing operation of the captured image or video, wherein the video is sampled into a set of images based on the "frames-per-second" captured by the camera.
  • the preprocessing operation of the image is performed by standardizing the quality of the image (206).
  • the quality of image is standardized by standardizing a plurality of parameters of the camera and processing the image using digital processing techniques.
  • the plurality of parameters include auto-focus setting, ISO setting, exposure time, lens aperture, auto white balance settings and color temperature settings.
  • the patches of the plurality of types of cells and structures are identified from the processed image to extract the features and attributes of the cells and structures (208).
  • the features and attributes of the image includes but are not limited to density of cells in the image, size and areas of the image under the plurality of cell types, color attributes of the plurality of types of patches of interest etc.
  • the features and attributes extracted are used for classifying the cells and structures into a plurality of types.
  • the classification of the cells and structures is performed by using a plurality of pre-trained machine learning models (210).
  • the classification is executed on the smart computing device based on the cells and structures to be identified.
  • the classification is executed from the server.
  • a report is generated based on the extracted features, attributes and classification of the cells and structures in the server using the report generation module (212).
  • FIG. 3 illustrates a flowchart explaining a method for extraction and analysis of samples under a microscope, in accordance with one embodiment herein.
  • the method includes capturing single or multiple images or videos of the sample kept under the microscope using an application installed in the smart computing device (302).
  • a user is enabled to activate the application for directing the user to select or activate an image or video capture mode to operate an inbuilt camera on the smart computing device to capture an image of the sample on a slide.
  • the application is configured to capture the image or video in a manual mode or an automated mode.
  • the step of processing the captured image involves a plurality of steps.
  • a first step includes assessing the quality of the captured images.
  • the quality of the captured images is assessed by comparing each captured image against a list of parameters.
  • the list of parameters includes blur, sharpness and focus of image, density of cells and structures of interest visible in the captured field of view, spacing between the cells and structures of interest in the captured field of view, brightness and contrast of image, color profile and tone of image.
  • the captured images that fail the quality assessment are not further processed.
  • a second step includes normalizing the captured image. During the normalization process, each captured image is pre-processed to ensure that all the captured images are of similar properties, such as dynamic range, color, brightness, etc.
  • a third step includes identifying cells and structures of interest. The cells and structures of interest are identified using image processing techniques. In the third step, custom logic is executed based on type of cells and structures of interest to be extracted.
  • a fourth step involves extracting smaller image patches comprising all cells and structures of interest and multiple features and attributes of the cells and structures. Further, a background subtraction is performed so that the image patch contains only the extracted cell or structure of interest visible.
  • the extracted cells and structures are analyzed to identify and classify the cells and structures into pre-defined subclasses based on the type of the sample (306).
  • the extracted cells and structures are analyzed using a hierarchy of artificial intelligence models to identify and classify the cells and structures into multiple pre-defined subclasses.
  • the pre-defined subclasses for blood slides include but are not limited to red blood cells, white blood cells, platelets etc.
  • the statistical parameters are calculated to estimate abnormal conditions in the sample based on the output of classification and the extracted features and attributes of the cells and structures (308).
  • the plurality of statistical models is employed to calculate a list of statistical parameters for creating a report. Further, the abnormal conditions in the samples are identified for generating suggestions based on configurable rules.
  • the report is generated and published on a webpage or a user interface of the smart computing device (310).
  • the report includes a plurality of sections including but not limited to metrics section, charts and graphs, visual section and suggestions.
  • the user is enabled to review the report using the smart phone application. Further, the report is viewed remotely on a web browser or hand held device by sharing the report through emails or uploading the report on cloud.
  • the embodiments herein envisage a system and method for an automatic classification of the plurality of types of cells using an application installed on a smart computing device.
  • the system is configured to automatically identify and classify the cells and complex structures of interest on a slide under a microscope, thereby making the analysis process fast and efficient.
  • the system is implemented by installing the application on the smart computing device. Therefore, the system is cost effective and is capable of being used in any laboratory. Further, the system is configured to execute most of the steps of analysis on the smart computing device rather than the server. Therefore, the need for a server with a high processing capacity is eliminated and the processing load on the server is largely reduced.
  • the image or video acquisition process is performed automatically using a robot controlled by the application.
  • the extraction process is performed on the smart computing device and care is taken not to miss any true positive.
  • the classification is performed using machine learning models including deep learning techniques.
  • the deep learning models have certain technical advancements over traditional machine learning techniques, thereby allowing the system to reach near-human accuracy levels in the image identification processes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Disclosed are a system and method for automatically classifying and identifying the types of cells and structures using microscopic images captured with an application installed on a smart computing device. The captured image is preprocessed by normalizing the parameters of the captured image. The application is executed to identify the patches of the cells and structures in the captured image in order to extract the features and attributes of the cells and structures in the captured image. Pre-trained machine learning models/algorithms are applied to classify the cells and structures of the image based on the extracted features and attributes of the cells and structures. A report is generated on a server based on the classification of the cells.
PCT/IN2016/000239 2016-02-23 2016-10-03 System and method for extraction and analysis of samples under a microscope WO2017145172A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201641006272 2016-02-23
IN201641006272 2016-02-23

Publications (1)

Publication Number Publication Date
WO2017145172A1 true WO2017145172A1 (fr) 2017-08-31

Family

ID=59684915

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IN2016/000239 WO2017145172A1 (fr) 2016-02-23 2016-10-03 System and method for extraction and analysis of samples under a microscope

Country Status (1)

Country Link
WO (1) WO2017145172A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108564114A (zh) * 2018-03-28 2018-09-21 电子科技大学 一种基于机器学习的人体粪便白细胞自动识别方法
CN109815974A (zh) * 2018-12-10 2019-05-28 清影医疗科技(深圳)有限公司 一种细胞病理玻片分类方法、系统、设备、存储介质
CN111912763A (zh) * 2020-08-15 2020-11-10 湖南伊鸿健康科技有限公司 一种多功能细胞分析系统
CN116703917A (zh) * 2023-08-07 2023-09-05 广州盛安医学检验有限公司 一种女性生殖道细胞病理智能分析系统
US11815673B2 (en) 2018-10-19 2023-11-14 Nanotronics Imaging, Inc. Method and system for mapping objects on unknown specimens

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7383237B2 (en) * 1998-05-01 2008-06-03 Health Discovery Corporation Computer-aided image analysis

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7383237B2 (en) * 1998-05-01 2008-06-03 Health Discovery Corporation Computer-aided image analysis

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108564114A (zh) * 2018-03-28 2018-09-21 电子科技大学 一种基于机器学习的人体粪便白细胞自动识别方法
CN108564114B (zh) * 2018-03-28 2022-05-27 电子科技大学 一种基于机器学习的人体粪便白细胞自动识别方法
US11815673B2 (en) 2018-10-19 2023-11-14 Nanotronics Imaging, Inc. Method and system for mapping objects on unknown specimens
TWI833822B (zh) * 2018-10-19 2024-03-01 美商奈米創尼克影像公司 用於自動映射流動體物體在基板上之方法及系統
CN109815974A (zh) * 2018-12-10 2019-05-28 清影医疗科技(深圳)有限公司 一种细胞病理玻片分类方法、系统、设备、存储介质
CN111912763A (zh) * 2020-08-15 2020-11-10 湖南伊鸿健康科技有限公司 一种多功能细胞分析系统
CN116703917A (zh) * 2023-08-07 2023-09-05 广州盛安医学检验有限公司 一种女性生殖道细胞病理智能分析系统
CN116703917B (zh) * 2023-08-07 2024-01-26 广州盛安医学检验有限公司 一种女性生殖道细胞病理智能分析系统

Similar Documents

Publication Publication Date Title
AU2020200835B2 (en) System and method for reviewing and analyzing cytological specimens
WO2021139258A1 (fr) Procédé et appareil de reconnaissance et de comptage de cellules sur la base de la reconnaissance d'images et dispositif informatique
JP6791864B2 (ja) 検査室自動化のためのサイドビューサンプルチューブ画像におけるバーコードタグ検出
US11210787B1 (en) Systems and methods for processing electronic images
WO2017145172A1 (fr) Système et procédé d'extraction et d'analyse d'échantillons au microscope
US10395091B2 (en) Image processing apparatus, image processing method, and storage medium identifying cell candidate area
KR102155381B1 (ko) 인공지능 기반 기술의 의료영상분석을 이용한 자궁경부암 판단방법, 장치 및 소프트웨어 프로그램
Hortinela et al. Identification of abnormal red blood cells and diagnosing specific types of anemia using image processing and support vector machine
US20210090248A1 (en) Cervical cancer diagnosis method and apparatus using artificial intelligence-based medical image analysis and software program therefor
US11062168B2 (en) Systems and methods of unmixing images with varying acquisition properties
US11721023B1 (en) Distinguishing a disease state from a non-disease state in an image
CN113763348A (zh) 图像质量确定方法、装置、电子设备及存储介质
Yang et al. Smartphone-supported malaria diagnosis based on deep learning
KR20210033902A (ko) 인공지능 기반 기술의 의료영상분석을 이용한 자궁경부암 진단방법, 장치 및 소프트웨어 프로그램
KR102220574B1 (ko) 영상 데이터 필터링을 위한 품질 점수 임계값 산출 방법, 장치 및 컴퓨터프로그램
KR20210113573A (ko) 인공지능을 이용하여 정렬된 염색체 이미지의 분석을 통한 염색체 이상 판단 방법, 장치 및 컴퓨터프로그램
KR20220138069A (ko) 인공지능 기반 기술의 의료영상분석을 이용한 자궁경부암 판독방법, 장치 및 소프트웨어 프로그램
De Leon et al. Detection of Sickle Cell Anemia in Blood Smear using YOLOv3
Blahova et al. Blood Smear Leukocyte Identification Using an Image Segmentation Approach
EP4207095A1 (fr) Système et procédé d'apprentissage de similarité dans une pathologie numérique
CN118675219A (zh) 基于眼底图像的糖尿病视网膜病变小病灶检测方法及系统

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16891349

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25/02/2019)

122 Ep: pct application non-entry in european phase

Ref document number: 16891349

Country of ref document: EP

Kind code of ref document: A1