CN115100151B - Result-oriented cell image high-definition identification marking method - Google Patents
- Publication number
- CN115100151B CN115100151B CN202210736984.4A CN202210736984A CN115100151B CN 115100151 B CN115100151 B CN 115100151B CN 202210736984 A CN202210736984 A CN 202210736984A CN 115100151 B CN115100151 B CN 115100151B
- Authority
- CN
- China
- Prior art keywords
- image
- cell
- images
- position data
- resolution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/40—Filling a planar surface by adding surface attributes, e.g. colour or texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/40—Scaling the whole image or part thereof
- G06T3/4038—Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/69—Microscopic objects, e.g. biological cells or cellular parts
- G06V20/698—Matching; Classification
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10056—Microscopic image
- G06T2207/10061—Microscopic image from scanning electron microscope
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20132—Image cropping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30024—Cell structures in vitro; Tissue sections in vitro
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30204—Marker
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Medical Informatics (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Databases & Information Systems (AREA)
- Multimedia (AREA)
- Public Health (AREA)
- Biomedical Technology (AREA)
- Evolutionary Computation (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Epidemiology (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Radiology & Medical Imaging (AREA)
- Primary Health Care (AREA)
- Life Sciences & Earth Sciences (AREA)
- Quality & Reliability (AREA)
- Molecular Biology (AREA)
- Data Mining & Analysis (AREA)
- Pathology (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention provides a result-oriented high-definition cell image identification and marking method, which comprises the following steps: array-scanning a slide to obtain a plurality of original images; cropping the original images to the minimum resolution; obtaining suspicious-cell positions with a target detection model; removing garbage cells with a garbage classification model; identifying positive cells with a yin-yang (negative/positive) classification model; in parallel, creating a blank image template at the optimal resolution; reading the suspicious-cell, garbage-cell and positive-cell position data and, through coordinate conversion at the optimal resolution, filling the corresponding regions of the original images into the blank image template. Through these steps, result-oriented high-definition cell image identification and marking is achieved. The invention performs recognition directly on low-resolution images while providing optimal-resolution images to assist physicians in diagnosis; images of normal cells are omitted, which greatly improves stitching, data-transmission and recognition efficiency.
Description
Technical Field
The invention relates to a cell image processing method, belongs to the field of medical image processing, and in particular relates to a result-oriented high-definition cell image identification and marking method.
Background
In the prior art, image recognition performed after cervical cells are collected and processed is an effective measure for cervical cancer screening. For example, the artificial-intelligence cloud diagnosis platform described in Chinese patent document CN110797097A can extend cell-screening services to remote areas and areas with insufficient medical resources. The usual approach is to scan the slide with an array scanning microscope and then stitch and identify the images. The applicant has previously proposed a scheme for acquiring cell images with a mobile phone, further reducing the cost of the array scanning microscope, for example the mobile-phone-based micro-image acquisition device and image stitching and identification method described in patent document CN110879999A. However, the images collected by a mobile phone are large: each picture is typically 3 to 10 MB, and one slide usually requires a 30 × 40 grid of pictures, i.e. 1200 images, for stitching. A stitched image therefore typically requires about 3.6 GB of storage and consumes considerable resources during processing. For example, the stitching method described in CN110807732A, a panoramic stitching system and method for microscopic images, needs to adjust the overlapping areas of the scanned images, which also takes a lot of time. To improve recognition efficiency, some schemes reduce the picture size before processing, such as the scheme recorded in CN111651268A, a microscopic-image rapid processing system. However, while reducing the picture size increases speed, it also leaves less redundant information in the final image, and a physician reading the image then lacks image information for further analysis.
That is, in the prior art, the efficiency and the accuracy of image recognition contradict each other.
Disclosure of Invention
The invention aims to provide a result-oriented high-definition cell image identification and marking method that further improves recognition efficiency while providing a high-resolution final image, so that the physician has enough redundant information for analysis.
In order to solve the above technical problems, the technical scheme adopted by the invention is as follows: a result-oriented high-definition cell image identification and marking method, comprising the following steps:
S1, array-scanning a slide to obtain a plurality of original images;
S2, cropping the original images to the minimum resolution used for artificial-intelligence recognition;
S3, obtaining suspicious-cell positions with a target detection model, marking and storing the suspicious-cell position data;
S5, removing garbage cells with a garbage classification model, marking and storing the garbage-cell position data;
S6, identifying positive cells with a yin-yang classification model, marking and storing the positive-cell position data;
S01, in parallel with S1, creating a blank image template at the optimal resolution;
S02, reading the suspicious-cell, garbage-cell and positive-cell position data and, through coordinate conversion at the optimal resolution, filling the corresponding images from the original images into the blank image template;
Through the above steps, result-oriented high-definition cell image identification and marking is realized.
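As a rough illustration of steps S1–S02, the following Python sketch crops a synthetic original image to a low recognition resolution, stands in for the detection models with a stub, converts the resulting coordinates back through the scalings, and fills only the marked region into a blank optimal-resolution template. All sizes, function names and the stub detector are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

ORIG = 4096       # assumed side length of an original capture (px)
MIN_RES = 1024    # minimum resolution used for AI recognition
BEST_RES = 2048   # optimal resolution used for the final template

def to_min_res(img):
    """S2: downsample the original to the minimum resolution (nearest)."""
    step = img.shape[0] // MIN_RES
    return img[::step, ::step]

def detect_suspicious(img):
    """S3 stub: pretend the detector returned one box in MIN_RES coords."""
    return [(100, 120, 140, 160)]  # (x0, y0, x1, y1)

def fill_template(original, boxes):
    """S01/S02: blank template, filled only at the marked positions."""
    template = np.zeros((BEST_RES, BEST_RES), dtype=original.dtype)
    up = ORIG // MIN_RES        # MIN_RES coords -> original coords (x4)
    down = ORIG // BEST_RES     # original pixels -> BEST_RES pixels (/2)
    for x0, y0, x1, y1 in boxes:
        ox0, oy0, ox1, oy1 = (v * up for v in (x0, y0, x1, y1))
        patch = original[oy0:oy1:down, ox0:ox1:down]   # crop at BEST_RES
        ty0, tx0 = oy0 // down, ox0 // down            # template position
        template[ty0:ty0 + patch.shape[0], tx0:tx0 + patch.shape[1]] = patch
    return template

original = np.arange(ORIG * ORIG, dtype=np.int64).reshape(ORIG, ORIG)
boxes = detect_suspicious(to_min_res(original))
result = fill_template(original, boxes)
```

Everything outside the marked boxes stays blank, which is the point of the result-oriented template: normal-cell regions are never stored or transmitted at high resolution.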
In a preferred embodiment, in step S02 the images from the original images are filled into the blank image template at the optimal resolution in a non-stacking manner, with the following specific steps:
S021, the suspicious-cell, garbage-cell and positive-cell position data include outer-frame outlines; read the outer-frame outline data;
S022, fill the outer-frame outlines into the blank image template by sequence filling;
S023, the suspicious-cell, garbage-cell and positive-cell position data include the coordinate data of the outline border, expressed either as an origin coordinate plus vector data or as four-vertex coordinate data;
convert the coordinate data corresponding to the minimum-resolution cropped image into coordinate data corresponding to the original image through the cropping scaling, and, after cropping at the optimal-resolution scaling, fill the result into the corresponding outline of the blank image template.
In a preferred scheme, in step S1, during array scanning of the slide, the step value of each scan is kept the same, and the row and column values of each picture are obtained from the scanning path and stored;
in step S2, the cropped minimum-resolution images are stitched; the stitching specifically comprises:
S21, reading the row-1 column-1 and row-1 column-2 images, scanning pixels to obtain the overlapping field of view, and stitching them by horizontal stacking;
synchronously reading the row-1 column-1 and row-2 column-1 images, scanning pixels to obtain the overlapping field of view, and stitching them by vertical stacking;
S22, obtaining the relative overlap coordinates of the images in the x and y directions;
S23, stitching all subsequent images according to their row and column values and the relative overlap coordinates in the x and y directions;
Through the above steps, the minimum-resolution images are stitched into a panorama.
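A toy version of S21–S23 (tile sizes, tile contents and helper names are invented for illustration): the x and y overlaps are measured once, on the first-row and first-column pairs, by scanning for matching pixels, and every remaining tile is then placed arithmetically from its row/column values.

```python
import numpy as np

TILE, OVERLAP = 100, 20          # assumed tile size and true overlap (px)
ROWS, COLS = 2, 3
STEP = TILE - OVERLAP

def find_overlap(a, b):
    """S21: scan candidate widths until a's right edge equals b's left edge."""
    for ov in range(1, TILE):
        if np.array_equal(a[:, TILE - ov:], b[:, :ov]):
            return ov
    return 0

# synthetic panorama with unique pixel values, cut into overlapping tiles
pano = np.arange((TILE + STEP) * (TILE + 2 * STEP)).reshape(TILE + STEP, -1)
tiles = {(r, c): pano[r*STEP:r*STEP + TILE, c*STEP:c*STEP + TILE]
         for r in range(ROWS) for c in range(COLS)}

ov_x = find_overlap(tiles[(0, 0)], tiles[(0, 1)])      # horizontal pair
ov_y = find_overlap(tiles[(0, 0)].T, tiles[(1, 0)].T)  # vertical pair
sx, sy = TILE - ov_x, TILE - ov_y                      # S22 relative offsets

out = np.zeros_like(pano)
for (r, c), t in tiles.items():                        # S23 arithmetic placement
    out[r*sy:r*sy + TILE, c*sx:c*sx + TILE] = t
```

Because the pixel scan runs only on the first pair in each direction, the cost of overlap estimation does not grow with the number of tiles, which is the efficiency gain the scheme claims.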
In a preferred embodiment, in step S3, the target detection model is one of the YOLOv4, YOLOv3, YOLOv5, SSD, RetinaNet, RefineDet or EfficientDet models, and is used to mark cells other than normal cells to form the suspicious-cell data set.
In a preferred embodiment, in step S5, the garbage classification model is one of the YOLOv4, YOLOv3, YOLOv5, SSD, RetinaNet, RefineDet or EfficientDet models, used to reject garbage and form the non-garbage cell data set;
the garbage classification model takes the suspicious-cell data set as its rejection range.
In a preferred embodiment, in step S6, the yin-yang classification model is one of the EfficientNet, ResNet50 series, Inception, Xception and ImageNet-series models, used to classify negative and positive cells to form the positive-cell data set;
the yin-yang classification model takes the non-garbage cell data set as its classification range.
In a preferred scheme, the method further comprises a multi-classification model, which is one of the EfficientNet, ResNet50 series, Inception, Xception and ImageNet-series models and is used to grade and count the positive cells; the multi-classification model takes the positive-cell data set as its working range.
In a preferred scheme, in step S2, when the original image is cropped at the minimum resolution, the minimum scaling is obtained;
in step S01, when the original image is cropped at the optimal resolution, the optimal scaling is obtained;
in step S02, the relative overlap coordinates suitable for the minimum-resolution image are converted, using the minimum scaling and the optimal scaling, into relative overlap coordinates suitable for the optimal-resolution image;
during filling, the position data are used to read the original data, comprising the marked suspicious-cell, garbage-cell and positive-cell images, from the original image; after cropping to the optimal resolution, the optimal-resolution image is filled into the blank image template using the relative overlap coordinates suitable for the optimal-resolution image.
A result-oriented high-definition cell image identification and marking method comprises the following steps:
cropping and stitching the original images at the minimum resolution used for artificial-intelligence recognition;
obtaining suspicious-cell positions with a target detection model, the target detection model being one of the YOLOv4, YOLOv3, YOLOv5, SSD, RetinaNet, RefineDet or EfficientDet models and being used to mark cells other than normal cells to form a suspicious-cell data set;
rejecting garbage cells with a garbage classification model within the range of the suspicious-cell data set, the garbage classification model being one of the YOLOv4, YOLOv3, YOLOv5, SSD, RetinaNet, RefineDet or EfficientDet models and being used to reject garbage and form a non-garbage cell data set;
identifying positive cells with a yin-yang classification model within the range of the non-garbage cell data set, the yin-yang classification model being one of the EfficientNet, ResNet50 series, Inception, Xception and ImageNet-series models and being used to classify negative and positive cells to form a positive-cell data set;
Through the above steps, cell detection and classification are realized in a pipeline manner.
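The pipeline narrowing described above can be illustrated with trivial rule-based stand-ins for the three neural models; the cell records and rules below are invented purely to show how each stage restricts its work to the previous stage's output.

```python
# invented cell records standing in for detections on a scanned slide
cells = [
    {"id": 1, "kind": "normal"},
    {"id": 2, "kind": "garbage"},
    {"id": 3, "kind": "negative"},
    {"id": 4, "kind": "positive"},
]

# stage 1, target detection: mark everything except normal cells
suspicious = [c for c in cells if c["kind"] != "normal"]

# stage 2, garbage classification: reject garbage, only within `suspicious`
non_garbage = [c for c in suspicious if c["kind"] != "garbage"]

# stage 3, yin-yang classification: keep positives, only within `non_garbage`
positive = [c for c in non_garbage if c["kind"] == "positive"]

ids = [c["id"] for c in positive]
```

Each stage sees a strictly smaller data set than the previous one, so each model solves a simpler problem than a single end-to-end classifier would.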
In a preferred scheme, the method further comprises a multi-classification model, which is one of the EfficientNet, ResNet50 series, Inception, Xception and ImageNet-series models and is used to grade and count the positive cells; the multi-classification model takes the positive-cell data set as its working range.
Compared with the prior art, the result-oriented high-definition cell image identification and marking method provided by the invention has the following beneficial effects:
the invention directly identifies with the low-resolution image, provides the image with the best resolution to assist the diagnosis of doctors, omits the image of normal cells and greatly improves the splicing, data transmission and identification efficiency. In the preferred scheme, the step of splicing is omitted, and the identification efficiency is further improved. In another optional scheme, the stitching coordinate data is obtained, and the stitching of the high-resolution image is completed by the stitching coordinate data, so that repeated scanning and calculation of the image, especially the high-resolution image, are avoided, and the stitching identification marking efficiency is greatly improved. The invention realizes the result of the high-resolution image at the speed of splicing the low-resolution image, and greatly improves the accuracy of the subsequent analysis of the doctor. In the identification process, an artificial intelligence model-based pipeline processing mode is adopted, the complexity of each intelligent model is reduced, and the subsequent identification process only needs to mark or classify the result in the previous process, so that the identification efficiency is improved on the whole, and the identification accuracy is improved. In the splicing process, the scheme of splicing by using the first-row and first-column splicing experience parameters is adopted, so that the scanning process is greatly reduced, and the splicing efficiency is further improved. Even if a slight error occurs in the stitching process, the error does not cause a difference in the recognition result. According to the scheme provided by the invention, the method can effectively cope with the explosive increase of customers caused by the cost reduction of the array scanning microscope and the convenience in sample collection, and particularly in the cervical cancer screening field, the image processing efficiency is further greatly improved. 
Can make cervical cancer image screening more popular and convenient.
Drawings
The invention is further illustrated by the following examples in conjunction with the accompanying drawings:
FIG. 1 is a schematic flow chart of the present invention.
Fig. 2 is a schematic flow chart of the fast splicing of the present invention.
FIG. 3 is a schematic diagram of low-resolution recognition and high-resolution display in the present invention.
FIG. 4 is a schematic diagram of the best resolution image stitching with minimum resolution parameters according to the present invention.
FIG. 5 is an example flow chart of the present invention.
FIG. 6 is a schematic comparison between a minimum-resolution labeled cell image and an optimal-resolution labeled cell image in the present invention.
Detailed Description
Example 1:
As shown in FIG. 1, a result-oriented high-definition cell image identification and marking method includes the following steps:
S1, array-scanning a slide to obtain a plurality of original images;
S2, cropping the original image to the minimum resolution used for artificial-intelligence recognition, for example 1024 × 1024 pixels; during cropping, the ratio between the minimum-resolution image and the original image is taken as the minimum-resolution scaling.
S3, obtaining suspicious-cell positions with a target detection model, marking and storing the suspicious-cell position data. Specifically, after the image is read, preprocessing and color normalization are performed, and the image is fed into a YOLOv4 target detection model for suspicious-cell detection; each marked position is 416 × 416 pixels, with the upper-left and lower-right coordinates defining the coordinates of the marked position. The position data are the coordinate position of the mark in the image; taking a rectangular mark as an example, they may be expressed as a rectangle origin plus vector data, or as the four-corner coordinates of the rectangle.
S5, removing garbage cells with a garbage classification model, marking and storing the garbage-cell position data; the non-garbage cell position images are cropped out, each marked position being 256 × 256 pixels, with the upper-left and lower-right coordinates defining the coordinates of the marked position.
S6, identifying positive cells with a yin-yang classification model, the yin-yang classification model being a binary classification model; the positive-cell position data are stored and the positive-cell position images are cropped out, each marked position being 256 × 256 pixels, with the upper-left and lower-right coordinates defining the coordinates of the marked position. An example of the processing flow of a sample image is shown in FIG. 5. Several artificial-intelligence models (the target detection model, the garbage classification model and the yin-yang classification model) perform identification in a pipeline manner, which greatly reduces the complexity of each individual model and improves identification accuracy.
S01, in parallel with S1, creating a blank image template at the optimal resolution; for example, with 2048 × 2048 as the resolution of a single image, a blank image template is created for the 1200 images of the stitched result. The ratio between the optimal-resolution image and the original image is taken as the optimal-resolution scaling.
S02, reading the suspicious-cell, garbage-cell and positive-cell position data and, through coordinate conversion at the optimal resolution, filling the images from the original image into the blank image template according to the position data; during filling, the coordinate data are synchronized by computing them through the scalings.
Through the above steps, result-oriented high-definition cell image identification and marking is achieved.
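Using the 1024 × 1024 and 2048 × 2048 sizes quoted in S2 and S01 (and an assumed 4096-px original, since the description does not state the capture size), the coordinate synchronization in S02 reduces to simple scaling arithmetic; the box values are invented:

```python
ORIGINAL = 4096                    # assumed original capture width (px)
MIN_RES, BEST_RES = 1024, 2048     # sizes quoted in S2 / S01

min_scale = MIN_RES / ORIGINAL     # minimum-resolution scaling  (0.25)
best_scale = BEST_RES / ORIGINAL   # optimal-resolution scaling  (0.5)

box_min = (100, 120, 140, 160)     # x0, y0, x1, y1 on the 1024-px image
# back to original-image coordinates, then into the 2048-px template
box_orig = tuple(int(v / min_scale) for v in box_min)
box_best = tuple(int(v * best_scale) for v in box_orig)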
Example 2:
the physician needs to identify only the positive cells, and the relevant statistics of the positive cells, such as the grading characteristics and the proportion characteristics of the positive cells to the normal cells, etc., so that the specific position of the specific positive cells on the cell scanning image has little effect on the diagnosis assistance of the physician.
Preferably, as shown in fig. 5, in step S02, the image in the original image is filled into the blank image template in an unstacked manner according to the optimal resolution, and the specific steps are as follows:
s021, the suspicious cell position data, the garbage cell position data and the positive cell position data comprise outer frame outlines, and the outer frame outline data are read;
s022, filling the outline of the outer frame into a blank image template in a sequence filling mode;
s023, wherein the suspicious cell position data, the garbage cell position data and the positive cell position data comprise coordinate data of an outline border, and the expression mode of the coordinate data comprises an origin coordinate and vector data or four-vertex coordinate data;
convert the coordinate data corresponding to the minimum-resolution cropped image into coordinate data corresponding to the original image through the cropping scaling, and, after cropping at the optimal-resolution scaling, fill the result into the corresponding outline of the blank image template.
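The "sequence filling" of S022 can be sketched as simple left-to-right packing: marked patches are placed one after another into the blank template rather than at their panorama positions (the non-stacking layout of this example). Template and patch sizes are invented for illustration.

```python
import numpy as np

TEMPLATE = 512
template = np.zeros((TEMPLATE, TEMPLATE), dtype=np.uint8)

# three invented marked patches, as produced by the earlier pipeline stages
patches = [np.full((64, 64), v, dtype=np.uint8) for v in (50, 120, 200)]

x = y = row_h = 0
placed = []                      # where each outline landed in the template
for p in patches:
    h, w = p.shape
    if x + w > TEMPLATE:         # wrap to the next row when the row is full
        x, y, row_h = 0, y + row_h, 0
    template[y:y + h, x:x + w] = p
    placed.append((x, y))
    x += w
    row_h = max(row_h, h)
```

Because patches are packed densely, the template only holds the cells of interest and the physician never scrolls through blank panorama regions.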
Example 3:
The scheme of embodiment 2 can serve as a basis for assisting the physician's rapid diagnosis, but because overlapping positions exist, parts of the images at the overlapping positions may be computed repeatedly. A more accurate solution is therefore preferred, as shown in FIG. 2: during the array scanning of the slide in step S1, the step value of each scan is kept the same, and the row and column values of each picture are obtained from the scanning path and stored;
in step S2, the cropped minimum-resolution images are stitched; the stitching specifically comprises:
S21, reading the row-1 column-1 and row-1 column-2 images, scanning pixels to obtain the overlapping field of view, and stitching them by horizontal stacking;
synchronously reading the row-1 column-1 and row-2 column-1 images, scanning pixels to obtain the overlapping field of view, and stitching them by vertical stacking;
S22, obtaining the relative overlap coordinates of the images in the x and y directions;
S23, stitching all subsequent images according to their row and column values and the relative overlap coordinates in the x and y directions;
Through the above steps, the minimum-resolution images are stitched into a panorama.
In a preferred embodiment, in step S3, the target detection model is one of the YOLOv4, YOLOv3, YOLOv5, SSD, RetinaNet, RefineDet or EfficientDet models, and is used to mark cells other than normal cells to form the suspicious-cell data set.
In a preferred embodiment, in step S5, the garbage classification model is one of the YOLOv4, YOLOv3, YOLOv5, SSD, RetinaNet, RefineDet or EfficientDet models, used to reject garbage and form the non-garbage cell data set;
the garbage classification model takes the suspicious-cell data set as its rejection range.
In a preferred embodiment, as shown in FIG. 5, in step S6, the yin-yang classification model is one of the EfficientNet, ResNet50 series, Inception, Xception and ImageNet-series models, used to classify negative and positive cells to form the positive-cell data set;
the yin-yang classification model takes the non-garbage cell data set as its classification range.
In a preferred scheme, as shown in FIG. 5, the method further comprises a multi-classification model, which is one of the EfficientNet, ResNet50 series, Inception, Xception and ImageNet-series models and is used to grade and count the positive cells; the multi-classification model takes the positive-cell data set as its working range.
In a preferred scheme, as shown in FIG. 3, in step S2, when the original image is cropped at the minimum resolution, the minimum scaling is obtained;
in step S01, when the original image is cropped at the optimal resolution, the optimal scaling is obtained;
in step S02, the relative overlap coordinates suitable for the minimum-resolution image are converted, using the minimum scaling and the optimal scaling, into relative overlap coordinates suitable for the optimal-resolution image: the ratio between the minimum scaling and the optimal scaling is obtained, and the measured relative overlap coordinates are multiplied by this ratio to obtain the relative overlap coordinates suitable for the optimal-resolution image.
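A numeric check of the ratio rule described above (the original-image size is an assumption and the overlap values are invented): overlap coordinates measured on minimum-resolution tiles, multiplied by the ratio of the two scalings, give the overlap coordinates for optimal-resolution tiles.

```python
ORIGINAL, MIN_RES, BEST_RES = 4096, 1024, 2048   # ORIGINAL is assumed

min_scale = MIN_RES / ORIGINAL    # minimum scaling (0.25)
best_scale = BEST_RES / ORIGINAL  # optimal scaling (0.5)
ratio = best_scale / min_scale    # scaling ratio between the two

overlap_min = (56, 42)            # invented x/y overlap on 1024-px tiles
overlap_best = tuple(int(v * ratio) for v in overlap_min)
```

This avoids re-scanning the high-resolution tiles for overlaps: one measurement at the minimum resolution is reused arithmetically.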
During filling, after the position data are scaled by the minimum scaling, the original image data, comprising the marked suspicious-cell, garbage-cell and positive-cell images, are read from the original image; after the data read are cropped to the optimal resolution, the optimal-resolution image is filled into the blank image template using the relative overlap coordinates suitable for the optimal-resolution image.
In some schemes for improving image-data transmission efficiency, the original data are cropped at the optimal resolution after being transmitted to the cloud; in that case the marked suspicious-cell, garbage-cell and positive-cell images can be obtained directly from the optimal-resolution image, that is, the corresponding position data are multiplied by the ratio between the minimum scaling and the optimal scaling. The filling scheme described above should therefore be understood as including optimal-resolution images of suspicious cells, garbage cells and positive cells obtained by cropping from the original image.
Example 4:
as shown in fig. 1, a result-oriented high-definition cell image recognition labeling method includes the following steps:
cutting and splicing the original images according to the minimum resolution ratio for artificial intelligent recognition;
obtaining the position of a suspicious cell by using a target detection model, wherein the target detection model comprises one of a YoloV4 model, a YoloV3 model, a YoloV5 model, an SSD model, a RetineDet model, a RefinaDet model or an EfficientDet model, and the target detection model is used for marking out cells except normal cells to form a suspicious cell data set;
the garbage classification model eliminates garbage cells in the range of the suspicious cell data set, and comprises one of a YoloV4 model, a YoloV3 model, a YoloV5 model, an SSD model, a RetineDet model, a RefinaDet model or an EfficientDet model, and is used for eliminating garbage and forming a non-garbage cell data set;
identifying positive cells in a non-garbage cell data set range by a yin-yang classification model, wherein the yin-yang classification model comprises EffectientNet, resNet50 series, inclusion, xception and ImageNet series and is used for classifying the negative cells and the positive cells to form a positive cell data set;
cell detection and classification are realized in a pipeline mode through the steps. The above methods can be applied individually.
In a preferred scheme, the method further comprises a multi-classification model, wherein the multi-classification model comprises the EfficientNet, ResNet50, Inception, Xception and ImageNet series and is used for grading and counting the positive cells; the multi-classification model takes the positive cell data set as its working range.
The above embodiments are merely preferred technical solutions of the present invention and should not be construed as limiting it; the embodiments and the features in the embodiments of the present application may be combined with each other arbitrarily provided there is no conflict. The protection scope of the present invention is defined by the claims and includes equivalents of the technical features of the claims; equivalent alterations and modifications within this scope also fall within the protection scope of the present invention.
Claims (7)
1. A result-oriented high-definition cell image identification and marking method is characterized by comprising the following steps:
S1, array-scanning a slide to obtain a plurality of original images;
S2, cutting the original images according to the minimum resolution for artificial intelligence recognition;
S3, obtaining the positions of suspicious cells by using a target detection model, and marking and storing the suspicious cell position data;
S4, removing garbage cells by using a garbage classification model, and marking and storing the garbage cell position data;
S5, identifying positive cells by using a yin-yang classification model, and marking and storing the positive cell position data;
S01, synchronously with S1, manufacturing a blank image template according to the optimal resolution;
S02, reading the suspicious cell position data, the garbage cell position data and the positive cell position data, and filling images from the original image into the blank image template through coordinate conversion at the optimal resolution according to the position data;
in step S02, the images from the original image are filled into the blank image template at the optimal resolution in a non-stacking manner, and the specific steps are as follows:
S021, the suspicious cell position data, the garbage cell position data and the positive cell position data each include an outer frame outline, and the outline data is read;
S022, the outer frame outlines are filled into the blank image template in a sequential filling manner;
S023, the suspicious cell position data, the garbage cell position data and the positive cell position data comprise the coordinate data of the outer frame outlines, and the coordinate data is expressed as origin coordinates plus vector data, or as four-vertex coordinate data;
the coordinate data corresponding to the minimum-resolution cut images is converted into coordinate data corresponding to the original image through the cutting scaling, and after cutting at the optimal resolution, the images are filled into the corresponding outer frame outlines of the blank image template according to the optimal-resolution cutting scaling;
through the above steps, result-oriented high-definition cell image recognition marking is achieved.
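A minimal sketch of steps S01–S02, assuming NumPy arrays, a white blank template, and (for simplicity) an optimal scaling of 1 so that crops need no resizing; all names and shapes are illustrative assumptions, not from the patent:

```python
import numpy as np

def fill_template(original, boxes_orig, opt_scale, template_shape):
    """original: H x W x 3 uint8 array (full resolution).
    boxes_orig: (x, y, w, h) outer-frame outlines in original coordinates.
    opt_scale: factor relating original to optimal-resolution coordinates
    (a real implementation would also resize each crop by this factor)."""
    template = np.full(template_shape, 255, dtype=np.uint8)  # blank template
    for (x, y, w, h) in boxes_orig:
        crop = original[y:y + h, x:x + w]                # read from original image
        tx, ty = int(x * opt_scale), int(y * opt_scale)  # coordinate conversion
        # sequential, non-stacking fill into the outline position
        template[ty:ty + crop.shape[0], tx:tx + crop.shape[1]] = crop
    return template
```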
2. The result-oriented high-definition cell image recognition marking method according to claim 1, wherein: in step S1, during array scanning of the slide, the step value of each scan is controlled to be the same, and the row and column values of each picture are obtained from the scanning path and stored;
in step S2, the cut minimum-resolution images are spliced, and the splicing specifically comprises:
S21, reading the 1st-row 1st-column and 1st-row 2nd-column images, scanning pixels to obtain the image overlapping field, and stacking and splicing them horizontally;
synchronously, reading the 1st-row 1st-column and 2nd-row 1st-column images, scanning pixels to obtain the image overlapping field, and stacking and splicing them vertically;
s22, acquiring relative overlapping coordinates of the x direction and the y direction of the image;
s23, splicing other subsequent images according to the row and column values and the relative overlapping coordinates in the x direction and the y direction;
and splicing the minimum resolution images into a panoramic view through the steps.
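The overlap scan in S21–S23 can be illustrated as follows. A brute-force exact-match scan over candidate offsets is assumed here, whereas a real scanner would tolerate sensor noise; function names are hypothetical, and the vertical direction is handled analogously on rows:

```python
import numpy as np

def find_x_overlap(left, right, min_overlap=1):
    """Scan candidate overlaps (widest first) between two horizontally
    adjacent tiles; return the relative overlap in the x direction."""
    w = left.shape[1]
    for ov in range(w, min_overlap - 1, -1):
        if np.array_equal(left[:, w - ov:], right[:, :ov]):
            return ov
    return 0

def splice_x(left, right):
    """Stack and splice two tiles horizontally using the detected overlap."""
    ov = find_x_overlap(left, right)
    return np.hstack([left, right[:, ov:]])
```

Once the relative overlap coordinates are known from the first tile pair, subsequent tiles can be placed directly from their row and column values without rescanning.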
3. The result-oriented high-definition cell image recognition marking method according to claim 1, wherein: in step S3, the target detection model comprises one of a YoloV3, YoloV4, YoloV5, SSD, RetinaNet, RefineDet or EfficientDet model, and the target detection model is used to mark cells other than normal cells to form a suspicious cell data set.
4. The result-oriented high-definition cell image recognition marking method according to claim 3, wherein: in step S4, the garbage classification model comprises one of a YoloV3, YoloV4, YoloV5, SSD, RetinaNet, RefineDet or EfficientDet model, and is used to remove garbage and form a non-garbage cell data set;
the garbage classification model takes the suspicious cell data set as a rejection range.
5. The result-oriented high-definition cell image recognition marking method according to claim 4, wherein: in step S5, the yin-yang classification model comprises the EfficientNet, ResNet50, Inception, Xception and ImageNet series and is used for classifying negative cells and positive cells to form a positive cell data set;
the yin-yang classification model takes a non-garbage cell data set as a classification range.
6. The result-oriented high-definition cell image recognition marking method according to claim 5, wherein: the method further comprises a multi-classification model, the multi-classification model comprises the EfficientNet, ResNet50, Inception, Xception and ImageNet series and is used for grading and counting the positive cells, and the multi-classification model takes the positive cell data set as its working range.
7. The result-oriented high-definition cell image recognition marking method according to claim 2, wherein: in step S2, when the original image is cut at the minimum resolution, the minimum scaling is obtained;
in step S01, when the original image is cut at the optimal resolution, the optimal scaling is obtained;
in step S02, the relative overlapping coordinates suitable for the minimum-resolution image are converted into relative overlapping coordinates suitable for the optimal-resolution image according to the minimum scaling and the optimal scaling;
during filling, the position data is used to read original data from the original image, the original data comprising the marked suspicious cell images, garbage cell images and positive cell images; after cutting at the optimal resolution, the optimal-resolution images are filled into the blank image template by using the relative overlapping coordinates suitable for the optimal-resolution image.
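Claim 7's coordinate conversion amounts to multiplying the relative overlapping coordinates measured on the minimum-resolution image by the ratio of the optimal scaling to the minimum scaling. A hedged sketch, with hypothetical names and values:

```python
# Illustrative sketch: convert relative overlapping coordinates from the
# minimum-resolution image to the optimal-resolution image.

def convert_overlap(overlap_xy, min_scale, opt_scale):
    ratio = opt_scale / min_scale
    return tuple(round(v * ratio) for v in overlap_xy)

# An overlap of (12, 8) pixels at 1/8 scale becomes, at 1/2 scale:
print(convert_overlap((12, 8), min_scale=1/8, opt_scale=1/2))  # (48, 32)
```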
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210736984.4A CN115100151B (en) | 2022-06-27 | 2022-06-27 | Result-oriented cell image high-definition identification marking method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115100151A CN115100151A (en) | 2022-09-23 |
CN115100151B true CN115100151B (en) | 2023-02-24 |
Family
ID=83294432
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210736984.4A Active CN115100151B (en) | 2022-06-27 | 2022-06-27 | Result-oriented cell image high-definition identification marking method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115100151B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115830044B (en) * | 2023-01-10 | 2024-04-05 | 广东省测绘产品质量监督检验中心 | Image segmentation method and device, electronic equipment and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106056118A (en) * | 2016-06-12 | 2016-10-26 | 合肥工业大学 | Recognition and counting method for cells |
CN109886928A (en) * | 2019-01-24 | 2019-06-14 | 平安科技(深圳)有限公司 | A kind of target cell labeling method, device, storage medium and terminal device |
CN112132166A (en) * | 2019-06-24 | 2020-12-25 | 杭州迪英加科技有限公司 | Intelligent analysis method, system and device for digital cytopathology image |
CN112634243A (en) * | 2020-12-28 | 2021-04-09 | 吉林大学 | Image classification and recognition system based on deep learning under strong interference factors |
CN113724842A (en) * | 2021-09-08 | 2021-11-30 | 武汉兰丁智能医学股份有限公司 | Cervical tissue pathology auxiliary diagnosis method based on attention mechanism |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113838009B (en) * | 2021-09-08 | 2023-10-31 | 江苏迪赛特医疗科技有限公司 | Abnormal cell detection false positive inhibition method based on semi-supervision mechanism |
2022-06-27: CN application CN202210736984.4A, granted as CN115100151B (status: Active)
Non-Patent Citations (1)
Title |
---|
Bao H et al. The artificial intelligence-assisted cytology diagnostic system in large-scale cervical cancer screening: A population-based cohort study of 0.7 million women. Cancer Med; 2020-07-22; full text * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||