CN110390676A - Cell detection method for medically stained images under a microscope, and intelligent microscope system - Google Patents
- Publication number
- CN110390676A CN110390676A CN201910684849.8A CN201910684849A CN110390676A CN 110390676 A CN110390676 A CN 110390676A CN 201910684849 A CN201910684849 A CN 201910684849A CN 110390676 A CN110390676 A CN 110390676A
- Authority
- CN
- China
- Prior art keywords
- image
- processing
- network
- subgraph
- cell detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N15/00—Investigating characteristics of particles; Investigating permeability, pore-volume or surface-area of porous materials
- G01N15/10—Investigating individual particles
- G01N15/14—Optical investigation techniques, e.g. flow cytometry
- G01N15/1468—Optical investigation techniques, e.g. flow cytometry with spatial resolution of the texture or inner structure of the particle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N15/00—Investigating characteristics of particles; Investigating permeability, pore-volume or surface-area of porous materials
- G01N15/10—Investigating individual particles
- G01N15/14—Optical investigation techniques, e.g. flow cytometry
- G01N15/1468—Optical investigation techniques, e.g. flow cytometry with spatial resolution of the texture or inner structure of the particle
- G01N2015/1472—Optical investigation techniques, e.g. flow cytometry with spatial resolution of the texture or inner structure of the particle with colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10056—Microscopic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30204—Marker
Abstract
This disclosure relates to neural-network-based computer vision techniques in the field of artificial intelligence. Specifically, the disclosure provides a cell detection method based on a medically stained image under a microscope, a neural-network-based image processing method, an intelligent microscope system, a neural-network-based image processing apparatus, an electronic device, and a computer-readable storage medium. The cell detection method includes: acquiring the medically stained image; dividing the medically stained image into multiple sub-images; performing first processing on each of the multiple sub-images using a first neural network, to extract multiple sub-image features; and performing second processing on the multiple sub-image features using a second neural network, to obtain the cell detection result of the medically stained image. The method can automatically classify large medically stained images while maintaining high accuracy, without requiring additional annotation of the training images.
Description
Technical field
This disclosure relates to the field of artificial intelligence and, more specifically, to a cell detection method based on a medically stained image under a microscope, a neural-network-based image processing method, an intelligent microscope system, a neural-network-based image processing apparatus, an electronic device, and a computer-readable storage medium.
Background
Computer vision is the science of how to make machines "see": using cameras and computers in place of human eyes to identify, track, and measure targets, and further processing the resulting images so that they are better suited for human observation or for transmission to other instruments. As a scientific discipline, computer vision studies the related theory and technology with the aim of building artificial intelligence systems that can obtain information from images or multi-dimensional data. Computer vision technology generally includes image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/activity recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric technologies such as face recognition and fingerprint recognition. Neural networks are widely applied in the artificial intelligence community. A neural network is a large-scale, multi-parameter optimization tool: given a large amount of training data, it can learn hidden features in the data that are difficult to summarize by hand, and thereby accomplish complex tasks such as those in computer vision.
Neural-network-based computer vision can be used in image processing to automatically classify images such as medically stained images under a microscope. However, because microscope images are very large, a conventional convolutional neural network model cannot easily be applied to them directly. If the image is simply down-sampled, the loss of image precision is too high and the result cannot be guaranteed. If, instead, the large image is cut into several sub-image blocks, each block must be trained separately, which requires additional annotation. For example, different sub-image blocks within a single microscope field of view may have different grades, so each sub-image block would need its own training and testing; this would require additional fine-grained annotation (that is, every sub-image block would have to be labeled again), and at test time the results of all sub-image blocks would have to be fused.
Summary of the invention
The present disclosure is proposed in view of the above problems. It provides a cell detection method based on a medically stained image under a microscope, a neural-network-based image processing method, an intelligent microscope system, a neural-network-based image processing apparatus, an electronic device, and a computer-readable storage medium.
According to one aspect of the disclosure, a cell detection method based on a medically stained image under a microscope is provided, comprising: acquiring the medically stained image; dividing the medically stained image into multiple sub-images; performing first processing on each of the multiple sub-images using a first neural network, to extract multiple sub-image features; and performing second processing on the multiple sub-image features using a second neural network, to obtain the cell detection result of the medically stained image.
Furthermore, in the cell detection method according to this aspect of the disclosure, the medically stained image is a human epidermal growth factor receptor 2 (HER2) image under the microscope field of view, and the cell detection result is the cell tissue grading result indicated by the HER2 image.
Furthermore, in the cell detection method according to this aspect of the disclosure, the first neural network is a convolutional neural network and the second neural network is a fully connected neural network.
Furthermore, in the cell detection method according to this aspect of the disclosure, dividing the medically stained image into multiple sub-images comprises dividing the medically stained image into N × N sub-images; performing the first processing on each of the multiple sub-images using the first neural network to extract multiple sub-image features comprises performing the first processing on each of the N × N sub-images using the first neural network to extract N × N M-dimensional feature vectors, where M is the number of cell tissue grades used as cell detection results; and performing the second processing on the multiple sub-image features using the second neural network to obtain the cell detection result of the medically stained image comprises concatenating the N × N M-dimensional feature vectors and performing the second processing to obtain one M-dimensional feature vector, the cell tissue grade corresponding to the maximum probability in that M-dimensional feature vector being taken as the cell detection result.
Furthermore, the cell detection method according to this aspect of the disclosure further comprises training the first neural network and the second neural network with multiple annotated medically stained training images, each of which is labeled with its corresponding cell tissue grading result.
According to another aspect of the present disclosure, a neural-network-based image processing method is provided, comprising: dividing an image to be processed into multiple sub-images; performing first processing on each of the multiple sub-images using a first neural network, to extract multiple sub-image features; and performing second processing on the multiple sub-image features using a second neural network, to obtain an image processing result corresponding to the image to be processed.
Furthermore, in the image processing method according to this aspect of the disclosure, the first neural network is a convolutional neural network and the second neural network is a fully connected neural network.
Furthermore, in the image processing method according to this aspect of the disclosure, dividing the image to be processed into multiple sub-images comprises dividing the image to be processed into N × N sub-images; performing the first processing on each of the multiple sub-images using the first neural network to extract multiple sub-image features comprises performing the first processing on each of the N × N sub-images using the first neural network to extract N × N M-dimensional feature vectors, where M is the number of classes used as image processing results; and performing the second processing on the multiple sub-image features using the second neural network to obtain the image processing result corresponding to the image to be processed comprises concatenating the N × N M-dimensional feature vectors and performing the second processing to obtain one M-dimensional feature vector, the class corresponding to the maximum probability in that M-dimensional feature vector being taken as the image processing result.
Furthermore, the image processing method according to this aspect of the disclosure further comprises training the first neural network and the second neural network with multiple annotated training images, each of which is labeled with its corresponding class.
According to another aspect of the present disclosure, an intelligent microscope system is provided, comprising: a microscope unit for observing a medically stained image; a camera unit for capturing the medically stained image under the microscope unit; and a processing unit for performing the cell detection method described above based on the medically stained image under the microscope unit.
According to a further aspect of the disclosure, a neural-network-based image processing apparatus is provided, comprising: an image segmentation unit for dividing an image to be processed into multiple sub-images; a first neural network unit for performing first processing on each of the multiple sub-images to extract multiple sub-image features; and a second neural network unit for performing second processing on the multiple sub-image features to obtain an image processing result corresponding to the image to be processed.
Furthermore, in the image processing apparatus according to this aspect of the disclosure, the first neural network unit is a convolutional neural network unit and the second neural network unit is a fully connected neural network unit.
Furthermore, in the image processing apparatus according to this aspect of the disclosure, the image segmentation unit divides the image to be processed into N × N sub-images; the first neural network unit performs the first processing on each of the N × N sub-images to extract N × N M-dimensional feature vectors, where M is the number of classes used as image processing results; and the second neural network unit concatenates the N × N M-dimensional feature vectors and performs the second processing to obtain one M-dimensional feature vector, the class corresponding to the maximum probability in that M-dimensional feature vector being taken as the image processing result.
Furthermore, the image processing apparatus according to this aspect of the disclosure further comprises a training unit for training the first neural network and the second neural network with multiple annotated training images, each of which is labeled with its corresponding class.
According to yet another aspect of the present disclosure, an electronic device is provided, comprising: a processor; and a memory for storing computer program instructions, wherein, when the computer program instructions are loaded and run by the processor, the processor performs the cell detection method and the image processing method described above.
According to yet another aspect of the present disclosure, a computer-readable storage medium is provided, which stores computer program instructions, wherein, when the computer program instructions are loaded and run by a processor, the processor performs the cell detection method and the image processing method described above.
As will be described in detail below, according to embodiments of the present disclosure, the cell detection method based on a medically stained image under a microscope, the neural-network-based image processing method, the intelligent microscope system, the neural-network-based image processing apparatus, the electronic device, and the computer-readable storage medium divide the image to be processed into multiple sub-images, output the corresponding feature vector for each sub-image with the first neural network, concatenate all the feature vectors, and finally output an image processing result such as a class label with the second neural network. In this way, a classification result (for example, the cell tissue grading indicated by a HER2 image) can be output in real time for a high-resolution image to be processed (for example, a 2048 × 2048 pixel HER2 image under the microscope field of view). Because the image to be processed is not down-sampled, no loss of image processing precision is incurred; and because no additional annotation of each sub-image is needed during training, the annotation burden for training images and the complexity of training and detection are reduced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and are intended to provide further explanation of the claimed technology.
Brief description of the drawings
The above and other objects, features, and advantages of the present disclosure will become more apparent from the following detailed description of embodiments taken in conjunction with the accompanying drawings. The drawings are provided for further understanding of the embodiments, constitute a part of the specification, serve to explain the disclosure together with the embodiments, and do not limit the disclosure. In the drawings, identical reference labels generally denote identical components or steps.
Fig. 1 is a schematic diagram illustrating an intelligent microscope system according to an embodiment of the present disclosure;
Fig. 2 is a flow chart illustrating a cell detection method based on a medically stained image under a microscope according to an embodiment of the present disclosure;
Figs. 3A and 3B are schematic diagrams illustrating the cell detection method based on a medically stained image under a microscope according to an embodiment of the present disclosure;
Fig. 4 is a flow chart illustrating a neural-network-based image processing method according to an embodiment of the present disclosure;
Fig. 5 is a flow chart further illustrating the neural-network-based image processing method according to an embodiment of the present disclosure;
Fig. 6 is a functional block diagram illustrating a neural-network-based image processing apparatus according to an embodiment of the present disclosure;
Fig. 7 is a hardware block diagram illustrating an electronic device according to an embodiment of the present disclosure; and
Fig. 8 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure.
Detailed description of embodiments
In order to make the objects, technical solutions, and advantages of the present disclosure clearer, example embodiments of the disclosure are described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the disclosure, and it should be understood that the disclosure is not limited by the example embodiments described herein.
The solutions provided in the embodiments of the present application relate to neural-network-based computer vision in the field of artificial intelligence and are explained through the following embodiments.
First, an application scenario of the present disclosure is described schematically with reference to Fig. 1, which is a schematic diagram illustrating an intelligent microscope system according to an embodiment of the present disclosure.
As shown in Fig. 1, an intelligent microscope system 1 according to an embodiment of the present disclosure includes a microscope unit 10, a camera unit 20, and a processing unit 30. The microscope unit 10, camera unit 20, and processing unit 30 may be located at the same physical site, or even belong to the same physical device. Alternatively, they may be located at different sites and connected by a wired or wireless communication network so as to transmit data or commands between one another.
Specifically, the microscope unit 10 is used to observe medical images, which include, but are not limited to, medically stained images. More specifically, the medically stained image is a human epidermal growth factor receptor 2 (HER2) image under the microscope field of view. HER2 immunohistochemical staining stains the cell membranes of a tissue slice brown and the cell nuclei blue, reflecting the degree of HER2 expression in the tissue. HER2 slices are generally graded into four classes: 0, 1+, 2+, and 3+. A physician is usually required to judge the HER2 slice grade manually under a medium- to high-power microscope field (20x or 40x).
The camera unit 20 is used to capture the medically stained image 100 under the microscope unit 10.
The processing unit 30 obtains the cell detection result of the medically stained image 100, captured by the camera unit 20 under the microscope unit 10, by performing the cell detection method based on a medically stained image under a microscope that will be described in detail later. Where the medically stained image is a HER2 image under the microscope field of view, the cell detection result is the cell tissue grading result indicated by the HER2 image. The processing unit 30 is, for example, a server, a graphics workstation, or a personal computer. As described below, the processing unit 30 can use a neural network model configured in it to perform feature extraction on the medically stained image 100 captured by the camera unit 20 under the microscope unit 10, and to generate an image processing result based on the extracted features. Training must be performed in advance with training images annotated by physicians with cell tissue grading results, in order to obtain the neural network model configured in the processing unit 30.
Unlike a simple microscope system, in which a physician must judge the medically stained image manually to provide the cell tissue grade, the intelligent microscope system 1 according to an embodiment of the present disclosure uses a pre-trained neural network model to provide, automatically and in real time, the cell tissue grade for the medically stained image 100 captured by the camera unit 20 under the microscope unit 10. As will be described in detail below, the intelligent microscope system 1 can output cell tissue grading results in real time even for a high-resolution HER2 image under the microscope field of view (for example, at 2048 × 2048 pixel resolution). Because the image to be processed is not down-sampled, no loss of image processing precision is incurred; and because each sub-image does not need to be annotated separately during training, the annotation burden for training images and the complexity of training and detection are reduced.
The cell detection method based on a medically stained image under a microscope according to an embodiment of the present disclosure is described in further detail below with reference to Figs. 2 to 3B. Fig. 2 is a flow chart illustrating the method, and Figs. 3A and 3B are schematic diagrams illustrating it.
As shown in Fig. 2, the cell detection method based on a medically stained image under a microscope according to an embodiment of the present disclosure includes the following steps.
In step S201, the medically stained image is acquired. In one embodiment of the present disclosure, the medically stained image 100 is a HER2 image under the microscope field of view.
In step S202, the medically stained image is divided into multiple sub-images. As shown in Fig. 3A, because the medically stained image 100 is very large, a conventional convolutional neural network model cannot easily be applied to it directly; and if the large medically stained image 100 were simply down-sampled, the loss of image precision would be too high and the processing result could not be guaranteed. Therefore, the cell detection method according to an embodiment of the present disclosure first divides the medically stained image 100 into multiple sub-images. Specifically, the medically stained image can be divided into N × N sub-images. More specifically, as shown in Fig. 3B, the medically stained image is divided into 2 × 2 sub-images P1-P4, i.e., N = 2.
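As an illustrative aside (a sketch, not part of the disclosure), the 2 × 2 division of a 2048 × 2048 image into P1-P4 can be done in numpy with a single reshape/transpose, avoiding a Python loop over patches:

```python
import numpy as np

rng = np.random.default_rng(0)
# A 2048 x 2048 stained image with 3 colour channels, as in the example.
img = rng.integers(0, 256, size=(2048, 2048, 3), dtype=np.uint8)

N = 2                                  # divide into N x N = 2 x 2 sub-images
h, w = img.shape[0] // N, img.shape[1] // N

# Split the row and column axes into (block, offset) pairs, then bring the
# two block axes to the front: one array holding all N*N sub-images.
patches = (img.reshape(N, h, N, w, 3)
              .transpose(0, 2, 1, 3, 4)
              .reshape(N * N, h, w, 3))

# patches[0..3] are P1..P4 in row-major order; e.g. patches[3] is the
# bottom-right quadrant of the original image.
```

The same indexing convention (row-major P1-P4) fixes the order in which the feature vectors are later concatenated, which matters because the second network's weights are position-dependent.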
In step S203, first processing is performed on each of the multiple sub-images using the first neural network, to extract multiple sub-image features. In one embodiment of the present disclosure, the first neural network 301 is a convolutional neural network, with a network structure such as, but not limited to, VGG, Inception-v3, or ResNet. Specifically, the first neural network 301 performs the first processing on each of the N × N sub-images to extract N × N M-dimensional feature vectors, where M is the number of cell tissue grades used as cell detection results. More specifically, as shown in Fig. 3B, the first neural network 301 performs the first processing on each of the 2 × 2 sub-images P1-P4 to extract 2 × 2 sub-image features 302, i.e., four 4-dimensional feature vectors f1-f4.
In step S204, a second neural network is used to perform second processing on the multiple sub-image features, so as to obtain the cell detection result of the stained medical image. In one embodiment of the present disclosure, the second neural network 303 is a fully connected neural network. Specifically, the second neural network 303 concatenates the N × N M-dimensional feature vectors and performs the second processing, so as to obtain a single M-dimensional feature vector, and takes the cell tissue grade corresponding to the maximum probability in the M-dimensional feature vector as the cell detection result. More specifically, as shown in Figure 3B, the second neural network 303 concatenates the 2 × 2 4-dimensional feature vectors and performs the second processing, so as to obtain one 4-dimensional feature vector, and takes the cell tissue grade corresponding to the maximum probability in that 4-dimensional feature vector as the cell detection result 200. The cell detection result 200 is the cell tissue grading result (one of 0, 1+, 2+, 3+) indicated by the human epidermal growth factor receptor 2 (HER2) image. Alternatively, the second neural network 303 can be a global max-pooling network or a global average-pooling network.
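The second-stage aggregation (concatenate the per-tile feature vectors, apply a fully connected layer, take the arg-max grade) can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the fully connected layer is a single random-weight affine map plus softmax standing in for the trained network 303, and the HER2 grade labels follow the 0/1+/2+/3+ example in the text.

```python
import numpy as np

M, N = 4, 2                       # 4 HER2 grades; 2 x 2 sub-images
GRADES = ["0", "1+", "2+", "3+"]  # cell tissue grades used as detection result

rng = np.random.default_rng(0)
f = [rng.standard_normal(M) for _ in range(N * N)]  # f1-f4 from the first network

# Second processing: concatenate the N*N M-dim vectors, apply one fully
# connected layer (illustrative random weights), softmax to probabilities.
x = np.concatenate(f)                    # shape (N*N*M,) = (16,)
W = rng.standard_normal((M, N * N * M))  # stand-in for trained weights
logits = W @ x                           # one M-dimensional vector
probs = np.exp(logits - logits.max())
probs /= probs.sum()

detection_result = GRADES[int(np.argmax(probs))]  # grade with max probability
print(detection_result)
```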
In the cell detection method based on a stained medical image under a microscope according to the embodiment of the present disclosure, described above with reference to Figures 2 to 3B, the pre-trained first and second neural networks make it possible to output cell tissue grading results in real time for, e.g., a high-resolution (for example, 2048 × 2048 pixel) HER2 image under the microscope's field of view. Moreover, because the image to be processed is not down-sampled, no loss of precision is introduced into the image processing.
Further, unlike approaches that judge a cell detection result separately for each of the multiple sub-images (P1-P4), which would require additionally labeling each of the multiple sub-images during training, the cell detection method according to the embodiment of the present disclosure does not need a label for each of the multiple sub-images (P1-P4); only the single large-size training image needs to be labeled. This reduces the labeling difficulty of training images as well as the complexity of training and detection.
Hereinafter, the neural-network-based image processing method according to the embodiment of the present disclosure will be further described with reference to Figures 4 and 5. The neural-network-based image processing method according to the embodiment of the present disclosure is not limited to cell detection in stained medical images under a microscope; rather, neural networks can be applied more generally to the processing of large-size images.
As shown in Figure 4, the neural-network-based image processing method according to the embodiment of the present disclosure includes the following steps.
In step S401, the first neural network and the second neural network are trained with multiple labeled training images. In one embodiment of the present disclosure, each of the multiple labeled training images is labeled with its corresponding class. That is, even though the first and second neural networks subsequently divide a large-size image to be processed while executing the image processing method, there is no need during training to additionally label each sub-image produced by the division; only the single large-size training image needs to be labeled.
In step S402, the image to be processed is divided into multiple sub-images. When the image to be processed is large (for example, 2048 × 2048 pixels), conventional convolutional neural network models are difficult to apply directly. And if the large-size image to be processed is simply down-sampled, the loss of image precision is too high and the processing result cannot be guaranteed. Therefore, the neural-network-based image processing method according to the embodiment of the present disclosure first divides the image to be processed into multiple sub-images.
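The precision argument can be made concrete with a toy example (an illustrative sketch, not part of the patent): naive strided down-sampling can drop a fine single-pixel detail entirely, while tiling preserves every pixel at full resolution.

```python
import numpy as np

img = np.zeros((2048, 2048))
img[1001, 1001] = 1.0  # a single-pixel detail (e.g. a tiny stained feature)

# Naive 4x down-sampling by striding: the detail can vanish entirely
down = img[::4, ::4]
print(down.max())  # 0.0, the feature was dropped

# Tiling keeps full resolution: every pixel survives in exactly one tile
tiles = [img[i * 1024:(i + 1) * 1024, j * 1024:(j + 1) * 1024]
         for i in range(2) for j in range(2)]
print(max(t.max() for t in tiles))  # 1.0, detail preserved in one tile
```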
In step S403, the first neural network performs the first processing on each of the multiple sub-images, so as to extract multiple sub-image features. In one embodiment of the present disclosure, the first neural network is a convolutional neural network, including but not limited to network structures such as VGG, Inception-v3, and ResNet.
In step S404, the second neural network performs the second processing on the multiple sub-image features, so as to obtain the image processing result corresponding to the image to be processed. In one embodiment of the present disclosure, the second neural network is a fully connected neural network. Alternatively, the second neural network can be a global max-pooling network or a global average-pooling network.
Fig. 5 is a flowchart further illustrating the neural-network-based image processing method according to the embodiment of the present disclosure.
In step S501, the same training process as in step S401 is executed, i.e., only the single large-size training image needs to be labeled.
In step S502, the image to be processed is divided into N × N sub-images. In the embodiment of the present disclosure, N may be a natural number greater than or equal to 2.
In step S503, the first neural network performs the first processing on each of the N × N sub-images, so as to extract N × N M-dimensional feature vectors, where M is the number of classes used as the image processing result.
In step S504, the second neural network concatenates the N × N M-dimensional feature vectors and performs the second processing, so as to obtain a single M-dimensional feature vector, and takes the class corresponding to the maximum probability in the M-dimensional feature vector as the image processing result.
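Steps S502 to S504 can be sketched end to end as follows (NumPy only; the "first network" here is a stand-in that mean-pools each tile into M dimensions rather than a real CNN, and the weight matrix is random; both are assumptions for illustration):

```python
import numpy as np

def first_processing(tile: np.ndarray, m: int) -> np.ndarray:
    """Stand-in for the first (convolutional) network: one m-dim feature per tile."""
    pooled = tile.mean()  # global mean pool as a toy feature extractor
    return np.full(m, pooled)

def second_processing(features: list, weights: np.ndarray) -> int:
    """Stand-in for the second (fully connected) network: concatenate the
    per-tile vectors, project to M logits, and take the arg-max class."""
    x = np.concatenate(features)
    return int(np.argmax(weights @ x))

N, M = 2, 4
rng = np.random.default_rng(1)
image = rng.random((512, 512))                      # large image to be processed
th, tw = image.shape[0] // N, image.shape[1] // N
tiles = [image[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
         for i in range(N) for j in range(N)]        # S502: N x N sub-images
features = [first_processing(t, M) for t in tiles]   # S503: N*N M-dim vectors
W = rng.standard_normal((M, N * N * M))              # illustrative trained weights
result = second_processing(features, W)              # S504: class index in [0, M)
print(result)
```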
Fig. 6 is a functional block diagram illustrating the neural-network-based image processing apparatus according to the embodiment of the present disclosure. As shown in Fig. 6, the image processing apparatus 60 according to the embodiment of the present disclosure includes: a training unit 601, an image segmentation unit 602, a first neural network unit 603, and a second neural network unit 604. Each of the above modules can execute the respective steps of the cell detection method and the image processing method according to the embodiments of the present disclosure described above with reference to Figures 2 to 5. Those skilled in the art will understand that these unit modules can be implemented in various ways, by hardware alone, by software alone, or by a combination thereof, and the present disclosure is not limited to any one of them.
The training unit 601 is configured to train the first neural network unit 603 and the second neural network unit 604 with multiple labeled training images. Each of the multiple labeled training images is labeled with its corresponding class.
The image segmentation unit 602 is configured to divide the image to be processed into multiple sub-images. Specifically, the image segmentation unit 602 divides the image to be processed into N × N sub-images. More specifically, when the image to be processed is a stained medical image, the stained medical image is divided into 2 × 2 sub-images P1-P4, i.e., N = 2.
The first neural network unit 603 is configured to perform the first processing on each of the multiple sub-images, so as to extract multiple sub-image features. Specifically, the first neural network unit performs the first processing on each of the N × N sub-images, so as to extract N × N M-dimensional feature vectors, where M is the number of classes used as the image processing result. More specifically, when the image to be processed is a stained medical image, the first neural network unit 603 performs the first processing on each of the 2 × 2 sub-images, so as to extract 2 × 2 sub-image features, i.e., 4-dimensional feature vectors f1-f4.
The second neural network unit 604 is configured to perform the second processing on the multiple sub-image features, so as to obtain the image processing result corresponding to the image to be processed. Specifically, the second neural network unit 604 concatenates the N × N M-dimensional feature vectors and performs the second processing, so as to obtain a single M-dimensional feature vector, and takes the class corresponding to the maximum probability in the M-dimensional feature vector as the image processing result. More specifically, when the image to be processed is a stained medical image, the second neural network unit 604 concatenates the 2 × 2 4-dimensional feature vectors and performs the second processing, so as to obtain one 4-dimensional feature vector, and takes the cell tissue grade corresponding to the maximum probability in that 4-dimensional feature vector as the cell detection result.
Fig. 7 is a hardware block diagram illustrating the electronic device 700 according to the embodiment of the present disclosure. The electronic device according to the embodiment of the present disclosure includes at least a processor and a memory for storing computer program instructions. When the computer program instructions are loaded and run by the processor, the processor executes the cell detection method and the image processing method described above.
The electronic device 700 shown in Fig. 7 specifically includes: a central processing unit (CPU) 701, a graphics processing unit (GPU) 702, and a main memory 703. These units are interconnected by a bus 704. The central processing unit (CPU) 701 and/or the graphics processing unit (GPU) 702 may serve as the above-mentioned processor, and the main memory 703 may serve as the above-mentioned memory storing the computer program instructions. In addition, the electronic device 700 may further include a communication unit 705, a storage unit 706, an output unit 707, an input unit 708, and an external device 709, which are also connected to the bus 704.
Fig. 8 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure. As shown in Fig. 8, the computer-readable storage medium 800 according to the embodiment of the present disclosure has computer program instructions 801 stored thereon. When the computer program instructions 801 are run by a processor, the cell detection method and the image processing method according to the embodiments of the present disclosure described with reference to the above figures are executed. The computer-readable storage medium includes but is not limited to, for example, volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory (cache). The non-volatile memory may include, for example, read-only memory (ROM), hard disks, flash memory, optical discs, magnetic disks, and the like.
In the above, the cell detection method based on a stained medical image under a microscope, the neural-network-based image processing method, the intelligent microscope system, the neural-network-based image processing apparatus, the electronic device, and the computer-readable storage medium according to the embodiments of the present disclosure have been described with reference to the accompanying drawings. By dividing the image to be processed into multiple sub-images, having the first neural network output the respective feature vector of each sub-image, and concatenating all the feature vectors, the second neural network finally outputs an image processing result such as a class label. In this way, a classification result (for example, the cell tissue grading result indicated by a HER2 image) can be output in real time for a high-resolution image to be processed (for example, a 2048 × 2048 pixel HER2 image under the microscope's field of view). Because the image to be processed is not down-sampled, no loss of precision is introduced into the image processing; and because each sub-image does not need to be additionally labeled during training, the labeling difficulty of training images and the complexity of training and detection are reduced.
Those of ordinary skill in the art will realize that the units and algorithm steps described in conjunction with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each specific application, but such implementation should not be considered to go beyond the scope of the present invention.
The basic principles of the present disclosure have been described above in conjunction with specific embodiments. However, it should be noted that the merits, advantages, effects, and the like mentioned in the present disclosure are merely exemplary and not limiting, and it must not be assumed that these merits, advantages, effects, and the like are prerequisites for every embodiment of the present disclosure. In addition, the specific details disclosed above are merely for the purpose of illustration and ease of understanding, rather than limitation; the above details are not intended to restrict the present disclosure to being implemented using those specific details.
The block diagrams of devices, apparatuses, equipment, and systems involved in the present disclosure are merely illustrative examples and are not intended to require or imply that connection, arrangement, or configuration must be performed in the manner shown in the block diagrams. As those skilled in the art will recognize, these devices, apparatuses, equipment, and systems can be connected, arranged, and configured in any way. Words such as "include", "comprise", and "have" are open-ended terms that mean "including but not limited to" and can be used interchangeably with it. The words "or" and "and" used herein refer to "and/or" and can be used interchangeably with it, unless the context clearly indicates otherwise. The words "such as" used herein refer to the phrase "such as, but not limited to" and can be used interchangeably with it.
In addition, as used herein, the "or" used in an enumeration of items beginning with "at least one" indicates a disjunctive enumeration, so that an enumeration such as "at least one of A, B, or C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C). In addition, the word "exemplary" does not mean that the described example is preferred or better than other examples.
It may also be noted that in the systems and methods of the present disclosure, each component or each step can be decomposed and/or recombined. Such decompositions and/or recombinations should be regarded as equivalent solutions of the present disclosure.
Various changes, substitutions, and alterations can be made to the techniques described herein without departing from the teachings defined by the appended claims. Furthermore, the scope of the claims of the present disclosure is not limited to the specific aspects of the processes, machines, manufactures, compositions of matter, means, methods, and actions described above. Processes, machines, manufactures, compositions of matter, means, methods, or actions that currently exist or are later developed and that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein can be utilized. Thus, the appended claims include within their scope such processes, machines, manufactures, compositions of matter, means, methods, or actions.
The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects are readily apparent to those skilled in the art, and the general principles defined herein can be applied to other aspects without departing from the scope of the present disclosure. Therefore, the present disclosure is not intended to be limited to the aspects shown herein, but accords with the widest scope consistent with the principles and novel features disclosed herein.
The above description has been presented for purposes of illustration and description. Furthermore, this description is not intended to restrict the embodiments of the present disclosure to the forms disclosed herein. Although multiple exemplary aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions, and sub-combinations thereof.
Claims (15)
1. A cell detection method based on a stained medical image under a microscope, comprising:
obtaining the stained medical image;
dividing the stained medical image into multiple sub-images;
performing, using a first neural network, first processing on each of the multiple sub-images to extract multiple sub-image features; and
performing, using a second neural network, second processing on the multiple sub-image features to obtain a cell detection result of the stained medical image.
2. The cell detection method according to claim 1, wherein the stained medical image is a human epidermal growth factor receptor 2 (HER2) image under the microscope's field of view, and the cell detection result is the cell tissue grading result indicated by the HER2 image.
3. The cell detection method according to claim 1 or 2, wherein the first neural network is a convolutional neural network, and the second neural network is a fully connected neural network.
4. The cell detection method according to claim 3, wherein
said dividing the stained medical image into multiple sub-images comprises: dividing the stained medical image into N × N sub-images;
said performing, using the first neural network, the first processing on each of the multiple sub-images to extract multiple sub-image features comprises: performing, using the first neural network, the first processing on each of the N × N sub-images to extract N × N M-dimensional feature vectors, where M is the number of cell tissue grades used as the cell detection result; and
said performing, using the second neural network, the second processing on the multiple sub-image features to obtain the cell detection result of the stained medical image comprises: concatenating the N × N M-dimensional feature vectors and performing the second processing to obtain a single M-dimensional feature vector, and taking the cell tissue grade corresponding to the maximum probability in the M-dimensional feature vector as the cell detection result.
5. The cell detection method according to any one of claims 1 to 4, further comprising:
training the first neural network and the second neural network with multiple labeled stained medical training images,
wherein each of the multiple labeled stained medical training images is labeled with its corresponding cell tissue grading result.
6. A neural-network-based image processing method, comprising:
dividing an image to be processed into multiple sub-images;
performing, using a first neural network, first processing on each of the multiple sub-images to extract multiple sub-image features; and
performing, using a second neural network, second processing on the multiple sub-image features to obtain an image processing result corresponding to the image to be processed.
7. The image processing method according to claim 6, wherein the first neural network is a convolutional neural network, and the second neural network is a fully connected neural network.
8. The image processing method according to claim 7, wherein
said dividing the image to be processed into multiple sub-images comprises: dividing the image to be processed into N × N sub-images;
said performing, using the first neural network, the first processing on each of the multiple sub-images to extract multiple sub-image features comprises: performing, using the first neural network, the first processing on each of the N × N sub-images to extract N × N M-dimensional feature vectors, where M is the number of classes used as the image processing result; and
said performing, using the second neural network, the second processing on the multiple sub-image features to obtain the image processing result corresponding to the image to be processed comprises: concatenating the N × N M-dimensional feature vectors and performing the second processing to obtain a single M-dimensional feature vector, and taking the class corresponding to the maximum probability in the M-dimensional feature vector as the image processing result.
9. The image processing method according to any one of claims 6 to 8, further comprising:
training the first neural network and the second neural network with multiple labeled training images,
wherein each of the multiple labeled training images is labeled with its corresponding class.
10. An intelligent microscope system, comprising:
a microscope unit for observing a stained medical image;
a camera unit for capturing the stained medical image under the microscope unit; and
a processing unit for executing the cell detection method according to any one of claims 1 to 5 based on the stained medical image under the microscope unit.
11. A neural-network-based image processing apparatus, comprising:
an image segmentation unit for dividing an image to be processed into multiple sub-images;
a first neural network unit for performing first processing on each of the multiple sub-images to extract multiple sub-image features; and
a second neural network unit for performing second processing on the multiple sub-image features to obtain an image processing result corresponding to the image to be processed.
12. The image processing apparatus according to claim 11, wherein the first neural network unit is a convolutional neural network unit, and the second neural network unit is a fully connected neural network unit.
13. The image processing apparatus according to claim 12, wherein
the image segmentation unit divides the image to be processed into N × N sub-images;
the first neural network unit performs the first processing on each of the N × N sub-images to extract N × N M-dimensional feature vectors, where M is the number of classes used as the image processing result; and
the second neural network unit concatenates the N × N M-dimensional feature vectors and performs the second processing to obtain a single M-dimensional feature vector, and takes the class corresponding to the maximum probability in the M-dimensional feature vector as the image processing result.
14. The image processing apparatus according to any one of claims 11 to 13, further comprising:
a training unit for training the first neural network and the second neural network with multiple labeled training images,
wherein each of the multiple labeled training images is labeled with its corresponding class.
15. An electronic device, comprising:
a processor; and
a memory for storing computer program instructions,
wherein, when the computer program instructions are loaded and run by the processor, the processor executes the cell detection method according to any one of claims 1 to 5 and the image processing method according to any one of claims 6 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910684849.8A CN110390676A (en) | 2019-07-26 | 2019-07-26 | The cell detection method of medicine dye image, intelligent microscope system under microscope |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910684849.8A CN110390676A (en) | 2019-07-26 | 2019-07-26 | The cell detection method of medicine dye image, intelligent microscope system under microscope |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110390676A true CN110390676A (en) | 2019-10-29 |
Family
ID=68287517
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910684849.8A Pending CN110390676A (en) | 2019-07-26 | 2019-07-26 | The cell detection method of medicine dye image, intelligent microscope system under microscope |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110390676A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110853022A (en) * | 2019-11-14 | 2020-02-28 | 腾讯科技(深圳)有限公司 | Pathological section image processing method, device and system and storage medium |
CN112037173A (en) * | 2020-08-04 | 2020-12-04 | 湖南自兴智慧医疗科技有限公司 | Chromosome detection method and device and electronic equipment |
CN112862742A (en) * | 2019-11-27 | 2021-05-28 | 静宜大学 | Artificial intelligent cell detection method and system using photodynamic technology |
CN113344928A (en) * | 2021-08-06 | 2021-09-03 | 深圳市瑞图生物技术有限公司 | Model training and using method, device, detector and storage medium |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107092870A (en) * | 2017-04-05 | 2017-08-25 | 武汉大学 | A kind of high resolution image semantics information extracting method and system |
US20170249548A1 (en) * | 2016-02-26 | 2017-08-31 | Google Inc. | Processing cell images using neural networks |
CN107358262A (en) * | 2017-07-13 | 2017-11-17 | 京东方科技集团股份有限公司 | The sorting technique and sorter of a kind of high-definition picture |
US20180137338A1 (en) * | 2016-11-16 | 2018-05-17 | The Governing Council Of The University Of Toronto | System and method for classifying and segmenting microscopy images with deep multiple instance learning |
CN108268890A (en) * | 2017-12-28 | 2018-07-10 | 南京信息工程大学 | A kind of hyperspectral image classification method |
CN109272511A (en) * | 2018-08-27 | 2019-01-25 | 温州大学激光与光电智能制造研究院 | The smoke detecting apparatus of light network based on piecemeal |
CN109271870A (en) * | 2018-08-21 | 2019-01-25 | 平安科技(深圳)有限公司 | Pedestrian recognition methods, device, computer equipment and storage medium again |
CN109271992A (en) * | 2018-09-26 | 2019-01-25 | 上海联影智能医疗科技有限公司 | A kind of medical image processing method, system, device and computer readable storage medium |
CN109272492A (en) * | 2018-08-24 | 2019-01-25 | 深思考人工智能机器人科技(北京)有限公司 | A kind of processing method and system of cell pathology smear |
2019-07-26: CN CN201910684849.8A patent/CN110390676A/en (active, Pending)
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170249548A1 (en) * | 2016-02-26 | 2017-08-31 | Google Inc. | Processing cell images using neural networks |
CN108885682A (en) * | 2016-02-26 | 2018-11-23 | 谷歌有限责任公司 | Use Processing with Neural Network cell image |
US20180137338A1 (en) * | 2016-11-16 | 2018-05-17 | The Governing Council Of The University Of Toronto | System and method for classifying and segmenting microscopy images with deep multiple instance learning |
CN107092870A (en) * | 2017-04-05 | 2017-08-25 | 武汉大学 | A kind of high resolution image semantics information extracting method and system |
CN107358262A (en) * | 2017-07-13 | 2017-11-17 | 京东方科技集团股份有限公司 | The sorting technique and sorter of a kind of high-definition picture |
CN108268890A (en) * | 2017-12-28 | 2018-07-10 | 南京信息工程大学 | A kind of hyperspectral image classification method |
CN109271870A (en) * | 2018-08-21 | 2019-01-25 | 平安科技(深圳)有限公司 | Pedestrian recognition methods, device, computer equipment and storage medium again |
CN109272492A (en) * | 2018-08-24 | 2019-01-25 | 深思考人工智能机器人科技(北京)有限公司 | A kind of processing method and system of cell pathology smear |
CN109272511A (en) * | 2018-08-27 | 2019-01-25 | 温州大学激光与光电智能制造研究院 | The smoke detecting apparatus of light network based on piecemeal |
CN109271992A (en) * | 2018-09-26 | 2019-01-25 | 上海联影智能医疗科技有限公司 | A kind of medical image processing method, system, device and computer readable storage medium |
Non-Patent Citations (1)
Title |
---|
杨金鑫等: "结合卷积神经网络和超像素聚类的细胞图像分割方法", 《计算机应用研究》 * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110853022A (en) * | 2019-11-14 | 2020-02-28 | 腾讯科技(深圳)有限公司 | Pathological section image processing method, device and system and storage medium |
CN110853022B (en) * | 2019-11-14 | 2020-11-06 | 腾讯科技(深圳)有限公司 | Pathological section image processing method, device and system and storage medium |
US11967069B2 (en) | 2019-11-14 | 2024-04-23 | Tencent Technology (Shenzhen) Company Limited | Pathological section image processing method and apparatus, system, and storage medium |
CN112862742A (en) * | 2019-11-27 | 2021-05-28 | 静宜大学 | Artificial intelligent cell detection method and system using photodynamic technology |
CN112037173A (en) * | 2020-08-04 | 2020-12-04 | 湖南自兴智慧医疗科技有限公司 | Chromosome detection method and device and electronic equipment |
CN112037173B (en) * | 2020-08-04 | 2024-04-05 | 湖南自兴智慧医疗科技有限公司 | Chromosome detection method and device and electronic equipment |
CN113344928A (en) * | 2021-08-06 | 2021-09-03 | 深圳市瑞图生物技术有限公司 | Model training and using method, device, detector and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110390676A (en) | The cell detection method of medicine dye image, intelligent microscope system under microscope | |
CN110348387B (en) | Image data processing method, device and computer readable storage medium | |
CN110853022B (en) | Pathological section image processing method, device and system and storage medium | |
Wang et al. | Mitosis detection in breast cancer pathology images by combining handcrafted and convolutional neural network features | |
CN110472737B (en) | Training method and device for neural network model and medical image processing system | |
CN105122308B (en) | System and method for using the multichannel biological marker of the structural unicellular division of continuous dyeing quantitative | |
CN110309856A (en) | Image classification method, the training method of neural network and device | |
CN109558864A (en) | Face critical point detection method, apparatus and storage medium | |
CN109583297A (en) | Retina OCT volume data identification method and device | |
CN109657583A (en) | Face's critical point detection method, apparatus, computer equipment and storage medium | |
CN109117773A (en) | A kind of characteristics of image point detecting method, terminal device and storage medium | |
CN110570352A (en) | image labeling method, device and system and cell labeling method | |
CN112581438A (en) | Slice image recognition method and device, storage medium and electronic equipment | |
CN109670423A (en) | A kind of image identification system based on deep learning, method and medium | |
CN114037907A (en) | Detection method and device for power transmission line, computer equipment and storage medium | |
CN112818821A (en) | Human face acquisition source detection method and device based on visible light and infrared light | |
CN114596584A (en) | Intelligent detection and identification method for marine organisms | |
CN108229281A (en) | The generation method and method for detecting human face of neural network, device and electronic equipment | |
CN110210574A (en) | Diameter radar image decomposition method, Target Identification Unit and equipment | |
Ozimek et al. | A space-variant visual pathway model for data efficient deep learning | |
CN109598201A (en) | Motion detection method, device, electronic equipment and readable storage medium storing program for executing | |
CN109948577A (en) | A kind of cloth recognition methods, device and storage medium | |
CN113139932B (en) | Deep learning defect image identification method and system based on ensemble learning | |
Visalatchi et al. | Intelligent Vision with TensorFlow using Neural Network Algorithms | |
CN114283178A (en) | Image registration method and device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |