CN111310611A - Method for detecting cell visual field map and storage medium - Google Patents

Method for detecting cell visual field map and storage medium

Info

Publication number
CN111310611A
CN111310611A (application CN202010075316.2A; granted as CN111310611B)
Authority
CN
China
Prior art keywords
network
training
classification
model
visual field
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010075316.2A
Other languages
Chinese (zh)
Other versions
CN111310611B (en)
Inventor
张立箎
王乾
周明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN202010075316.2A priority Critical patent/CN111310611B/en
Publication of CN111310611A publication Critical patent/CN111310611A/en
Application granted granted Critical
Publication of CN111310611B publication Critical patent/CN111310611B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/698Matching; Classification
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for detecting a cell visual field map and a storage medium. A cascade network combining an end-to-end detection network with a classification network is trained, and the field-level abnormality features captured by the detection network are fed into the classification network, avoiding information loss. Because the two networks are trained simultaneously, the detection and classification networks supervise and reinforce each other, which preserves classification accuracy and reduces false positives in abnormality detection.

Description

Method for detecting cell visual field map and storage medium
Technical Field
The invention relates to the field of cell image detection, and in particular to a method for detecting a cell visual field map and a storage medium.
Background
In the prior art for detecting abnormal cell regions in an abnormal visual field map, the visual field map itself is not further classified; only the positions and class information of abnormal cells are obtained. A typical approach combines Faster R-CNN-style detection with R-FCN detection, merging the region regression and marker-box classification of the two networks. Specifically, for a visual field map, a feature extractor and an RPN (region proposal network) generate a set of candidate boxes, on the order of 2000 marker boxes, and the position-sensitive properties of R-FCN are then used to regress and classify the marker boxes to obtain the final detection result. However, the resulting detections are not applied to the final classification or diagnosis task. Moreover, the detections are inaccurate and contain false-positive results.
In addition, current abnormal-cell visual field map classification relies mainly on cell-level classification. The most advanced cell-classification method at present uses a graph-convolution-based model: DenseNet first extracts cell features, a K-Means method then clusters the cells, and graph convolution iteratively updates the features to produce the final representation used for the final cell classification.
The main purpose of the present invention is to solve the following three problems. 1. The detection result is not fused into the model that classifies the visual field map. The detection stage yields abnormality information about the visual field map, which carries prior information useful for judging the map, yet this information is excluded from the data stream of the classification decision, so that prior information is wasted and lost. 2. After the visual field map passes through the detection module, some regions the network considers abnormal are marked, but these may be disputed false-positive regions, i.e. normal regions mistakenly marked by the detection network, which clearly should not appear. For an abnormal visual field map, detecting some false-positive normal regions alongside the true abnormal regions does relatively little harm to the overall judgment; but if several, or even one, "abnormal" region appears in a normal visual field map, the judgment of that map becomes unreliable. Abnormal regions detected in a normal visual field map, where they should not appear, reduce the robustness of the criteria for judging whether a map is abnormal and greatly reduce the accuracy of both the classification network and the detection network. 3. Existing models do not adopt an end-to-end training method and do not let the network consider both tasks at once: the detection network is trained first and the classification network afterwards, so candidate-box generation and false-positive reduction cannot happen simultaneously. We want to train the two tasks together, so that the classification network constrains the detection network and the detection network helps the classification network.
Disclosure of Invention
In order to solve the above problems, the present invention provides a method for detecting a cell visual field map, comprising: an acquisition step, wherein cell visual field images are acquired and a sample set is produced;
a training step, wherein the sample set and classification labels are input and a cascade network is trained to obtain a cascade model, the cascade network consisting of a RetinaNet network and a CNN network, where the RetinaNet network is trained to obtain a detection model that outputs a region judgment result for the visual field map, and the CNN network is trained to obtain a classification model that outputs a classification result for the visual field map; and a classification step, wherein the visual field map to be recognized is input to the trained cascade network to obtain a classification result for the whole visual field map and a region judgment result for the visual field map.
Further, the training step includes a RetinaNet network training step, which specifically includes: a dividing step, wherein the sample set is divided into a training set and a test set; a first training step, wherein the training set is input and the RetinaNet network is trained to obtain a first network model and a plurality of feature maps; a first output step, wherein the test set is input to the first network model to obtain a first judgment result; and a first optimization step, wherein the first judgment result is compared with the correct result, the difference between them is computed and back-propagated, and the first network model is optimized to obtain the detection model.
Further, the training step includes a CNN network training step, which specifically includes: a second training step, wherein the feature maps and classification labels are input and the CNN network is trained to obtain a second network model; a second output step, wherein the test set is input and a second judgment result is output; and a second optimization step, wherein the second judgment result is compared with the correct result, the difference between them is computed and back-propagated, and the classification model of the second network model is optimized, the classification model and the detection model forming the cascade model.
Further, the RetinaNet network comprises a convolutional layer, a pooling layer, and an activation layer.
Further, the CNN network comprises a convolutional layer, a pooling layer, an activation layer, and a fully connected layer.
Further, in the classification step, the region judgment result for the visual field map includes the positions of a plurality of abnormal marker boxes and marker-box category information.
Further, in the first training step, the feature maps are obtained by the feature extraction network of the RetinaNet network.
Further, the feature extraction network comprises a feature pyramid network.
Further, the CNN network includes a ResNet network and a DenseNet network; the classification labels comprise normal visual field map and abnormal visual field map.
The invention also provides a storage medium storing a computer program for executing the above method for detecting a cell visual field map.
The beneficial effects of the invention are as follows: the invention provides a method for detecting a cell visual field map and a storage medium, which train a cascade network combining an end-to-end detection network with a classification network and feed the field-level abnormality features captured by the detection network into the classification network, avoiding information loss. Because the two networks are trained simultaneously, the detection and classification networks supervise and reinforce each other, which preserves classification accuracy and reduces false positives in abnormality detection.
Drawings
The technical solution and other advantages of the present invention will become apparent from the following detailed description of specific embodiments of the present invention, which is to be read in connection with the accompanying drawings.
FIG. 1 is a flow chart of the method for detecting a cell visual field map provided by the present invention.
Fig. 2 is a block diagram of a cascaded network provided by the present invention.
FIG. 3 is a flow chart of the detection steps provided by the present invention.
FIG. 4 is a flow chart of a second classification step provided by the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The following disclosure provides many different embodiments or examples for implementing different features of the invention. To simplify the disclosure of the present invention, the components and arrangements of specific examples are described below. Of course, they are merely examples and are not intended to limit the present invention. Furthermore, the present invention may repeat reference numerals and/or letters in the various examples; such repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. In addition, the present invention provides examples of various specific processes and materials, but one of ordinary skill in the art may recognize applications of other processes and/or uses of other materials.
As shown in FIG. 1, the invention provides a method for detecting a cell visual field map, which comprises steps S1 to S3.
S1, acquisition step: cell visual field images are acquired and a sample set is produced.
S2, training step: the sample set and classification labels are input, and the cascade network is trained to obtain a cascade model.
As shown in fig. 2, the cascade network is composed of a RetinaNet network (dashed box in fig. 2) and a CNN network. The RetinaNet network is trained to obtain a detection model that outputs a region judgment result for the visual field map; the CNN network is trained to obtain a classification model that outputs a classification result for the visual field map.
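The data flow of such a cascade can be sketched in plain Python. This is an illustrative stand-in, not the patented implementation: `detect`, `classify`, the toy feature computation, and the 0.5 threshold are all hypothetical. The point it demonstrates is that the detection stage hands its feature maps (not just its boxes) to the classification stage, so field-level information is not lost between the two branches:

```python
def detect(image):
    # Stand-in for the RetinaNet branch: produces candidate abnormal
    # marker boxes and the feature maps used to predict them.
    feature_maps = [[pixel * 0.5 for pixel in row] for row in image]
    boxes = [(0, 0, 2, 2, "abnormal")]  # (x1, y1, x2, y2, class)
    return boxes, feature_maps

def classify(feature_maps):
    # Stand-in for the CNN branch: global-average-pools the shared
    # features and thresholds them into a field-level label.
    pooled = sum(sum(row) for row in feature_maps) / (
        len(feature_maps) * len(feature_maps[0]))
    return "abnormal" if pooled > 0.5 else "normal"

def cascade(image):
    # The classifier consumes the detector's feature maps directly,
    # so no separate export of the detection result is needed.
    boxes, feats = detect(image)
    label = classify(feats)
    return label, boxes
```

A high-activation field is labeled abnormal and a flat one normal; in the real cascade both branches would of course be learned networks sharing convolutional features.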
The classification labels comprise normal visual field map and abnormal visual field map.
The training step comprises a RetinaNet network training step and a CNN network training step.
As shown in fig. 3, the RetinaNet network training step specifically includes steps S201 to S204.
S201, dividing step: the sample set is divided into a training set and a test set.
S202, first training step: the training set is input and the RetinaNet network is trained to obtain a first network model and a plurality of feature maps. The RetinaNet network includes a convolutional layer, a pooling layer, and an activation layer.
In the first training step, the feature maps are obtained by the feature extraction network of the RetinaNet network.
The feature extraction network comprises a feature pyramid network.
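A feature pyramid network fuses a bottom-up pathway of progressively coarser features with a top-down pathway in which coarse features are upsampled and merged back in. The following is a minimal 1-D sketch of that fusion pattern only; real FPNs operate on 2-D convolutional feature maps with learned lateral 1x1 convolutions, none of which are modeled here:

```python
def downsample(x):
    # 2x max-pooling on a 1-D feature row (bottom-up pathway).
    return [max(x[i], x[i + 1]) for i in range(0, len(x) - 1, 2)]

def upsample(x):
    # Nearest-neighbour 2x upsampling (top-down pathway).
    out = []
    for v in x:
        out += [v, v]
    return out

def feature_pyramid(x, levels=3):
    # Bottom-up pathway: progressively coarser feature rows.
    bottom_up = [x]
    for _ in range(levels - 1):
        bottom_up.append(downsample(bottom_up[-1]))
    # Top-down pathway: upsample the coarser level and fuse by addition,
    # so every output level carries both fine and coarse information.
    top_down = [bottom_up[-1]]
    for level in reversed(bottom_up[:-1]):
        up = upsample(top_down[0])
        fused = [a + b for a, b in zip(level, up)]
        top_down.insert(0, fused)
    return top_down
```

Each returned level has the resolution of its bottom-up counterpart but mixes in semantics from the coarser levels, which is what lets the detector find abnormal cells at multiple scales.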
S203, first output step: the test set is input to the first network model to obtain a first judgment result.
S204, first optimization step: the first judgment result is compared with the correct result, the difference between them is computed and back-propagated, and the first network model is optimized to obtain the detection model.
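The optimization step — compare the judgment result with the correct result, compute the difference, and propagate it backwards — is ordinary gradient descent. A one-parameter sketch (the linear stand-in model, squared-error loss, and learning rate are illustrative, not taken from the patent):

```python
def sgd_step(x, target, weight, lr=0.1):
    # Forward pass of a one-parameter stand-in model.
    prediction = weight * x
    # The "difference value" between judgment result and correct result.
    error = prediction - target
    loss = error ** 2
    # Back-propagate: gradient of the squared error w.r.t. the weight.
    grad_w = 2 * error * x
    return weight - lr * grad_w, loss
```

Repeating the step drives the loss down, which is the sense in which the difference, transmitted backwards, "optimizes the first network model".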
As shown in fig. 4, the training step further includes a CNN network training step, which specifically includes steps S301 to S303.
S301, second training step: the feature maps and classification labels are input and the CNN network is trained to obtain a second network model. The CNN network includes a convolutional layer, a pooling layer, an activation layer, and a fully connected layer.
The CNN network includes a ResNet network, of which the ResNet-50 network trains best. The CNN network further includes a DenseNet network.
S302, second output step: the test set is input and a second judgment result is output.
S303, second optimization step: the second judgment result is compared with the correct result, the difference between them is computed and back-propagated, and the classification model of the second network model is optimized; the classification model and the detection model form the cascade model.
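Training the two branches simultaneously means both losses back-propagate into the shared features in a single step. A toy sketch with linear stand-ins for both branches (all names, the squared-error losses, and the learning rate are illustrative assumptions):

```python
def joint_step(shared_w, x, det_target, cls_target, lr=0.05):
    # Shared feature computed once, consumed by both branches.
    feat = shared_w * x
    det_loss = (feat - det_target) ** 2   # detection-branch loss
    cls_loss = (feat - cls_target) ** 2   # classification-branch loss
    total = det_loss + cls_loss
    # Gradients from BOTH tasks flow into the shared weight; this is the
    # sense in which the detection and classification networks supervise
    # and promote each other during simultaneous training.
    grad = 2 * (feat - det_target) * x + 2 * (feat - cls_target) * x
    return shared_w - lr * grad, total
```

Training the branches sequentially would instead update the shared weight under one loss only, losing the mutual constraint the cascade relies on.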
S3, classification step: the visual field map to be recognized is input to the trained cascade network to obtain a classification result for the whole visual field map and a region judgment result for the visual field map.
In the classification step, the region judgment result for the visual field map includes the positions of a plurality of abnormal marker boxes and marker-box category information.
The present invention further provides a storage medium storing a computer program for executing the method for detecting a cell visual field map according to the present invention.
The invention provides a method for detecting a cell visual field map, which trains a cascade network combining an end-to-end detection network with a classification network and feeds the field-level abnormality features captured by the detection network into the classification network, avoiding information loss. Because the two networks are trained simultaneously, the detection and classification networks supervise and reinforce each other, which preserves classification accuracy and reduces false positives in abnormality detection.
The cascade network synchronously corrects the false-positive data produced by abnormality detection, reducing the rate at which normal visual field maps are flagged as abnormal. It is end-to-end, so the detection network's output does not need to be exported as input to the classification network, which is more efficient. Optimizing the detection result reduces false positives, lightens doctors' workload, and improves both the diagnostic efficiency for visual field maps and the use of medical resources. Reducing the detection rate of "abnormal" regions in normal visual field maps avoids re-examination of normal images, improves diagnostic accuracy, further lowers medical costs, and reduces the waste of social resources.
In practical application, for a visual field map whose classification result is normal, the detection network should produce no abnormal marker boxes. Previous models, however, could not guarantee this, because a detection network alone has no way to constrain its output on normal maps. With a classification network added, the output on normal visual field maps can be forcibly constrained, reducing the rate of abnormality detections on normal maps. For abnormal visual field maps, the added classification information likewise reinforces the generation of abnormal regions. In this way, end-to-end training of the detection network and the abnormality classification is realized.
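At inference time, the constraint just described can also be read as a simple filter: when the classification branch is confident the field is normal, any boxes the detector proposed are treated as false positives. A sketch of that filter (the probability interface and the 0.5 threshold are hypothetical; during training the same idea acts through the joint loss rather than a hard rule):

```python
def suppress_false_positives(boxes, prob_abnormal, threshold=0.5):
    # prob_abnormal: the classification branch's probability that the
    # whole visual field map is abnormal. On a confidently-normal map,
    # the detector's abnormal marker boxes are discarded.
    if prob_abnormal < threshold:
        return []
    return boxes
```

This is what "forcibly constraining the output on normal visual field maps" amounts to operationally: the classifier can veto the detector on normal maps while leaving abnormal maps untouched.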
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The principle and the implementation of the present invention are explained in the present text by applying specific examples, and the above description of the examples is only used to help understanding the technical solution and the core idea of the present invention; those of ordinary skill in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for detecting a cell field map, comprising:
an acquisition step, wherein cell visual field images are acquired and a sample set is produced;
a training step, wherein the sample set and classification labels are input and a cascade network is trained to obtain a cascade model, the cascade network consisting of a RetinaNet network and a CNN network, where the RetinaNet network is trained to obtain a detection model that outputs a region judgment result for the visual field map, and the CNN network is trained to obtain a classification model that outputs a classification result for the visual field map;
and a classification step, wherein the visual field map to be recognized is input to the trained cascade network to obtain a classification result for the whole visual field map and a region judgment result for the visual field map.
2. The method for detecting a cell visual field map according to claim 1, wherein
the training step comprises a RetinaNet network training step, which specifically comprises:
a dividing step, wherein the sample set is divided into a training set and a test set;
a first training step, wherein the training set is input and the RetinaNet network is trained to obtain a first network model and a plurality of feature maps;
a first output step, wherein the test set is input to the first network model to obtain a first judgment result;
and a first optimization step, wherein the first judgment result is compared with the correct result, the difference between them is computed and back-propagated, and the first network model is optimized to obtain the detection model.
3. The method for detecting a cell visual field map according to claim 2, wherein
the training step comprises a CNN network training step, which specifically comprises:
a second training step, wherein the feature maps and classification labels are input and the CNN network is trained to obtain a second network model;
a second output step, wherein the test set is input and a second judgment result is output;
and a second optimization step, wherein the second judgment result is compared with the correct result, the difference between them is computed and back-propagated, and the classification model of the second network model is optimized, the classification model and the detection model forming the cascade model.
4. The method for detecting a cell visual field map according to claim 2, wherein
the RetinaNet network comprises a convolutional layer, a pooling layer, and an activation layer.
5. The method for detecting a cell visual field map according to claim 3, wherein
the CNN network comprises a convolutional layer, a pooling layer, an activation layer, and a fully connected layer.
6. The method for detecting a cell visual field map according to claim 1, wherein,
in the classification step,
the region judgment result for the visual field map includes the positions of a plurality of abnormal marker boxes and marker-box category information.
7. The method for detecting a cell visual field map according to claim 2, wherein,
in the first training step, the feature maps are obtained by the feature extraction network of the RetinaNet network.
8. The method for detecting a cell visual field map according to claim 7, wherein
The feature extraction network comprises a feature pyramid network.
9. The method for detecting a cell visual field map according to claim 1, wherein
the CNN network comprises a ResNet network and a DenseNet network;
the classification labels comprise normal visual field map and abnormal visual field map.
10. A storage medium storing a computer program for executing the method for detecting a cell visual field map according to any one of claims 1 to 9.
CN202010075316.2A 2020-01-22 2020-01-22 Method for detecting cell view map and storage medium Active CN111310611B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010075316.2A CN111310611B (en) 2020-01-22 2020-01-22 Method for detecting cell view map and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010075316.2A CN111310611B (en) 2020-01-22 2020-01-22 Method for detecting cell view map and storage medium

Publications (2)

Publication Number Publication Date
CN111310611A true CN111310611A (en) 2020-06-19
CN111310611B CN111310611B (en) 2023-06-06

Family

ID=71161616

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010075316.2A Active CN111310611B (en) 2020-01-22 2020-01-22 Method for detecting cell view map and storage medium

Country Status (1)

Country Link
CN (1) CN111310611B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113838008A (en) * 2021-09-08 2021-12-24 江苏迪赛特医疗科技有限公司 Abnormal cell detection method based on attention-drawing mechanism
CN116977905A (en) * 2023-09-22 2023-10-31 杭州爱芯元智科技有限公司 Target tracking method, device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190065817A1 (en) * 2017-08-29 2019-02-28 Konica Minolta Laboratory U.S.A., Inc. Method and system for detection and classification of cells using convolutional neural networks
CN109448090A (en) * 2018-11-01 2019-03-08 北京旷视科技有限公司 Image processing method, device, electronic equipment and storage medium
CN110110799A (en) * 2019-05-13 2019-08-09 广州锟元方青医疗科技有限公司 Cell sorting method, device, computer equipment and storage medium
CN110210362A (en) * 2019-05-27 2019-09-06 中国科学技术大学 A kind of method for traffic sign detection based on convolutional neural networks
CN110287927A (en) * 2019-07-01 2019-09-27 西安电子科技大学 Based on the multiple dimensioned remote sensing image object detection method with context study of depth
CN110334565A (en) * 2019-03-21 2019-10-15 江苏迪赛特医疗科技有限公司 A kind of uterine neck neoplastic lesions categorizing system of microscope pathological photograph

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190065817A1 (en) * 2017-08-29 2019-02-28 Konica Minolta Laboratory U.S.A., Inc. Method and system for detection and classification of cells using convolutional neural networks
CN109448090A (en) * 2018-11-01 2019-03-08 北京旷视科技有限公司 Image processing method, device, electronic equipment and storage medium
CN110334565A (en) * 2019-03-21 2019-10-15 江苏迪赛特医疗科技有限公司 A kind of uterine neck neoplastic lesions categorizing system of microscope pathological photograph
CN110110799A (en) * 2019-05-13 2019-08-09 广州锟元方青医疗科技有限公司 Cell sorting method, device, computer equipment and storage medium
CN110210362A (en) * 2019-05-27 2019-09-06 中国科学技术大学 A kind of method for traffic sign detection based on convolutional neural networks
CN110287927A (en) * 2019-07-01 2019-09-27 西安电子科技大学 Based on the multiple dimensioned remote sensing image object detection method with context study of depth

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHUNFENG SONG et al.: "Mask-guided Contrastive Attention Model for Person Re-Identification", 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113838008A (en) * 2021-09-08 2021-12-24 江苏迪赛特医疗科技有限公司 Abnormal cell detection method based on attention-drawing mechanism
CN113838008B (en) * 2021-09-08 2023-10-24 江苏迪赛特医疗科技有限公司 Abnormal cell detection method based on attention-introducing mechanism
CN116977905A (en) * 2023-09-22 2023-10-31 杭州爱芯元智科技有限公司 Target tracking method, device, electronic equipment and storage medium
CN116977905B (en) * 2023-09-22 2024-01-30 杭州爱芯元智科技有限公司 Target tracking method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111310611B (en) 2023-06-06

Similar Documents

Publication Publication Date Title
US11836996B2 (en) Method and apparatus for recognizing text
CN114399769B (en) Training method of text recognition model, and text recognition method and device
WO2019095782A1 (en) Data sample label processing method and apparatus
CN112528963A (en) Intelligent arithmetic question reading system based on MixNet-YOLOv3 and convolutional recurrent neural network CRNN
CN110633610B (en) Student state detection method based on YOLO
CN113221743A (en) Table analysis method and device, electronic equipment and storage medium
CN116049397A (en) Sensitive information discovery and automatic classification method based on multi-mode fusion
CN111310611A (en) Method for detecting cell visual field map and storage medium
CN113205047A (en) Drug name identification method and device, computer equipment and storage medium
CN116823793A (en) Device defect detection method, device, electronic device and readable storage medium
CN114463686B (en) Moving target detection method and system based on complex background
CN115223166A (en) Picture pre-labeling method, picture labeling method and device, and electronic equipment
CN117313141A (en) Abnormality detection method, abnormality detection device, abnormality detection equipment and readable storage medium
CN112417974A (en) Public health monitoring method
CN110929013A (en) Image question-answer implementation method based on bottom-up entry and positioning information fusion
Zhang et al. Deep-learning generation of POI data with scene images
CN116912872A (en) Drawing identification method, device, equipment and readable storage medium
CN115984838A (en) POI name generation method and device, electronic equipment and storage medium
CN112784015B (en) Information identification method and device, apparatus, medium, and program
CN115544984A (en) Method, apparatus, device and medium for generating coverage group in integrated circuit verification environment
CN116778518A (en) Intelligent solving method and device for geometric topics, electronic equipment and storage medium
CN115186738A (en) Model training method, device and storage medium
CN113010647A (en) Corpus processing model training method and device, storage medium and electronic equipment
CN111291667A (en) Method for detecting abnormality in cell visual field map and storage medium
CN113806452A (en) Information processing method, information processing device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant