CN111709911A - Ovarian follicle automatic counting method based on neural network - Google Patents

Ovarian follicle automatic counting method based on neural network

Info

Publication number
CN111709911A
CN111709911A (application CN202010418981.7A; granted publication CN111709911B)
Authority
CN
China
Prior art keywords
neural network
follicles
network
model
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010418981.7A
Other languages
Chinese (zh)
Other versions
CN111709911B (en
Inventor
蔡辉煌
程雨夏
吴卿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202010418981.7A priority Critical patent/CN111709911B/en
Publication of CN111709911A publication Critical patent/CN111709911A/en
Application granted granted Critical
Publication of CN111709911B publication Critical patent/CN111709911B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507Summing image-intensity values; Histogram projection analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20036Morphological image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an automatic ovarian follicle counting method based on a neural network. All data images are divided, using a random seed, into different training, verification and test sets for each run. The network parameters are continuously updated by training the neural network and validated through the IOU on the verification set, so that the optimal parameters of the network model are retained. The prediction image output by the neural network is thresholded and denoised to remove noise and converted into a grayscale image; follicles that touch each other are then separated using a distance-transform algorithm and the watershed algorithm; finally, the number of ovarian follicles is counted by connected-region analysis, realizing the automatic follicle counting function.

Description

Ovarian follicle automatic counting method based on neural network
Technical Field
The invention relates to the field of the watershed algorithm and deep learning with convolutional neural networks (CNN), in particular to a method for automatically counting the ovarian follicles of mice.
Background
The ovary is a complex endocrine organ and the important gonadal organ of female reproduction; its main functions are ovulation and the secretion of female hormones. The follicle plays a crucial part in the ovary and is composed of an oocyte and many small follicular cells. According to the changes in morphology and function during follicle development, the follicles in the ovary can be divided into several types, including primordial follicles, primary follicles, secondary follicles, pre-antral follicles, and tertiary follicles. In the absence of any drug, small follicles in the ovary of a normal woman gradually grow through the different developmental stages within one month, and one mature follicle is discharged in the ovulation phase. However, genetic mutations, toxins and some specific drugs affect follicles, so it is important to determine whether these effects promote or inhibit ovulation.
A convolutional neural network is composed of several convolutional layers and fully connected layers (corresponding to a classical neural network), together with activation layers and pooling layers. Compared with other deep learning structures, convolutional neural networks give better results in image segmentation and recognition.
The watershed algorithm is an image segmentation algorithm based on the analysis of geographic morphology: it simulates topographic structures (such as mountains, ravines and basins) to separate different objects.
Because of this importance, the present work focuses on microscopic images of the mouse ovary. The data set was obtained by staining and labeling mouse ovaries; the development of follicles in the mouse reproductive system is then studied and the number of each follicle type determined.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a neural network-based ovarian follicle automatic counting method.
In order to solve the problem, the invention is realized by the following technical scheme:
a follicle counting method based on a neural network and a watershed algorithm comprises the following steps:
step one: dividing all data into a training set, a verification set and a test set using a random seed; the data input to the neural network each time comprises an original picture and a labeled picture;
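By way of illustration only (this code is not part of the patent), the seeded split of step one can be sketched in Python; the 70/15/15 split ratios are an assumption, not taken from the source:

```python
import random

def split_dataset(samples, seed, ratios=(0.7, 0.15, 0.15)):
    """Shuffle with a fixed random seed, then cut into train/val/test.
    A different seed yields a different split, as the method requires."""
    items = list(samples)
    random.Random(seed).shuffle(items)   # deterministic for a given seed
    n = len(items)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train, val, test = split_dataset(range(100), seed=42)
```

Re-running with the same seed reproduces the same split, while a new seed regenerates all three sets.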
step two: when training the network model, processing the input pictures of the training set with a data enhancement method and converting the picture data into vector form;
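A minimal sketch (not from the patent) of the key point of step two: the image and its per-pixel label mask must receive identical random transforms so the annotation stays aligned. Only flips and multiples-of-90-degree rotations are shown; the full method also lists contrast, brightness, saturation, sharpening and normalization:

```python
import numpy as np

def augment(img, mask, rng):
    """Apply the same random flip/rotation to the image and its label
    mask so the per-pixel annotation stays aligned."""
    if rng.random() < 0.5:                  # random horizontal flip
        img, mask = img[:, ::-1], mask[:, ::-1]
    if rng.random() < 0.5:                  # random vertical flip
        img, mask = img[::-1, :], mask[::-1, :]
    k = int(rng.integers(0, 4))             # random multiple-of-90 rotation
    return np.rot90(img, k), np.rot90(mask, k)

img = np.arange(16).reshape(4, 4)
mask = (img % 2).astype(int)                # toy label mask tied to img
aug_img, aug_mask = augment(img, mask, np.random.default_rng(0))
```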
step three: loading the parameters of a pre-trained model to initialize the neural network, executing forward propagation to obtain an output vector O_i, and obtaining the classification result vector y_i of each pixel through the Log_Softmax function; the maximum weight in y_i gives the final class the network predicts for that pixel, the specific formula being y_i = log_softmax(O_i). The obtained classification result y_i and the current correct label value ŷ_i serve as the two inputs of the NLLLoss loss function, and the loss value is calculated; the error signal is propagated back to each layer, and the gradients of the network parameters are obtained through the derivative of each layer's function with respect to its parameters; a stochastic gradient descent optimizer then updates the network parameters that influence model training and model output so that they approach an optimal value and the loss function is minimized.
Step four: after each training, calculating the cross-over ratio of the verification set
Figure BDA0002496127480000022
And storing the maximum model parameter of IOU when the training times are up toStopping training at a certain number of times, and loading the model parameter with the largest IOU to obtain the final model of the neural network, wherein
Figure BDA0002496127480000023
Is a true value, and y is a predicted value;
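The IOU of step four, sketched for binary masks (illustrative only):

```python
import numpy as np

def iou(y_true, y_pred):
    """Intersection over union of two binary masks:
    |ŷ ∩ y| / |ŷ ∪ y|, defined as 1.0 when both masks are empty."""
    inter = np.logical_and(y_true, y_pred).sum()
    union = np.logical_or(y_true, y_pred).sum()
    return inter / union if union else 1.0

a = np.array([[1, 1, 0], [0, 1, 0]], bool)   # toy ground truth ŷ
b = np.array([[1, 0, 0], [0, 1, 1]], bool)   # toy prediction y
```

During training, the model checkpoint with the largest validation IOU would be kept.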
step five: the model obtained in step four is the optimal model. Input data are fed into the trained neural network to obtain an output in RGB format; the RGB image is converted to a grayscale image, a gray-level histogram is built to select thresholds that can separate the different follicle types, and the follicles are then partitioned according to these thresholds;
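A sketch of the graying and threshold partitioning of step five. The luminance weights and the example threshold values are assumptions for illustration, not values from the patent:

```python
import numpy as np

def to_gray(rgb):
    """Luminance grayscale conversion (ITU-R BT.601 weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def threshold_split(gray, thresholds):
    """Bucket pixels into bands delimited by sorted thresholds, so each
    follicle class (encoded as a gray band) gets its own binary mask."""
    return [(lo <= gray) & (gray < hi)
            for lo, hi in zip((0,) + thresholds, thresholds + (256,))]

gray = np.array([[10, 100], [200, 250]])
masks = threshold_split(gray, (50, 150))   # three bands: <50, 50-150, >=150
```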
step six: denoising the pictures obtained in step five, using the opening operation A ∘ B = (A ⊖ B) ⊕ B to eliminate small objects and the closing operation A • B = (A ⊕ B) ⊖ B to fill small cavities inside objects, where ⊕ and ⊖ denote dilation and erosion respectively;
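The opening and closing of step six can be built from dilation (⊕) and erosion (⊖). A minimal binary sketch with a 3x3 square structuring element (np.roll wraps at the image border, which is acceptable away from the edges; a production version would pad instead):

```python
import numpy as np

def dilate(img):
    """Binary dilation (⊕) with a 3x3 square structuring element."""
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out

def erode(img):
    """Binary erosion (⊖) with a 3x3 square structuring element."""
    out = np.ones_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out

def opening(img):
    """(A ⊖ B) ⊕ B: removes objects smaller than the element."""
    return dilate(erode(img))

def closing(img):
    """(A ⊕ B) ⊖ B: fills cavities smaller than the element."""
    return erode(dilate(img))
```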
step seven: through the distance transform D(X) = min ‖X − B_x‖, calculating for each foreground pixel the Euclidean distance to the nearest background pixel to obtain a Euclidean distance map, and then finding the final erosion point of each local region; the closer a pixel is to the center of a follicle, the larger its value. Here X is the target point and B_x is the background point closest to X;
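A brute-force sketch of the Euclidean distance transform of step seven (O(n^2) and purely illustrative; real pipelines use an optimized routine). It assumes the mask contains at least one background pixel:

```python
import numpy as np

def distance_transform(mask):
    """Euclidean distance from each foreground pixel to the nearest
    background pixel: D(X) = min over background B_x of ||X - B_x||."""
    bg = np.argwhere(~mask)                      # background coordinates
    out = np.zeros(mask.shape)
    for y, x in np.argwhere(mask):               # each foreground pixel X
        out[y, x] = np.sqrt(((bg - (y, x)) ** 2).sum(axis=1)).min()
    return out

mask = np.zeros((5, 5), bool)
mask[1:4, 1:4] = True                            # one 3x3 "follicle"
d = distance_transform(mask)                     # peaks at the center
```

The maximum of `d` sits at the follicle center, which is exactly the marker the watershed step needs.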
step eight: then taking the maximum of each local region as a flooding (water injection) point of the watershed, and expanding each marker's region from that point as far as possible, i.e. until the edge of the local region or the edge of another marker's region is reached, thereby segmenting follicles that touch each other;
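A simplified stand-in (an assumption, not the patent's exact algorithm) for marker-based watershed in step eight: each labeled seed grows outward by 4-neighborhood breadth-first search until it hits background or a competing region, so two touching objects end up with different labels:

```python
import numpy as np
from collections import deque

def grow_regions(mask, seeds):
    """Grow each labeled seed over the foreground mask (4-neighborhood
    BFS) until regions meet or reach background."""
    labels = np.zeros(mask.shape, int)
    q = deque()
    for lab, (y, x) in enumerate(seeds, start=1):
        labels[y, x] = lab
        q.append((y, x))
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                    and mask[ny, nx] and labels[ny, nx] == 0):
                labels[ny, nx] = labels[y, x]   # claimed by nearest seed
                q.append((ny, nx))
    return labels

strip = np.ones((1, 6), bool)                   # two "follicles" touching
labels = grow_regions(strip, seeds=[(0, 0), (0, 5)])
```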
step nine: setting a different area threshold for each follicle type; a region whose area in the picture is smaller than the given threshold is judged to be noise. The connected regions of the picture are counted and labeled by 4-neighborhood connected-component analysis; the number of connected regions is the number of follicles.
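The 4-neighborhood connected-component counting of step nine, with the area-based noise filter, can be sketched as follows (illustrative only):

```python
import numpy as np
from collections import deque

def count_follicles(mask, min_area):
    """Count 4-connected foreground regions, discarding regions smaller
    than min_area as noise, as step nine prescribes."""
    seen = np.zeros(mask.shape, bool)
    count = 0
    for y0, x0 in np.argwhere(mask):
        if seen[y0, x0]:
            continue
        area = 0
        q = deque([(y0, x0)])
        seen[y0, x0] = True
        while q:                                 # flood one component
            y, x = q.popleft()
            area += 1
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    q.append((ny, nx))
        if area >= min_area:                     # below threshold = noise
            count += 1
    return count

blobs = np.zeros((5, 5), bool)
blobs[0:2, 0:2] = True                           # one 4-pixel follicle
blobs[4, 4] = True                               # one 1-pixel speck
```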
Preferably, the labeled picture marks different follicle types with different colors.
Preferably, the data enhancement method comprises random horizontal flipping, random vertical flipping, 360-degree random rotation, and contrast, brightness, saturation, sharpening and normalization adjustments.
Compared with the prior art, the invention has the following effects:
1) The invention provides a novel ovarian follicle counting approach, combining a semantic-segmentation convolutional neural network with the watershed algorithm to count ovarian follicles for the first time.
2) The counts produced in this way are closer to the real data, with higher accuracy.
Drawings
FIG. 1 is an overall flow diagram of the present invention;
FIG. 2 is a diagram of the neural network architecture employed.
Detailed Description
The invention is further described with reference to the following detailed description and accompanying drawings:
FIG. 1 shows a novel ovarian follicle counting method based on a neural network and the watershed algorithm:
1) All data are divided into a training set, a verification set and a test set using a random seed. The data input to the neural network each time comprises an original picture and a labeled picture, where the labeled picture marks different follicle types with different colors.
2) When training the network model, the input pictures of the training set are processed with a data enhancement method (random horizontal flipping, random vertical flipping, 360-degree random rotation, contrast, brightness, saturation, sharpening and normalization) and the picture data are converted into vector form;
3) The parameters of the pre-trained model are loaded to initialize the neural network (as shown in FIG. 2), forward propagation is performed to obtain the output vector O_i, and the classification result vector y_i of each pixel is obtained through the Log_Softmax function; the maximum weight in y_i gives the final class the network predicts for that pixel, the specific formula being y_i = log_softmax(O_i). The obtained classification result y_i and the current correct label value ŷ_i serve as the two inputs of the NLLLoss loss function, and the loss value is calculated; the error signal is propagated back to each layer, and the gradients of the network parameters are obtained through the derivative of each layer's function with respect to its parameters; a stochastic gradient descent optimizer then updates the network parameters that influence model training and model output so that they approach an optimal value and the loss function is minimized.
4) After each training epoch, the intersection-over-union of the verification set, IOU = |ŷ ∩ y| / |ŷ ∪ y|, is calculated and the model parameters with the largest IOU are stored; training stops when the number of training iterations reaches a set number, and the model parameters with the largest IOU are loaded to obtain the final neural network model, where ŷ is the true value and y is the predicted value;
5) The model obtained in step four is the optimal model. Input data are fed into the trained neural network to obtain an output in RGB format; the RGB image is converted to a grayscale image, a gray-level histogram is built to select thresholds that can separate the different follicle types, and the follicles are then partitioned according to these thresholds;
6) The pictures obtained in step five are denoised: the opening operation A ∘ B = (A ⊖ B) ⊕ B eliminates small objects, and the closing operation A • B = (A ⊕ B) ⊖ B fills small cavities inside objects, where ⊕ and ⊖ denote dilation and erosion respectively;
7) Through the distance transform D(X) = min ‖X − B_x‖, the Euclidean distance from each foreground pixel to the nearest background pixel is calculated to obtain a Euclidean distance map, and the final erosion point of each local region is then found; the closer a pixel is to the center of a follicle, the larger its value. Here X is the target point and B_x is the background point closest to X;
8) The maximum of each local region is then taken as a flooding (water injection) point of the watershed, and each marker's region is expanded from that point as far as possible, i.e. until the edge of the local region or the edge of another marker's region is reached, thereby segmenting follicles that touch each other;
9) A different area threshold is set for each follicle type; a region whose area in the picture is smaller than the given threshold is judged to be noise. The connected regions of the picture are counted and labeled by 4-neighborhood connected-component analysis; the number of connected regions is the number of follicles.

Claims (3)

1. A method for automatically counting ovarian follicles based on a neural network is characterized by comprising the following steps:
the method comprises the following steps: dividing a data set into a training set, a verification set and a test set by using all data in a random seed mode; the data input by the neural network each time comprises an original picture and a marked picture;
step two: processing the pre-input pictures by using a data enhancement method for a training set when a network model is trained, and converting the picture data into a vector form;
step three: loading the parameters of a pre-trained model to initialize the neural network, executing forward propagation to obtain an output vector O_i, and obtaining the classification result vector y_i of each pixel through the Log_Softmax function; the maximum weight in y_i gives the final class predicted by the network for that pixel, the specific formula being y_i = log_softmax(O_i); taking the obtained classification result y_i and the current correct label value ŷ_i as the two inputs of the NLLLoss loss function and calculating the loss value; transmitting the error signal back to each layer and obtaining the gradients of the network parameters through the derivative of each layer's function with respect to its parameters; updating the network parameters that influence model training and model output with a stochastic gradient descent optimizer so that they approach an optimal value and the loss function is minimized;
step four: after each training epoch, calculating the intersection-over-union of the verification set, IOU = |ŷ ∩ y| / |ŷ ∪ y|, and storing the model parameters with the largest IOU; stopping training when the number of training iterations reaches a set number, and loading the model parameters with the largest IOU to obtain the final neural network model, where ŷ is the true value and y is the predicted value;
step five: the model obtained in step four is the optimal model; input data are fed into the trained neural network to obtain an output in RGB format; the RGB image is converted to a grayscale image, a gray-level histogram is built to select thresholds that can separate the different follicle types, and the follicles are then partitioned according to these thresholds;
step six: denoising the pictures obtained in step five, using the opening operation A ∘ B = (A ⊖ B) ⊕ B to eliminate small objects and the closing operation A • B = (A ⊕ B) ⊖ B to fill small cavities inside objects, where ⊕ and ⊖ denote dilation and erosion respectively;
step seven: through the distance transform D(X) = min ‖X − B_x‖, calculating for each foreground pixel the Euclidean distance to the nearest background pixel to obtain a Euclidean distance map, and then finding the final erosion point of each local region; the closer a pixel is to the center of a follicle, the larger its value, where X is the target point and B_x is the background point closest to X;
step eight: then taking the maximum of each local region as a flooding point of the watershed and expanding each marker's region from that point as far as possible, i.e. until the edge of the local region or the edge of another marker's region is reached, thereby segmenting follicles that touch each other;
step nine: setting a different area threshold for each follicle type, judging a region whose area in the picture is smaller than the given threshold to be noise, and counting and labeling the connected regions of the picture by 4-neighborhood connected-component analysis, the number of connected regions being the number of follicles.
2. The method for neural network-based ovarian follicle auto-counting according to claim 1, characterized in that: the marking picture marks different follicle types through different colors.
3. The method for neural network-based ovarian follicle auto-counting according to claim 1, characterized in that: the data enhancement method comprises random horizontal turning, random vertical turning, 360-degree random rotation, contrast, brightness, saturation, sharpening and standardization.
CN202010418981.7A 2020-05-18 2020-05-18 Automatic ovarian follicle counting method based on neural network Active CN111709911B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010418981.7A CN111709911B (en) 2020-05-18 2020-05-18 Automatic ovarian follicle counting method based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010418981.7A CN111709911B (en) 2020-05-18 2020-05-18 Automatic ovarian follicle counting method based on neural network

Publications (2)

Publication Number Publication Date
CN111709911A true CN111709911A (en) 2020-09-25
CN111709911B CN111709911B (en) 2023-05-05

Family

ID=72538005

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010418981.7A Active CN111709911B (en) 2020-05-18 2020-05-18 Automatic ovarian follicle counting method based on neural network

Country Status (1)

Country Link
CN (1) CN111709911B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113744184A (en) * 2021-07-27 2021-12-03 江苏农林职业技术学院 Snakehead ovum counting method based on image processing
CN114563869A (en) * 2022-01-17 2022-05-31 中国地质大学(武汉) Surface mount type mobile phone microscope detection system and microscopic result obtaining method thereof
CN117982214A (en) * 2024-04-07 2024-05-07 安徽医科大学第一附属医院 Auxiliary identification method and system for rapid egg picking

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108416755A (en) * 2018-03-20 2018-08-17 南昌航空大学 A kind of image de-noising method and system based on deep learning
CN109584251A (en) * 2018-12-06 2019-04-05 湘潭大学 A kind of tongue body image partition method based on single goal region segmentation
US20200043171A1 (en) * 2018-07-31 2020-02-06 Element Ai Inc. Counting objects in images based on approximate locations

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108416755A (en) * 2018-03-20 2018-08-17 南昌航空大学 A kind of image de-noising method and system based on deep learning
US20200043171A1 (en) * 2018-07-31 2020-02-06 Element Ai Inc. Counting objects in images based on approximate locations
CN109584251A (en) * 2018-12-06 2019-04-05 湘潭大学 A kind of tongue body image partition method based on single goal region segmentation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
田佳鹭; 邓立国: "Monkey image classification method based on improved VGG16" *
黄斌; 卢金金; 王建华; 吴星明; 陈伟海: "Object recognition algorithm based on deep convolutional neural networks" *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113744184A (en) * 2021-07-27 2021-12-03 江苏农林职业技术学院 Snakehead ovum counting method based on image processing
CN114563869A (en) * 2022-01-17 2022-05-31 中国地质大学(武汉) Surface mount type mobile phone microscope detection system and microscopic result obtaining method thereof
CN117982214A (en) * 2024-04-07 2024-05-07 安徽医科大学第一附属医院 Auxiliary identification method and system for rapid egg picking

Also Published As

Publication number Publication date
CN111709911B (en) 2023-05-05

Similar Documents

Publication Publication Date Title
CN109800778B (en) Faster RCNN target detection method based on difficultly-divided sample mining
CN111325152B (en) Traffic sign recognition method based on deep learning
CN111898406B (en) Face detection method based on focus loss and multitask cascade
CN111709911A (en) Ovarian follicle automatic counting method based on neural network
CN108805070A (en) A kind of deep learning pedestrian detection method based on built-in terminal
CN110532946B (en) Method for identifying axle type of green-traffic vehicle based on convolutional neural network
CN112287941B (en) License plate recognition method based on automatic character region perception
CN113096096B (en) Microscopic image bone marrow cell counting method and system fusing morphological characteristics
CN111833322B (en) Garbage multi-target detection method based on improved YOLOv3
CN111986126B (en) Multi-target detection method based on improved VGG16 network
CN111986125A (en) Method for multi-target task instance segmentation
CN108537168A (en) Human facial expression recognition method based on transfer learning technology
CN113420643A (en) Lightweight underwater target detection method based on depth separable cavity convolution
CN111127360A (en) Gray level image transfer learning method based on automatic encoder
CN111931572B (en) Target detection method for remote sensing image
CN117788957B (en) Deep learning-based qualification image classification method and system
CN113408524A (en) Crop image segmentation and extraction algorithm based on MASK RCNN
CN107871315B (en) Video image motion detection method and device
CN113221956A (en) Target identification method and device based on improved multi-scale depth model
CN114550134A (en) Deep learning-based traffic sign detection and identification method
CN114140485A (en) Method and system for generating cutting track of main root of panax notoginseng
CN117765480A (en) Method and system for early warning migration of wild animals along road
CN112926694A (en) Method for automatically identifying pigs in image based on improved neural network
CN112132207A (en) Target detection neural network construction method based on multi-branch feature mapping
CN113837236B (en) Method and device for identifying target object in image, terminal equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant