CN112330660A - Sperm tail detection method and system based on neural network - Google Patents

Sperm tail detection method and system based on neural network

Info

Publication number
CN112330660A
CN112330660A
Authority
CN
China
Prior art keywords: sperm, tail, head, image, point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011329591.9A
Other languages
Chinese (zh)
Other versions
CN112330660B (en)
Inventor
刘畅
李福平
侯苇
王梓名
余林
贾烨菻
钟正华
廖露
赵阳玫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Puhua Technology Co ltd
Original Assignee
Chengdu Puhua Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Puhua Technology Co ltd
Priority to CN202011329591.9A
Publication of CN112330660A
Application granted
Publication of CN112330660B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G06T 7/0012 Biomedical image inspection (G Physics > G06 Computing; calculating or counting > G06T Image data processing or generation, in general > G06T 7/00 Image analysis > G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06N 3/045 Combinations of networks (G06N Computing arrangements based on specific computational models > G06N 3/00 Biological models > G06N 3/02 Neural networks > G06N 3/04 Architecture, e.g. interconnection topology)
    • G06T 7/10 Segmentation; Edge detection (G06T 7/00 Image analysis)
    • G06T 2207/20081 Training; Learning (G06T 2207/00 Indexing scheme for image analysis or image enhancement > G06T 2207/20 Special algorithmic details)
    • G06T 2207/30024 Cell structures in vitro; Tissue sections in vitro (G06T 2207/30 Subject of image; context of image processing > G06T 2207/30004 Biomedical image processing)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a neural-network-based sperm tail detection method and system. The method comprises: collecting a sperm image; segmenting the sperm image through a first neural network unit to generate a segmented image; extracting sperm head coordinates and head direction data from the sperm image through a second neural network unit; extracting a sperm tail coordinate set from the segmented image based on the sperm head coordinates and head direction data; and generating a detection result containing sperm tail length information based on the sperm tail coordinate set and the segmented image. According to the invention, the head region image and tail region image of the sperm image are accurately extracted with neural network algorithms to improve segmentation precision, and the sperm tail coordinate set is generated by traversing the skeleton points of the skeleton map corresponding to the tail region image, so that accurate sperm tail length information is generated from the coordinate set and the segmented image. This improves sperm tail segmentation precision and solves the problem of low detection precision in traditional sperm tail detection methods.

Description

Sperm tail detection method and system based on neural network
Technical Field
The invention relates to the technical field of sperm detection, in particular to a sperm tail detection method and a sperm tail detection system based on a neural network.
Background
The traditional sperm detection method mainly stains the sperm with a dye, places the sperm slide under a microscope at 100x magnification and photographs it, performs sperm segmentation on the photograph with a traditional segmentation algorithm (for example, a texture-based one), and then judges from the segmentation result whether the sperm are normal. However, traditional segmentation algorithms rely on hand-designed features, and the diversity of sperm morphology images leads to low feature robustness and therefore low sperm segmentation accuracy.
Existing sperm tail detection methods therefore suffer from low detection precision.
Disclosure of Invention
In view of the above, the invention provides a sperm tail detection method based on a neural network and a corresponding system, and solves the problem of the low detection precision of existing sperm tail detection methods by improving the image detection method.
In order to solve the above problems, the technical solution of the invention is a sperm tail detection method based on a neural network, comprising the following steps: S1: collecting a sperm image; S2: segmenting the sperm image through a first neural network unit to generate a segmented image; S3: extracting sperm head coordinates and head direction data from the sperm image through a second neural network unit; S4: extracting a sperm tail coordinate set from the segmented image based on the sperm head coordinates and the head direction data; S5: generating a detection result containing sperm tail length information based on the sperm tail coordinate set and the segmented image.
Preferably, segmenting the sperm image through the first neural network unit to generate the segmented image comprises: acquiring a data set consisting of a plurality of sperm photographs, labeling the sperm tail of each sperm photograph, and generating a first training sample set and a first test set consisting of sperm photographs containing tail labels; training and validating a DeepLabv3+ network model under the TensorFlow framework based on the first training sample set and the first test set to generate a semantic segmentation model for sperm tail segmentation; and inputting the sperm image into the first neural network unit and acquiring the segmented image based on the semantic segmentation model.
Preferably, extracting the sperm head coordinates and head direction data of the sperm image through the second neural network unit comprises: acquiring a data set consisting of a plurality of sperm photographs, labeling the sperm head of each sperm photograph, and generating a second training sample set and a second test set consisting of sperm photographs containing head labels, wherein the labeled center point is the intersection point of the sperm head and the sperm tail; training and validating a Faster R-CNN network model under the TensorFlow framework based on the second training sample set and the second test set to generate a sperm head detection model for sperm head extraction; inputting the sperm image into the second neural network unit, acquiring a sperm head region extraction frame based on the sperm head detection model, and extracting a sperm head region image; and extracting the sperm head coordinates and the head direction data based on the sperm head region image.
Preferably, extracting the sperm head coordinates and the head direction data based on the sperm head region image comprises: generating a grayscale map based on the sperm head region image; calculating a pixel mean value m and a pixel value standard deviation sd from the grayscale map, and defining a threshold interval (m - sd, m + sd); traversing all pixels in the grayscale map, setting the value of each pixel whose original value falls inside the threshold interval to 0 and the value of each pixel whose original value falls outside the threshold interval to 255, thereby generating a binarized black-and-white image; and extracting all pixels with value 0 in the binarized black-and-white image and performing an ellipse fitting calculation, wherein the center point of the resulting ellipse region is the sperm head coordinate and the direction of the line joining the two foci of the ellipse region is the head direction data.
Preferably, extracting the sperm tail coordinate set of the segmented image based on the sperm head coordinates and head direction data comprises: computing connected domains in the segmented image and removing connected domains whose area is smaller than a first threshold; performing skeleton extraction on the remaining connected domains in the segmented image to generate a skeleton map consisting of a plurality of discontinuous skeleton points; extracting the sperm tail starting point coordinates contained in the skeleton map based on the sperm head coordinates and head direction data; and generating the sperm tail coordinate set based on the sperm tail starting point coordinates and the head direction data.
Preferably, extracting the sperm tail starting point coordinates contained in the skeleton map based on the sperm head coordinates and head direction data comprises: based on the coordinates of the sperm head region extraction frame, extracting at least one skeleton point in the corresponding region of the skeleton map that intersects the target frame, together with the intersection points of the skeleton points and the target frame; and, based on the head direction data, extracting from those intersection points the one with the smallest included angle with the head direction as the sperm tail starting point.
Preferably, generating the sperm tail coordinate set based on the sperm tail starting point coordinates and the head direction data comprises: extracting the skeleton points lying within a preset pixel region centered on the sperm tail starting point and generating a skeleton point set, wherein the skeleton point set comprises an external point set and an internal point set; for each skeleton point in the external point set, calculating the direction of the line joining it to the sperm tail starting point as the direction of that skeleton point, extracting the skeleton point in the external point set with the smallest included angle with the head direction as the first external tail point, then extracting the skeleton points lying within a preset pixel region centered on the first external tail point, calculating the included angles, and extracting the second external tail point, and so on until all external tail points are generated; repeating the above steps on the internal point set until all internal tail points are generated; and generating the sperm tail coordinate set from all external tail points and internal tail points.
Accordingly, the present invention provides a sperm tail detection system based on a neural network, comprising: an image acquisition unit for acquiring a sperm image; a first neural network unit for segmenting the sperm image to generate a segmented image; a second neural network unit for extracting the sperm head coordinates and head direction data of the sperm image; and a data processing unit that extracts a sperm tail coordinate set of the segmented image based on the sperm head coordinates and head direction data, and generates a detection result containing sperm tail length information based on the sperm tail coordinate set and the segmented image.
Preferably, the first neural network unit generates a first training sample set and a first test set consisting of sperm photographs containing tail labels by acquiring a data set consisting of a plurality of sperm photographs and labeling the sperm tail of each sperm photograph; trains and validates a DeepLabv3+ network model under the TensorFlow framework based on the first training sample set and the first test set to generate a semantic segmentation model for sperm tail segmentation; and, with the sperm image input into the first neural network unit, acquires the segmented image based on the semantic segmentation model.
Preferably, the second neural network unit generates a second training sample set and a second test set consisting of sperm photographs containing head labels by acquiring a data set consisting of a plurality of sperm photographs and labeling the sperm head of each sperm photograph, wherein the labeled center point is the intersection point of the sperm head and the sperm tail; trains and validates a Faster R-CNN network model under the TensorFlow framework based on the second training sample set and the second test set to generate a sperm head detection model for sperm head extraction; with the sperm image input into the second neural network unit, acquires a sperm head region extraction frame based on the sperm head detection model and extracts a sperm head region image; and extracts the sperm head coordinates and the head direction data based on the sperm head region image.
The primary improvement of the invention is that, after the segmentation precision is improved by accurately extracting the head region image and tail region image of the sperm image with neural network algorithms, the sperm tail coordinate set is generated by traversing the skeleton points of the skeleton map corresponding to the tail region image, so that accurate sperm tail length information is generated based on the sperm tail coordinate set and the segmented image. This greatly improves sperm tail segmentation precision and solves the problem of the low detection precision of traditional sperm tail detection methods.
Drawings
FIG. 1 is a simplified flow diagram of a neural network-based sperm tail detection method of the present invention;
FIG. 2 is a simplified block diagram of a neural network-based sperm tail detection system of the present invention;
FIG. 3 is an example grayscale map of the present invention;
FIG. 4 is an example elliptical region image of the present invention;
FIG. 5 is an example diagram of extracting external tail points according to the present invention;
FIG. 6 is an example diagram of extracting internal tail points according to the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood by those skilled in the art, the present invention will be further described in detail with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, the neural-network-based sperm tail detection method comprises the following steps: S1: collecting a sperm image; S2: segmenting the sperm image through a first neural network unit to generate a segmented image; S3: extracting sperm head coordinates and head direction data from the sperm image through a second neural network unit; S4: extracting a sperm tail coordinate set from the segmented image based on the sperm head coordinates and the head direction data; S5: generating a detection result containing sperm tail length information based on the sperm tail coordinate set and the segmented image.
According to the invention, after the head region image and tail region image of the sperm image are accurately extracted with neural network algorithms to improve segmentation precision, the sperm tail coordinate set is generated by traversing the skeleton points of the skeleton map corresponding to the tail region image, so that accurate sperm tail length information is generated based on the sperm tail coordinate set and the segmented image. This greatly improves sperm tail segmentation precision and solves the problem of low detection precision in traditional sperm tail detection methods.
To accurately extract a segmented image of the sperm tail, the invention segments the sperm image through the first neural network unit to generate the segmented image. Specifically: a data set consisting of a plurality of sperm photographs is acquired, the sperm tail of each sperm photograph is labeled, and a first training sample set and a first test set consisting of sperm photographs containing tail labels are generated; a DeepLabv3+ network model is trained and validated under the TensorFlow framework based on the first training sample set and the first test set, generating a semantic segmentation model for sperm tail segmentation; and the sperm image is input into the first neural network unit and the segmented image is acquired based on the semantic segmentation model. The data set may contain 1000 sperm photographs with 7000 sperm tail labels.
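As an illustration of this training step, the following sketch prepares a binary tail-mask dataset and runs a tf.keras training loop. The patent only specifies a DeepLabv3+ model trained under TensorFlow; the tiny stand-in model, directory layout, image size, and hyperparameters below are assumptions introduced for illustration and would be replaced by a full DeepLabv3+ implementation in practice.

```python
# Minimal tf.keras sketch of the tail-segmentation training step. The patent only
# states that DeepLabv3+ is trained under TensorFlow on tail-labelled photos; the
# stand-in model, directory layout, image size and hyperparameters below are
# assumptions for illustration, not values disclosed in the patent.
import glob
import tensorflow as tf

IMG_SIZE = (512, 512)  # assumed input resolution

def load_pair(img_path, mask_path):
    img = tf.image.resize(
        tf.io.decode_png(tf.io.read_file(img_path), channels=3), IMG_SIZE) / 255.0
    mask = tf.image.resize(
        tf.io.decode_png(tf.io.read_file(mask_path), channels=1), IMG_SIZE,
        method="nearest")
    return img, tf.cast(mask > 0, tf.float32)  # binary tail mask

def make_dataset(img_dir, mask_dir, batch=4):
    imgs = sorted(glob.glob(img_dir + "/*.png"))    # assumed file layout
    masks = sorted(glob.glob(mask_dir + "/*.png"))
    ds = tf.data.Dataset.from_tensor_slices((imgs, masks))
    return ds.map(load_pair).batch(batch).prefetch(tf.data.AUTOTUNE)

# Tiny stand-in encoder; a real run would substitute a DeepLabv3+ model here.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(1, 1, activation="sigmoid"),  # per-pixel tail probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(make_dataset("train/images", "train/masks"),
          validation_data=make_dataset("test/images", "test/masks"), epochs=50)
```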
Specifically, the semantic segmentation model under the DeepLabv3+ network works as follows: the picture is input into an improved deep convolutional network for feature extraction, yielding semantic features C and semantic features G at different scales. The semantic features C are passed into the atrous spatial pyramid pooling (ASPP) module and convolved with four atrous (dilated) convolution layers and pooled with one pooling layer, yielding five feature maps that are merged into a five-branch structure D. D is convolved with a 1 x 1 convolution layer to obtain structure E; E is upsampled to obtain structure F. A semantic feature map G with the same resolution as structure F is taken from the deep convolutional network, its channel count is reduced by a 1 x 1 convolution to match that of structure F, and it is then merged with structure F. The merged semantic feature map H is refined by a 3 x 3 convolution, then upsampled by a factor of 4 via bilinear interpolation, finally yielding the semantic segmentation result.
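The ASPP stage described above can be sketched in tf.keras as follows. The dilation rates and filter count are assumed values (common DeepLabv3+ defaults) since the patent does not state them, and the sketch covers only the branch-and-merge step (structures D and E), not the full encoder-decoder.

```python
# Sketch of the atrous spatial pyramid pooling (ASPP) stage described above:
# four dilated-convolution branches plus one image-level pooling branch are
# concatenated (the five-branch structure D) and fused by a 1x1 convolution
# (structure E). Dilation rates and filter count are assumed defaults.
import tensorflow as tf
from tensorflow.keras import layers

def aspp(x, filters=256, rates=(1, 6, 12, 18)):
    branches = []
    for r in rates:  # four atrous (dilated) convolution branches
        branches.append(layers.Conv2D(filters, 3, padding="same",
                                      dilation_rate=r, activation="relu")(x))
    # image-level pooling branch, upsampled back to the feature-map resolution
    pool = layers.GlobalAveragePooling2D(keepdims=True)(x)
    pool = layers.Conv2D(filters, 1, activation="relu")(pool)
    pool = layers.UpSampling2D(size=(x.shape[1], x.shape[2]),
                               interpolation="bilinear")(pool)
    branches.append(pool)
    d = layers.Concatenate()(branches)                       # structure D
    return layers.Conv2D(filters, 1, activation="relu")(d)   # structure E

# Example wiring with a fixed-size feature map standing in for semantic features C.
c = tf.keras.Input(shape=(64, 64, 512))
e = aspp(c)
f = layers.UpSampling2D(size=4, interpolation="bilinear")(e)  # structure F
```

The decoder would continue in the same style: reduce the channels of the low-level feature map G with a 1 x 1 convolution, concatenate it with F, refine with a 3 x 3 convolution, and upsample by a factor of 4 with bilinear interpolation.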
In order to accurately acquire a sperm head region image containing the intersection point of the sperm head and tail, the sperm head coordinates and head direction data of the sperm image are extracted through the second neural network unit. Specifically: a data set consisting of a plurality of sperm photographs is acquired, the sperm head of each sperm photograph is labeled, and a second training sample set and a second test set consisting of sperm photographs containing head labels are generated, wherein the labeled center point is the intersection point of the sperm head and tail; a Faster R-CNN network model is trained and validated under the TensorFlow framework based on the second training sample set and the second test set, generating a sperm head detection model for sperm head extraction; the sperm image is input into the second neural network unit, a sperm head region extraction frame is acquired based on the sperm head detection model, and a sperm head region image is extracted; and the sperm head coordinates and head direction data are extracted based on the sperm head region image. The data set may contain 1000 or more sperm photographs with 8000 or more sperm head labels.
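For illustration, the sketch below configures a one-class head detector and reads out the highest-scoring head box. The patent trains Faster R-CNN under TensorFlow; as a stand-in, this sketch uses torchvision's Faster R-CNN implementation with its box predictor replaced by a two-class head, and the score threshold is an assumed value.

```python
# Illustrative head-detection sketch. The patent trains a Faster R-CNN model
# under TensorFlow; as a stand-in this uses torchvision's Faster R-CNN
# (torchvision >= 0.13), swapping in a two-class box predictor
# (background + sperm head). The score threshold is an assumed value.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_head_detector():
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)
    return model  # fine-tune on the head-labelled photographs before use

def head_extraction_box(model, image_chw, score_thresh=0.5):
    """Return the highest-scoring head box (x0, y0, x1, y1), or None."""
    model.eval()
    with torch.no_grad():
        pred = model([image_chw])[0]  # image_chw: float tensor in [0, 1], CxHxW
    keep = pred["scores"] >= score_thresh
    if not bool(keep.any()):
        return None
    boxes, scores = pred["boxes"][keep], pred["scores"][keep]
    return boxes[scores.argmax()].tolist()
```

The returned box corresponds to the sperm head region extraction frame; cropping the sperm image to it yields the sperm head region image used in the next step.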
Further, extracting the sperm head coordinates and the head direction data based on the sperm head region image comprises: generating a grayscale map based on the sperm head region image, as shown in fig. 3; calculating a pixel mean value m and a pixel value standard deviation sd from the grayscale map, and defining a threshold interval (m - sd, m + sd); traversing all pixels in the grayscale map, setting the value of each pixel whose original value falls inside the threshold interval to 0 and the value of each pixel whose original value falls outside the threshold interval to 255, thereby generating a binarized black-and-white image; and, as shown in fig. 4, extracting all pixels with value 0 in the binarized black-and-white image and performing an ellipse fitting calculation, wherein the center point of the resulting ellipse region is the sperm head coordinate and the direction of the line joining the two foci of the ellipse region is the head direction data.
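A minimal OpenCV/NumPy sketch of this step follows. The head coordinate is taken from cv2.fitEllipse as described above; because OpenCV's ellipse angle convention is easy to misread, the major-axis (foci-line) direction is computed here from a PCA of the foreground pixels, which coincides with the fitted ellipse's major axis for an elliptical blob. That substitution, and the BGR input assumption, are mine rather than the patent's.

```python
# Sketch of head coordinate and head direction extraction: binarise the grayscale
# head-region image with the (m - sd, m + sd) interval, then fit an ellipse to the
# foreground pixels. The ellipse centre is the head coordinate; the head direction
# is the major-axis (foci-line) direction, obtained here via a PCA of the
# foreground pixels (equivalent for an elliptical blob). BGR input is assumed.
import cv2
import numpy as np

def head_coords_and_direction(head_region_bgr: np.ndarray):
    gray = cv2.cvtColor(head_region_bgr, cv2.COLOR_BGR2GRAY)
    m, sd = float(gray.mean()), float(gray.std())
    inside = (gray > m - sd) & (gray < m + sd)           # threshold interval (m-sd, m+sd)
    binary = np.where(inside, 0, 255).astype(np.uint8)   # head pixels -> 0, rest -> 255
    ys, xs = np.nonzero(binary == 0)
    pts = np.column_stack([xs, ys]).astype(np.float32)   # fitEllipse needs >= 5 points
    (cx, cy), _axes, _angle = cv2.fitEllipse(pts.reshape(-1, 1, 2))
    # major-axis direction via PCA of the foreground pixels (sign is arbitrary)
    centred = pts - pts.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centred, rowvar=False))
    direction = eigvecs[:, np.argmax(eigvals)]
    return np.array([cx, cy]), direction / np.linalg.norm(direction)
```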
Furthermore, because the tail line produced by semantic segmentation is relatively thick, while a thin curve is needed to delineate the tail, the invention removes impurity regions and thins the lines in the segmented image through skeleton extraction to obtain an accurate, thinner line, further improving the precision of sperm tail detection. Specifically, extracting the sperm tail coordinate set of the segmented image based on the sperm head coordinates and head direction data comprises: computing connected domains in the segmented image and removing connected domains whose area is smaller than a first threshold; performing skeleton extraction on the remaining connected domains in the segmented image to generate a skeleton map consisting of a plurality of discontinuous skeleton points; extracting the sperm tail starting point coordinates contained in the skeleton map based on the sperm head coordinates and head direction data; and generating the sperm tail coordinate set based on the sperm tail starting point coordinates and the head direction data. The first threshold may be 100 pixels.
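A sketch of this clean-up and skeleton-extraction step is given below, assuming a binary tail mask output by the segmentation model; 8-connectivity and the scikit-image skeletonize routine are my choices, since the patent does not name a particular thinning algorithm.

```python
# Remove small connected domains and skeletonise the remaining tail regions.
import cv2
import numpy as np
from skimage.morphology import skeletonize

def skeleton_points(tail_mask: np.ndarray, min_area: int = 100):
    """Drop connected domains smaller than min_area (the patent's first threshold,
    e.g. 100 pixels), thin the rest to a skeleton, and return the skeleton points
    as an (N, 2) array of (x, y) coordinates."""
    mask = (tail_mask > 0).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    cleaned = np.zeros_like(mask)
    for i in range(1, n):                       # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            cleaned[labels == i] = 1
    skeleton = skeletonize(cleaned.astype(bool))
    ys, xs = np.nonzero(skeleton)
    return np.column_stack([xs, ys])
```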
Further, extracting the sperm tail starting point coordinates contained in the skeleton map based on the sperm head coordinates and head direction data comprises: based on the coordinates of the sperm head region extraction frame, extracting at least one skeleton point in the corresponding region of the skeleton map that intersects the target frame, together with the intersection points of the skeleton points and the target frame; and, based on the head direction data, extracting from those intersection points the one with the smallest included angle with the head direction as the sperm tail starting point. Because the border of the sperm head region extraction frame generated by the Faster R-CNN network model is 3 pixels wide and the skeleton points are spaced 1 to 2 pixels apart, it is guaranteed that at least one skeleton point intersecting the target frame is contained in the corresponding region of the skeleton map.
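This selection step can be sketched as follows. The 3-pixel border band mirrors the frame width mentioned above, and comparing absolute cosines is an assumption to cope with the sign ambiguity of the fitted head direction.

```python
# Pick the tail starting point: among skeleton points lying on the head-box
# border (within a 3-pixel band), choose the one whose direction from the head
# centre makes the smallest included angle with the head direction.
import numpy as np

def tail_start_point(skeleton_pts, head_box, head_xy, head_dir, band=3):
    x0, y0, x1, y1 = head_box
    px, py = skeleton_pts[:, 0], skeleton_pts[:, 1]
    inside = (px >= x0 - band) & (px <= x1 + band) & (py >= y0 - band) & (py <= y1 + band)
    near_edge = ((np.abs(px - x0) <= band) | (np.abs(px - x1) <= band) |
                 (np.abs(py - y0) <= band) | (np.abs(py - y1) <= band))
    candidates = skeleton_pts[inside & near_edge]
    if len(candidates) == 0:
        return None
    vecs = candidates - np.asarray(head_xy, dtype=float)
    vecs = vecs / (np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-9)
    # |cos| copes with the sign ambiguity of the fitted major-axis direction
    cosines = np.abs(vecs @ np.asarray(head_dir, dtype=float))
    return candidates[np.argmax(cosines)]       # smallest included angle
```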
Further, generating the sperm tail coordinate set based on the sperm tail starting point coordinates and the head direction data comprises: extracting the skeleton points lying within a preset pixel region centered on the sperm tail starting point and generating a skeleton point set, wherein the skeleton point set comprises an external point set and an internal point set; as shown in fig. 5, for each skeleton point in the external point set, calculating the direction of the line joining it to the sperm tail starting point as the direction of that skeleton point, extracting the skeleton point in the external point set with the smallest included angle with the head direction as the first external tail point, then extracting the skeleton points lying within a preset pixel region centered on the first external tail point, calculating the included angles, and extracting the second external tail point, and so on until all external tail points are generated; as shown in fig. 6, repeating the above steps on the internal point set until all internal tail points are generated; and generating the sperm tail coordinate set from all external tail points and internal tail points. The preset pixel region may be a 7 x 7 pixel range; the external point set is defined as all skeleton points lying outside the region of the skeleton map corresponding to the coordinates of the sperm head region extraction frame, and the internal point set as all skeleton points lying inside that region.
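Finally, a sketch of the skeleton walk and the resulting tail length measure. The patent traces the external and internal point sets separately and merges the results; the sketch shows a single greedy walk, which would be run once per point set, and after the first step it compares against the previous step's direction rather than the head direction, an assumption made here because the patent only says an included angle is minimised at each step.

```python
# Greedy skeleton walk from the tail starting point within a 7x7 neighbourhood,
# plus the tail-length measure over the resulting coordinate set.
import numpy as np

def trace_tail(skeleton_pts, start_pt, head_dir, window=7):
    """At each step, skeleton points inside a window x window neighbourhood of the
    current point are candidates; the one whose connecting direction makes the
    smallest angle with the previous direction is appended (assumption: the patent
    only states that an included angle is minimised)."""
    pts = {tuple(p) for p in skeleton_pts}      # fast membership lookup
    half = window // 2
    tail = [tuple(start_pt)]
    visited = {tuple(start_pt)}
    direction = np.asarray(head_dir, dtype=float)
    current = np.asarray(start_pt, dtype=float)
    while True:
        cx, cy = current
        cands = [p for p in pts
                 if p not in visited
                 and abs(p[0] - cx) <= half and abs(p[1] - cy) <= half]
        if not cands:
            break
        vecs = np.array(cands, dtype=float) - current
        vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
        best = int(np.argmax(vecs @ direction))  # smallest included angle
        direction = vecs[best]
        current = np.array(cands[best], dtype=float)
        visited.add(cands[best])
        tail.append(cands[best])
    return tail

def tail_length(tail_points):
    """Tail length as the summed Euclidean distance along consecutive tail points."""
    pts = np.asarray(tail_points, dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))
```

Summing the Euclidean distances between consecutive points of the merged coordinate set gives the sperm tail length contained in the detection result.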
Accordingly, as shown in fig. 2, the invention provides a sperm tail detection system based on a neural network, comprising: an image acquisition unit for acquiring a sperm image; a first neural network unit for segmenting the sperm image to generate a segmented image; a second neural network unit for extracting the sperm head coordinates and head direction data of the sperm image; and a data processing unit that extracts a sperm tail coordinate set of the segmented image based on the sperm head coordinates and head direction data, and generates a detection result containing sperm tail length information based on the sperm tail coordinate set and the segmented image.
Further, the first neural network unit generates a first training sample set and a first test set consisting of sperm photographs containing tail labels by acquiring a data set consisting of a plurality of sperm photographs and labeling the sperm tail of each sperm photograph; trains and validates a DeepLabv3+ network model under the TensorFlow framework based on the first training sample set and the first test set to generate a semantic segmentation model for sperm tail segmentation; and, with the sperm image input into the first neural network unit, acquires the segmented image based on the semantic segmentation model.
Further, the second neural network unit generates a second training sample set and a second test set consisting of sperm photographs containing head labels by acquiring a data set consisting of a plurality of sperm photographs and labeling the sperm head of each sperm photograph, wherein the labeled center point is the intersection point of the sperm head and the sperm tail; trains and validates a Faster R-CNN network model under the TensorFlow framework based on the second training sample set and the second test set to generate a sperm head detection model for sperm head extraction; with the sperm image input into the second neural network unit, acquires a sperm head region extraction frame based on the sperm head detection model and extracts a sperm head region image; and extracts the sperm head coordinates and the head direction data based on the sperm head region image.
The above is only a preferred embodiment of the present invention. It should be noted that the preferred embodiment should not be considered as limiting the invention, whose protection scope is defined by the claims. It will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the spirit and scope of the invention, and such modifications and adaptations should also be considered within the scope of the invention.

Claims (10)

1. A sperm tail detection method based on a neural network, characterized by comprising the following steps:
S1: collecting a sperm image;
S2: segmenting the sperm image through a first neural network unit to generate a segmented image;
S3: extracting sperm head coordinates and head direction data from the sperm image through a second neural network unit;
S4: extracting a sperm tail coordinate set from the segmented image based on the sperm head coordinates and the head direction data;
S5: generating a detection result containing sperm tail length information based on the sperm tail coordinate set and the segmented image.
2. The sperm tail detection method of claim 1, wherein segmenting the sperm image through the first neural network unit to generate the segmented image comprises:
acquiring a data set consisting of a plurality of sperm photographs, labeling the sperm tail of each sperm photograph, and generating a first training sample set and a first test set consisting of sperm photographs containing tail labels;
training and validating a DeepLabv3+ network model under the TensorFlow framework based on the first training sample set and the first test set to generate a semantic segmentation model for sperm tail segmentation; and
inputting the sperm image into the first neural network unit and acquiring the segmented image based on the semantic segmentation model.
3. The sperm tail detection method of claim 2, wherein extracting the sperm head coordinates and head direction data of the sperm image through the second neural network unit comprises:
acquiring a data set consisting of a plurality of sperm photographs, labeling the sperm head of each sperm photograph, and generating a second training sample set and a second test set consisting of sperm photographs containing head labels, wherein the labeled center point is the intersection point of the sperm head and the sperm tail;
training and validating a Faster R-CNN network model under the TensorFlow framework based on the second training sample set and the second test set to generate a sperm head detection model for sperm head extraction;
inputting the sperm image into the second neural network unit, acquiring a sperm head region extraction frame based on the sperm head detection model, and extracting a sperm head region image; and
extracting the sperm head coordinates and the head direction data based on the sperm head region image.
4. The sperm tail detection method of claim 3, wherein extracting the sperm head coordinates and the head direction data based on the sperm head region image comprises:
generating a grayscale map based on the sperm head region image;
calculating a pixel mean value m and a pixel value standard deviation sd from the grayscale map, and defining a threshold interval (m - sd, m + sd);
traversing all pixels in the grayscale map, setting the value of each pixel whose original value falls inside the threshold interval to 0 and the value of each pixel whose original value falls outside the threshold interval to 255, thereby generating a binarized black-and-white image; and
extracting all pixels with value 0 in the binarized black-and-white image and performing an ellipse fitting calculation, wherein the center point of the resulting ellipse region is the sperm head coordinate and the direction of the line joining the two foci of the ellipse region is the head direction data.
5. The sperm tail detection method of claim 4, wherein extracting the sperm tail coordinate set of the segmented image based on the sperm head coordinates and head direction data comprises:
computing connected domains in the segmented image and removing connected domains whose area is smaller than a first threshold;
performing skeleton extraction on the remaining connected domains in the segmented image to generate a skeleton map consisting of a plurality of discontinuous skeleton points;
extracting the sperm tail starting point coordinates contained in the skeleton map based on the sperm head coordinates and head direction data; and
generating the sperm tail coordinate set based on the sperm tail starting point coordinates and the head direction data.
6. The sperm tail detection method of claim 5, wherein extracting the sperm tail starting point coordinates contained in the skeleton map based on the sperm head coordinates and head direction data comprises:
based on the coordinates of the sperm head region extraction frame, extracting at least one skeleton point in the corresponding region of the skeleton map that intersects the target frame, together with the intersection points of the skeleton points and the target frame; and
based on the head direction data, extracting from the intersection points of the skeleton points and the target frame the one with the smallest included angle with the head direction as the sperm tail starting point.
7. The sperm tail detection method of claim 6, wherein generating the sperm tail coordinate set based on the sperm tail starting point coordinates and the head direction data comprises:
extracting the skeleton points lying within a preset pixel region centered on the sperm tail starting point and generating a skeleton point set, wherein the skeleton point set comprises an external point set and an internal point set;
for each skeleton point in the external point set, calculating the direction of the line joining it to the sperm tail starting point as the direction of that skeleton point, extracting the skeleton point in the external point set with the smallest included angle with the head direction as the first external tail point, then extracting the skeleton points lying within a preset pixel region centered on the first external tail point, calculating the included angles, and extracting the second external tail point, and so on until all external tail points are generated;
repeating the above steps on the internal point set until all internal tail points are generated; and
generating the sperm tail coordinate set from all external tail points and internal tail points.
8. A sperm tail detection system based on a neural network, comprising:
an image acquisition unit for acquiring a sperm image;
a first neural network unit for segmenting the sperm image to generate a segmented image;
a second neural network unit for extracting the sperm head coordinates and head direction data of the sperm image; and
a data processing unit that extracts a sperm tail coordinate set of the segmented image based on the sperm head coordinates and head direction data, and generates a detection result containing sperm tail length information based on the sperm tail coordinate set and the segmented image.
9. The sperm tail detection system of claim 8, wherein the first neural network unit generates a first training sample set and a first test set consisting of sperm photographs containing tail labels by acquiring a data set consisting of a plurality of sperm photographs and labeling the sperm tail of each sperm photograph; trains and validates a DeepLabv3+ network model under the TensorFlow framework based on the first training sample set and the first test set to generate a semantic segmentation model for sperm tail segmentation; and, with the sperm image input into the first neural network unit, acquires the segmented image based on the semantic segmentation model.
10. The sperm tail detection system of claim 9, wherein the second neural network unit generates a second training sample set and a second test set consisting of sperm photographs containing head labels by acquiring a data set consisting of a plurality of sperm photographs and labeling the sperm head of each sperm photograph, wherein the labeled center point is the intersection point of the sperm head and the sperm tail; trains and validates a Faster R-CNN network model under the TensorFlow framework based on the second training sample set and the second test set to generate a sperm head detection model for sperm head extraction; with the sperm image input into the second neural network unit, acquires a sperm head region extraction frame based on the sperm head detection model and extracts a sperm head region image; and extracts the sperm head coordinates and the head direction data based on the sperm head region image.
CN202011329591.9A, filed 2020-11-24 (priority 2020-11-24): Sperm tail detection method and system based on neural network; Active, granted as CN112330660B (en)

Priority Applications (1)

CN202011329591.9A (priority date 2020-11-24, filing date 2020-11-24): Sperm tail detection method and system based on neural network

Applications Claiming Priority (1)

CN202011329591.9A (priority date 2020-11-24, filing date 2020-11-24): Sperm tail detection method and system based on neural network

Publications (2)

CN112330660A, published 2021-02-05
CN112330660B, published 2024-02-02

Family

ID=74322320

Family Applications (1)

CN202011329591.9A: Sperm tail detection method and system based on neural network (Active; granted as CN112330660B)

Country Status (1)

Country Link
CN (1) CN112330660B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005080944A1 (en) * 2004-02-18 2005-09-01 The University Court Of The University Of Glasgow Analysis of cell morphology and motility
KR20110049606A (en) * 2009-11-05 2011-05-12 주식회사 메디칼써프라이 A method for analyzing the morphology and motility of sperm using histogram
US20120148141A1 (en) * 2010-12-14 2012-06-14 Aydogan Ozcan Compact automated semen analysis platform using lens-free on-chip microscopy
CN107615066A (en) * 2015-05-07 2018-01-19 技术创新动力基金(以色列)有限合伙公司 For biological cell and the interference system of biologic artifact and method including sperm
GB201711561D0 (en) * 2017-07-18 2017-08-30 Spermcomet Ltd Method and apparatus
CN110930345A (en) * 2018-08-31 2020-03-27 赛司医疗科技(北京)有限公司 Sperm tail recognition method
WO2020068380A1 (en) * 2018-09-28 2020-04-02 The Brigham And Women's Hospital, Inc. Automated evaluation of sperm morphology
CN110363056A (en) * 2018-12-29 2019-10-22 上海北昂医药科技股份有限公司 Sperm recognition methods in dynamics video image

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022221911A1 (en) * 2021-04-19 2022-10-27 Newsouth Innovations Pty Limited "quality assessment of reproductive material"

Also Published As

Publication number Publication date
CN112330660B (en) 2024-02-02

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant