CN114041166A - Method and system for tracking oocytes - Google Patents

Method and system for tracking oocytes

Info

Publication number
CN114041166A
CN114041166A (application number CN202080047331.4A)
Authority
CN
China
Prior art keywords
follicle
region
ovarian
target
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080047331.4A
Other languages
Chinese (zh)
Inventor
刘学东
刘超越
邹耀贤
林穆清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Mindray Bio Medical Electronics Co Ltd
Original Assignee
Shenzhen Mindray Bio Medical Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Mindray Bio Medical Electronics Co Ltd filed Critical Shenzhen Mindray Bio Medical Electronics Co Ltd
Publication of CN114041166A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10132 Ultrasound image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing

Abstract

A method and system for oocyte tracking. The method comprises: acquiring ultrasound images of ovarian tissue of a subject at at least three different examination times, wherein the ovarian tissue comprises a target follicle (201); determining the follicle region corresponding to the target follicle on each of the at least three ultrasound images to obtain at least three follicle regions, all corresponding to the same target follicle (202); determining a growth parameter of the target follicle from each of the at least three follicle regions to obtain at least three growth parameters of the target follicle (203); obtaining a growth trend map of the target follicle from the at least three growth parameters (204); and displaying the growth trend map (205). By tracking the same target follicle across the ultrasound images acquired at different examination times and deriving its growth trend, the method allows an operator to accurately assess the optimal oocyte retrieval time from the growth trend map of the target follicle, effectively improving both the accuracy and the efficiency of that assessment.

Description

Method and system for tracking oocytes
Technical Field
The present application relates to the field of follicle tracking technologies, and more particularly, to a follicle tracking method and system.
Background
At present, many families face infertility, and in-vitro fertilization (IVF) is one of the main ways of addressing it. A key step in IVF is oocyte retrieval. In a patient's natural cycle there is usually only one dominant follicle, yielding at most one embryo; to improve the success rate of implantation, controlled ovarian hyperstimulation is therefore used to enhance ovarian function so that multiple healthy oocytes can be obtained without being limited by the natural cycle. To obtain healthy oocytes, the timing of retrieval is of the utmost importance.
Generally, after ovarian stimulation with ovulation-inducing drugs, 4-6 follow-up examinations are needed within an ovulation cycle to monitor follicle development. Because the growth of an individual follicle cannot currently be tracked across examinations, monitoring focuses on judging the overall development of all follicles; the resulting information is imprecise, and an accurate retrieval time cannot be determined for any single follicle.
Disclosure of Invention
In this summary, concepts in a simplified form are introduced that are further described in the detailed description. This summary of the invention is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
In one embodiment, there is provided an oocyte tracking method, comprising:
acquiring ultrasound images of ovarian tissue of a subject at least three different examination times, wherein the ovarian tissue comprises a target follicle;
determining a follicle region corresponding to the target follicle on the at least three ultrasound images at different examination times respectively to obtain at least three follicle regions, wherein the at least three follicle regions are follicle regions corresponding to the same target follicle;
determining growth parameters of the target follicle according to the at least three follicle areas respectively, and obtaining at least three growth parameters of the target follicle;
obtaining a growth trend map of the target follicle according to at least three growth parameters of the target follicle;
and displaying the growth trend graph.
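The five steps above (acquire, locate, measure, build trend, display) can be sketched as a minimal data flow. All names (`Exam`, `growth_trend`, ...) and the voxel-count volume proxy are illustrative assumptions, not the patented implementation:

```python
from dataclasses import dataclass

@dataclass
class Exam:
    time_h: float          # examination time, hours since the first exam
    follicle_region: list  # stand-in for the segmented follicle region (voxels)

def volume_of(region):
    """Growth parameter used here: voxel count of the segmented region."""
    return len(region)

def growth_trend(exams):
    """Return (examination time, growth parameter) pairs for one target follicle."""
    if len(exams) < 3:
        raise ValueError("this embodiment requires at least three examination times")
    return [(e.time_h, volume_of(e.follicle_region)) for e in exams]

exams = [
    Exam(0.0,  [0] * 120),   # exam 1
    Exam(48.0, [0] * 180),   # exam 2
    Exam(96.0, [0] * 260),   # exam 3
]
trend = growth_trend(exams)
print(trend)  # [(0.0, 120), (48.0, 180), (96.0, 260)]
```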
In one embodiment, the growth trend graph is a growth parameter graph with the examination time as a first coordinate and the growth parameter as a second coordinate; or,
the growth trend graph is a list of the growth parameters corresponding to the different examination times.
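The "list" form of the growth trend graph might, purely as a sketch, be rendered as a text table; the column headings and the unit are assumptions:

```python
def format_trend_list(times_h, params, unit="mm^3"):
    """Render the list form of the growth trend: one row per examination."""
    rows = ["exam time (h) | growth parameter (%s)" % unit]
    for t, p in zip(times_h, params):
        rows.append("%13.1f | %g" % (t, p))
    return "\n".join(rows)

print(format_trend_list([0, 48, 96], [120, 180, 260]))
```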
In one embodiment, determining the follicular region corresponding to the target follicle on the ultrasound images at the at least three different examination times respectively comprises:
determining a follicle region corresponding to the target follicle in a first ultrasonic image of the ultrasonic images at the at least three different examination times to obtain a first follicle region;
according to the first follicular area, determining a follicular area corresponding to the target follicle in a second ultrasound image of the ultrasound images at the at least three different examination times, and obtaining a second follicular area;
and determining a follicle region corresponding to the target follicle in a third ultrasound image of the at least three ultrasound images at different examination times according to the first follicle region or the second follicle region, and obtaining a third follicle region.
In one embodiment, determining a follicular region corresponding to the target follicle in a first ultrasound image of the ultrasound images of the at least three different examination times, and obtaining a first follicular region includes:
segmenting the follicle region corresponding to the target follicle in a first ultrasound image of the ultrasound images at the at least three different examination times, based on image features of follicles, to obtain the first follicle region; or,
detecting an operator's tracing of a corresponding region of a target follicle in a first ultrasound image of the ultrasound images of the at least three different examination times to obtain a first follicle region.
In one embodiment, the ultrasound image is a three-dimensional ultrasound image, and the segmenting a follicle region corresponding to the target follicle in a first ultrasound image of the ultrasound images at the at least three different examination times based on the image characteristics of the follicle to obtain a first follicle region includes:
segmenting a corresponding region of the target follicle in a plurality of two-dimensional slice images of a first ultrasound image of the three-dimensional ultrasound images of the at least three different examination times based on image features of the follicle;
and integrating corresponding areas of the target follicle on the plurality of two-dimensional sectional images to obtain the first follicle area.
In one embodiment, the plurality of two-dimensional section images of the first ultrasound image are all of the two-dimensional section images in the first ultrasound image; or,
the plurality of two-dimensional section images of the first ultrasound image are sample images obtained by sampling the first ultrasound image according to a first preset rule, and integrating the corresponding regions of the target follicle on the plurality of two-dimensional section images comprises: performing three-dimensional interpolation on the segmentation results of the sample images to obtain the first follicle region.
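The sampling-plus-interpolation variant above can be sketched in one dimension: follicle cross-section areas are segmented only on every `step`-th slice, and the skipped slices are filled in by linear interpolation. This is a stand-in for the claimed three-dimensional interpolation; all names are hypothetical:

```python
def interpolate_slice_areas(sampled, step):
    """Given follicle cross-section areas measured on every `step`-th slice,
    linearly interpolate the areas of the skipped slices in between."""
    areas = []
    for i in range(len(sampled) - 1):
        a, b = sampled[i], sampled[i + 1]
        for k in range(step):
            areas.append(a + (b - a) * k / step)
    areas.append(sampled[-1])
    return areas

# Areas segmented on slices 0, 2 and 4 only (preset rule: sample every 2nd slice)
full = interpolate_slice_areas([10.0, 20.0, 10.0], step=2)
print(full)  # [10.0, 15.0, 20.0, 15.0, 10.0]
```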
In one embodiment, the ultrasound image is a three-dimensional ultrasound image, and the segmenting a follicle region corresponding to the target follicle in a first ultrasound image of the ultrasound images at the at least three different examination times based on the image characteristics of the follicle to obtain a first follicle region includes:
based on the image characteristics of the follicles, performing three-dimensional segmentation on a follicle region corresponding to the target follicle in a first ultrasound image of the three-dimensional ultrasound images of the at least three different examination times to obtain the first follicle region.
In one embodiment, determining a follicular region corresponding to the target follicle in a first ultrasound image of the ultrasound images of the at least three different examination times, and obtaining a first follicular region includes:
determining a region of the ovarian tissue in a first ultrasound image of the ultrasound images of the at least three different examination times, obtaining a first ovarian region;
and determining a follicle region corresponding to the target follicle based on the first ovarian region, and obtaining a first follicle region.
In one embodiment, determining a follicular region corresponding to the target follicle in a first ultrasound image of the ultrasound images of the at least three different examination times, and obtaining a first follicular region includes:
determining a plurality of first candidate follicular regions in a first ultrasound image of the ultrasound images of the at least three different examination times;
acquiring a growth parameter of each first candidate follicle region, and determining the first candidate follicle region whose growth parameter meets a first preset condition as the first follicle region.
In one embodiment, determining a follicular region corresponding to the target follicle in a first ultrasound image of the ultrasound images of the at least three different examination times, and obtaining a first follicular region includes:
determining a plurality of first candidate follicular regions in a first ultrasound image of the ultrasound images of the at least three different examination times;
acquiring a growth parameter of each first candidate follicle region, and determining the first candidate follicle region with the largest growth parameter as the first follicle region.
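Selecting the target follicle among candidates, either by a preset condition or by the largest growth parameter as in the two embodiments above, could be sketched as follows; voxel count again stands in for volume, and all names are assumptions:

```python
def pick_target_follicle(candidate_regions, min_volume=None):
    """Return the index of the selected candidate: the largest region whose
    growth parameter meets `min_volume` if given, else simply the largest."""
    volumes = [len(r) for r in candidate_regions]  # voxel count as volume proxy
    if min_volume is not None:
        eligible = [i for i, v in enumerate(volumes) if v >= min_volume]
        if not eligible:
            return None
        return max(eligible, key=lambda i: volumes[i])
    return max(range(len(volumes)), key=lambda i: volumes[i])

candidates = [[0] * 50, [0] * 200, [0] * 120]
print(pick_target_follicle(candidates))                  # 1
print(pick_target_follicle(candidates, min_volume=300))  # None
```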
In one embodiment, the determining, according to the first follicular area, a follicular area corresponding to the target follicle in a second ultrasound image of the ultrasound images at the at least three different examination times to obtain a second follicular area includes:
acquiring a follicle region, in a second ultrasound image of the ultrasound images at the at least three different examination times, whose similarity to the feature information of the first follicle region meets a first threshold, and determining the acquired follicle region as the second follicle region; or,
acquiring a second candidate follicle region corresponding to a target follicle in the second ultrasound image, determining a correspondence between the first follicle region and the second candidate follicle region by using a first learning model, adjusting the second candidate follicle region according to the correspondence, and determining the adjusted second candidate follicle region as the second follicle region.
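Matching the first follicle region against follicles found in a later image by feature similarity (the first alternative above) might look like the following sketch; the three-component feature vector and the threshold value are assumptions, not the disclosed features:

```python
import math

def feature_vector(region):
    """Toy feature for a 2-D follicle region: (size, centroid x, centroid y)."""
    n = len(region)
    return (n,
            sum(p[0] for p in region) / n,
            sum(p[1] for p in region) / n)

def match_follicle(first_region, later_regions, threshold=5.0):
    """Return the later-exam region whose feature distance to the first
    follicle region is smallest, provided it meets the threshold; else None."""
    f = feature_vector(first_region)
    best, best_d = None, threshold
    for r in later_regions:
        d = math.dist(f, feature_vector(r))
        if d < best_d:
            best, best_d = r, d
    return best

first = [(0, 0), (1, 0), (0, 1), (1, 1)]
later = [
    [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0)],  # slightly grown follicle
    [(10, 10), (11, 10), (10, 11), (11, 11)],  # a different follicle
]
print(match_follicle(first, later) is later[0])  # True
```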
In one embodiment, the determining, according to the first follicular area, a follicular area corresponding to the target follicle in a second ultrasound image of the ultrasound images at the at least three different examination times to obtain a second follicular area includes:
detecting an operation of identifying a follicle region corresponding to the first follicle region in a second ultrasound image of the ultrasound images of the at least three different examination times by the operator, and determining the identified follicle region as a second follicle region.
In one embodiment, on the basis of the above embodiment, the method further includes:
determining a first ovarian region of ovarian tissue of the subject in the first ultrasound image;
determining a second ovarian region of ovarian tissue of the subject in the second ultrasound image;
registering the first ovarian region with the second ovarian region.
In one embodiment, registering the first ovarian region with the second ovarian region comprises:
adjusting the position of the second ovarian region in the second ultrasound image according to the position of the first ovarian region in the first ultrasound image, so that the adjusted second ovarian region occupies the same position in the second ultrasound image as the first ovarian region does in the first ultrasound image; or,
adjusting the size of the second ovarian region relative to the first ovarian region based on the size of the first ovarian region such that the adjusted second ovarian region is the same size as the first ovarian region.
In one embodiment, registering the first ovarian region with a second ovarian region comprises:
determining the correspondence between the second ovarian region and the first ovarian region by using a second learning model, and adjusting the second ovarian region according to the correspondence; or,
receiving an instruction of an operator to adjust the second ovarian region according to the first ovarian region, and adjusting the second ovarian region according to the instruction.
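Registering the two ovarian regions by position and size, as in the embodiments above, can be sketched as a rigid translate-and-scale; using the centroid and the x-extent as alignment cues is an illustrative choice, not the disclosed procedure:

```python
def register_ovary(first_region, second_region):
    """Translate and scale the second ovarian region so its centroid and
    extent match those of the first ovarian region."""
    def centroid(pts):
        n = len(pts)
        return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

    def extent(pts):
        xs = [p[0] for p in pts]
        return (max(xs) - min(xs)) or 1.0  # guard against zero width

    c1, c2 = centroid(first_region), centroid(second_region)
    s = extent(first_region) / extent(second_region)
    return [((x - c2[0]) * s + c1[0], (y - c2[1]) * s + c1[1])
            for x, y in second_region]

first = [(0, 0), (4, 0), (0, 4), (4, 4)]
second = [(10, 10), (12, 10), (10, 12), (12, 12)]
print(register_ovary(first, second))  # [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]
```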
In one embodiment, on the basis of the above embodiment, the method further includes:
determining a first ovarian region of ovarian tissue of the subject in the first ultrasound image;
and matching the second ultrasonic image with the template of the first ovarian region, and determining a second ovarian region of the ovarian tissue in the second ultrasonic image according to the matching result.
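Template matching of the first-exam ovarian region against the second image could be sketched with a brute-force sum-of-absolute-differences search; a real system would more likely use normalized cross-correlation, and everything here is an assumption rather than the disclosed method:

```python
def template_match(image, template):
    """Slide `template` over `image` (2-D lists of intensities) and return
    the (row, col) offset with the minimum sum of absolute differences."""
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    best, best_cost = None, float("inf")
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            cost = sum(abs(image[y + i][x + j] - template[i][j])
                       for i in range(h) for j in range(w))
            if cost < best_cost:
                best, best_cost = (y, x), cost
    return best

image = [[0, 0, 0, 0],
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 0, 0]]
template = [[9, 9],
            [9, 9]]
print(template_match(image, template))  # (1, 2)
```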
In one embodiment, on the basis of the above embodiment, the method further includes:
receiving an instruction of an operator for registering the second ultrasonic image and the first ultrasonic image, and rotating, translating or scaling the second ultrasonic image according to the instruction so as to register the second ultrasonic image and the first ultrasonic image.
In one embodiment, the growth parameters include at least one of: volume, diameter, and growth rate.
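The growth parameters named above might be computed from segmented voxel counts as follows. The voxel size is an assumed constant, and the length parameter is read here as the follicle's equivalent spherical diameter; both are interpretations, not values the text specifies:

```python
import math

VOXEL_MM3 = 0.1  # assumed voxel volume in mm^3 (illustrative only)

def growth_parameters(voxels_prev, voxels_now, dt_hours):
    """Volume, equivalent spherical diameter, and growth rate between exams."""
    vol = voxels_now * VOXEL_MM3                                # mm^3
    diam = 2.0 * (3.0 * vol / (4.0 * math.pi)) ** (1.0 / 3.0)   # mm
    rate = (voxels_now - voxels_prev) * VOXEL_MM3 / dt_hours    # mm^3 per hour
    return vol, diam, rate

vol, diam, rate = growth_parameters(1200, 2000, dt_hours=48.0)
print(vol)  # 200.0
```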
In one embodiment, there is provided an oocyte tracking system, comprising:
an ultrasonic probe;
a transmitting circuit for exciting the ultrasonic probe to transmit ultrasonic waves to ovarian tissue of a subject;
a receiving circuit for exciting the ultrasonic probe to receive echoes of the ultrasonic waves to obtain echo signals of the ultrasonic waves;
a processor configured to perform the follicle tracking method according to any of the preceding embodiments; and
a display for displaying the growth trend graph.
In one embodiment, there is provided an oocyte tracking method, comprising:
acquiring ultrasound images of at least two different examination times of ovarian tissue of a subject, wherein the ovarian tissue comprises a target follicle;
determining a follicle region corresponding to the target follicle on the at least two ultrasound images at different examination times respectively to obtain at least two follicle regions, wherein the at least two follicle regions are follicle regions corresponding to the same target follicle;
determining growth parameters of the target follicle according to the at least two follicle areas respectively, and obtaining at least two growth parameters of the target follicle;
obtaining a growth trend map of the target follicle according to at least two growth parameters of the target follicle;
and displaying the growth trend graph.
In one embodiment, there is provided an oocyte tracking system, comprising:
an ultrasonic probe;
a transmitting circuit for exciting the ultrasonic probe to transmit ultrasonic waves to ovarian tissue of a subject;
a receiving circuit for exciting the ultrasonic probe to receive echoes of the ultrasonic waves to obtain echo signals of the ultrasonic waves;
a processor for performing the follicle tracking method described in the above embodiments.
With the follicle tracking method and system described above, a single target follicle can be tracked and monitored and its growth trend graph obtained, so that an operator can accurately assess the optimal oocyte retrieval time from the growth trend graph of the target follicle, effectively improving both work efficiency and accuracy.
In one embodiment, there is provided an oocyte tracking method, comprising:
acquiring a first ultrasound image of ovarian tissue of a subject, wherein the ovarian tissue comprises a target follicle;
determining a follicle region corresponding to the target follicle in the first ultrasonic image to obtain a first follicle region;
acquiring a second ultrasound image of the ovarian tissue of the subject;
a second follicular region is obtained by determining a follicular region in the second ultrasound image corresponding to the target follicle based on the first follicular region.
In one embodiment, determining a follicle region corresponding to the target follicle in the first ultrasound image, and obtaining a first follicle region includes:
segmenting the follicle region corresponding to the target follicle in the first ultrasound image based on image features of follicles to obtain the first follicle region; or,
detecting an operator tracing operation on a corresponding region of a target follicle in the first ultrasound image to obtain a first follicle region.
In one embodiment, the ultrasound image is a three-dimensional ultrasound image, and the segmenting a follicle region corresponding to the target follicle in the first ultrasound image based on the image feature of the follicle includes:
segmenting a corresponding region of the target follicle in a plurality of two-dimensional sectional images of the first ultrasound image based on image features of the follicle;
and integrating corresponding areas of the target follicle on the plurality of two-dimensional sectional images to obtain the first follicle area.
In one embodiment, the plurality of two-dimensional section images of the first ultrasound image are all of the two-dimensional section images in the first ultrasound image; or,
the plurality of two-dimensional section images of the first ultrasound image are sample images obtained by sampling the first ultrasound image according to a first preset rule, and integrating the corresponding regions of the target follicle on the plurality of two-dimensional section images comprises: performing three-dimensional interpolation on the segmentation results of the sample images to obtain the first follicle region.
In one embodiment, the ultrasound image is a three-dimensional ultrasound image, and the segmenting a follicle region corresponding to the target follicle in the first ultrasound image based on the image feature of the follicle includes:
based on the image characteristics of the follicle, a follicle region corresponding to the target follicle is segmented in three dimensions in the first ultrasound image to obtain the first follicle region.
In one embodiment, determining a follicle region corresponding to the target follicle in the first ultrasound image, and obtaining a first follicle region includes:
determining a region of the ovarian tissue in the first ultrasound image, obtaining a first ovarian region;
and determining a follicle region corresponding to the target follicle based on the first ovarian region, and obtaining a first follicle region.
In one embodiment, determining a follicle region corresponding to the target follicle in the first ultrasound image, and obtaining a first follicle region includes:
determining a plurality of first candidate follicular regions in the first ultrasound image;
and acquiring a growth parameter of each first follicle candidate area, and determining the first follicle candidate area with the growth parameter meeting a first preset condition as the first follicle area.
In one embodiment, determining a follicle region corresponding to the target follicle in the first ultrasound image, and obtaining a first follicle region includes:
determining a plurality of first candidate follicular regions in the first ultrasound image;
and acquiring a growth parameter of each first follicle candidate area, and determining the first follicle candidate area with the maximum growth parameter as the first follicle area.
In one embodiment, the determining a follicular region corresponding to the target follicle in the second ultrasound image according to the first follicular region to obtain a second follicular region includes:
acquiring a follicle region in the second ultrasound image whose similarity to the feature information of the first follicle region meets a first threshold, and determining the acquired follicle region as the second follicle region; or,
acquiring a second candidate follicle region corresponding to a target follicle in the second ultrasound image, determining a correspondence between the first follicle region and the second candidate follicle region by using a first learning model, adjusting the second candidate follicle region according to the correspondence, and determining the adjusted second candidate follicle region as the second follicle region.
In one embodiment, the determining a follicular region corresponding to the target follicle in the second ultrasound image according to the first follicular region to obtain a second follicular region includes:
detecting an operation of an operator to identify a follicle region corresponding to the first follicle region in the second ultrasound image, and determining the identified follicle region as a second follicle region.
In one embodiment, on the basis of the above embodiment, the method further includes:
determining a first ovarian region of ovarian tissue of the subject in the first ultrasound image;
determining a second ovarian region of ovarian tissue of the subject in the second ultrasound image;
registering the first ovarian region with the second ovarian region.
In one embodiment, registering the first ovarian region with the second ovarian region comprises:
adjusting the position of the second ovarian region in the second ultrasound image according to the position of the first ovarian region in the first ultrasound image, so that the adjusted second ovarian region occupies the same position in the second ultrasound image as the first ovarian region does in the first ultrasound image; or,
adjusting the size of the second ovarian region relative to the first ovarian region based on the size of the first ovarian region such that the adjusted second ovarian region is the same size as the first ovarian region.
In one embodiment, registering the first ovarian region with the second ovarian region comprises:
determining the correspondence between the second ovarian region and the first ovarian region by using a second learning model, and adjusting the second ovarian region according to the correspondence; or,
receiving an instruction of an operator to adjust the second ovarian region according to the first ovarian region, and adjusting the second ovarian region according to the instruction.
In one embodiment, on the basis of the above embodiment, the method further includes:
determining a first ovarian region of the ovarian tissue in the first ultrasound image;
and matching the second ultrasonic image with the template of the first ovarian region, and determining a second ovarian region of the ovarian tissue in the second ultrasonic image according to the matching result.
In one embodiment, on the basis of the above embodiment, the method further includes:
receiving an instruction of an operator for registering the second ultrasonic image and the first ultrasonic image, and rotating, translating or scaling the second ultrasonic image according to the instruction so as to register the second ultrasonic image and the first ultrasonic image.
In one embodiment, the growth parameters include at least one of: volume, diameter, and growth rate.
In one embodiment, there is provided an oocyte tracking system, comprising:
an ultrasonic probe;
a transmitting circuit for exciting the ultrasonic probe to transmit ultrasonic waves to ovarian tissue of a subject;
a receiving circuit for exciting the ultrasonic probe to receive echoes of the ultrasonic waves to obtain echo signals of the ultrasonic waves;
a processor configured to perform the follicle tracking method according to any of the preceding embodiments.
With the follicle tracking method and system described above, the same target follicle can be tracked and monitored across multiple ultrasound images, giving clinicians the growth and development status of that individual follicle, so that an operator can accurately assess the optimal oocyte retrieval time, effectively improving both work efficiency and accuracy.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below illustrate only some embodiments of the present application; those skilled in the art can obtain other drawings from them without inventive effort.
In the drawings:
FIG. 1 shows a schematic block diagram of an ultrasound imaging system according to an embodiment of the present application;
fig. 2 shows a schematic flow diagram of a follicle tracking method according to an embodiment of the invention;
fig. 3 shows a schematic diagram of a follicle tracking method according to another embodiment of the invention;
fig. 4 shows a schematic diagram of a follicle tracking method according to another embodiment of the invention;
fig. 5 shows a schematic diagram of a follicle tracking method according to another embodiment of the invention;
fig. 6 shows a schematic diagram of a follicle tracking method according to another embodiment of the invention;
fig. 7 shows a schematic diagram of a follicle tracking method according to another embodiment of the invention;
fig. 8 shows a schematic diagram of a follicle tracking method according to another embodiment of the invention;
fig. 9 shows a schematic diagram of a follicle tracking method according to another embodiment of the invention;
fig. 10 shows a schematic diagram of a follicle tracking method according to another embodiment of the invention;
fig. 11 shows a schematic diagram of a follicle tracking method according to another embodiment of the invention;
fig. 12 shows a schematic flow diagram of a follicle tracking method according to a further embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, exemplary embodiments according to the present application will be described in detail below with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the application described in the application without inventive step, shall fall within the scope of protection of the application.
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present application. It will be apparent, however, to one skilled in the art, that the present application may be practiced without one or more of these specific details. In other instances, well-known features of the art have not been described in order to avoid obscuring the present application.
It is to be understood that the present application is capable of implementation in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the application to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of the associated listed items.
In order to provide a thorough understanding of the present application, a detailed structure will be presented in the following description in order to explain the technical solutions presented in the present application. Alternative embodiments of the present application are described in detail below, however, the present application may have other implementations in addition to these detailed descriptions.
An ultrasound imaging system according to an embodiment of the present application is first described with reference to fig. 1, which shows a schematic structural block diagram of an ultrasound imaging system 100 according to an embodiment of the present application.
As shown in fig. 1, the ultrasound imaging system 100 includes an ultrasound probe 110, transmit/receive circuitry 112, a processor 114, and a display 116. Further, the ultrasound imaging system 100 may further include a beam forming circuit, a transmission/reception selection switch, and the like.
The ultrasound probe 110 typically includes an array of transducer elements. Each time an ultrasonic wave is transmitted, all or some of the elements of the ultrasound probe 110 participate in the transmission. Each participating element is excited by a transmission pulse and emits an ultrasonic wave, and the waves emitted by the individual elements superpose during propagation to form a synthesized ultrasonic beam directed at the region where the ovarian tissue of the subject is located.
The transmission/reception circuit 112 may be connected to the ultrasound probe 110 through a transmission/reception selection switch. The transmission/reception selection switch, which may also be referred to as a transmission/reception controller, may include a transmission controller and a reception controller. The transmission controller is configured to excite the ultrasound probe 110, via the transmission circuit, to transmit ultrasonic waves to the region where the ovarian tissue of the subject is located; the reception controller is configured to receive, via the reception circuit, the ultrasonic echoes returned from that region through the ultrasound probe 110, thereby obtaining ultrasonic echo data. The transmission/reception circuit 112 then sends the electrical signals of the ultrasonic echoes to the beam forming circuit, which performs focusing delay, weighting, channel summation and the like on the signals before sending the processed ultrasonic echo data to the processor 114.
Alternatively, the processor 114 may be implemented in software, hardware, firmware or any combination thereof, and may use circuits, one or more application-specific integrated circuits (ASICs), one or more general-purpose integrated circuits, one or more microprocessors, one or more programmable logic devices, or any combination of the foregoing, or other suitable circuits or devices, so that the processor 114 may perform the respective steps of the methods in the various embodiments of the present description. The processor 114 may also control other components in the ultrasound imaging system 100 to perform desired functions.
The processor 114 processes the received ultrasonic echo data to obtain ultrasound data of the ovarian tissue of the subject; the ultrasound data may be two-dimensional or three-dimensional. Taking three-dimensional ultrasound data as an example, the ultrasound probe 110 transmits/receives ultrasonic waves over a series of scanning planes, and the processor 114 integrates them according to their three-dimensional spatial relationship so as to scan the ovarian tissue in three-dimensional space and reconstruct a three-dimensional image. Finally, the processor 114 performs some or all image post-processing steps, such as denoising, smoothing and enhancement, to obtain three-dimensional ultrasound data of the ovarian tissue. The processor 114 may acquire ultrasound images of the ovarian tissue of the subject at at least three different examination times, wherein the ovarian tissue includes the target follicle. The processor 114 may also determine the regions corresponding to the same target follicle on the ultrasound images of the at least three different examination times, determine a growth parameter based on each follicle region corresponding to the target follicle, and obtain a growth trend map of the target follicle according to the growth parameters. The trend map obtained by the processor 114 may be stored in memory or displayed on the display 116.
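For illustration, the per-follicle flow described here — collecting a growth parameter for the same target follicle at each examination time and assembling the growth trend series — can be sketched as follows. The ellipsoid volume formula and all function names are illustrative assumptions, not part of the present application:

```python
import math
from datetime import date

def follicle_volume_mm3(d1, d2, d3):
    # Ellipsoid approximation V = (pi/6) * d1 * d2 * d3: a common clinical
    # stand-in for estimating follicle volume from three orthogonal
    # diameters in mm (an assumption, not specified by this application).
    return math.pi / 6.0 * d1 * d2 * d3

def growth_trend(measurements):
    # measurements maps examination date -> growth parameter (volume or
    # diameter) for ONE tracked follicle; returns the series sorted by
    # examination time, ready to plot as a growth trend map.
    dates = sorted(measurements)
    return dates, [measurements[d] for d in dates]
```

A usage example: three examinations yield three volumes, and sorting by date gives the trend regardless of acquisition order.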
The display 116 is connected with the processor 114, and the display 116 may be a touch screen, a liquid crystal display, or the like; or the display 116 may be a separate display device such as a liquid crystal display, a television, or the like, separate from the ultrasound imaging system 100; or the display 116 may be a display screen of an electronic device such as a smartphone, tablet, etc. The number of the display 116 may be one or more. For example, the display 116 may include a home screen for displaying ultrasound images and a touch screen for human-computer interaction.
The display 116 may display the ultrasound images obtained by the processor 114. In addition, the display 116 can provide the operator with a graphical interface for human-computer interaction while displaying the ultrasound image, and one or more controlled objects are provided on the graphical interface, and the operator is provided with a human-computer interaction device to input operation instructions to control the controlled objects, so as to execute corresponding control operations. For example, icons are displayed on the graphical interface, which can be manipulated by the human-computer interaction device to perform a particular function.
Optionally, the ultrasound imaging system 100 may also include a human-computer interaction device other than the display 116, which is connected to the processor 114, for example, the processor 114 may be connected to the human-computer interaction device through an external input/output port, which may be a wireless communication module, a wired communication module, or a combination thereof. The external input/output port may also be implemented based on USB, bus protocols such as CAN, and/or wired network protocols, etc.
The human-computer interaction device may include an input device for detecting input information of an operator, for example, control instructions for the transmission/reception timing of the ultrasonic waves, operation input instructions for drawing points, lines, frames, or the like on the ultrasonic images, or other instruction types. The input device may include one or more of a keyboard, mouse, scroll wheel, trackball, mobile input device (such as a mobile device with a touch screen display, cell phone, etc.), multi-function knob, and the like. The human-computer interaction device may also include an output device such as a printer.
The ultrasound imaging system 100 may also include a memory for storing instructions executed by the processor, storing received ultrasound echoes, storing ultrasound images, and so forth. The memory may be a flash memory card, solid state memory, hard disk, etc. Which may be volatile memory and/or non-volatile memory, removable memory and/or non-removable memory, etc.
It should be understood that the components of the ultrasound imaging system 100 shown in fig. 1 are merely illustrative, and that more or fewer components may be included. The present application is not limited in this respect.
Next, a follicle tracking method according to an embodiment of the present application will be described with reference to fig. 2. Fig. 2 is a schematic flow chart of a follicle tracking method 200 according to an embodiment of the present application.
In this embodiment, the same target follicle of the subject is tracked: ultrasound images of the same target follicle at different examination times are obtained, and the growth and development of that follicle are determined from the obtained ultrasound images, as shown in fig. 3, so that an operator can determine the timing of ovum retrieval from the growth and development of the same follicle. Compared with existing tracking that monitors overall follicle development, the tracking of a single follicle in this embodiment provides more accurate clinical information, from which the operator can more accurately evaluate the ovum retrieval time.
Referring to fig. 2, a follicle tracking method 200 according to an embodiment of the present application includes the following steps:
step 201, obtaining ultrasound images of ovarian tissue of a subject at at least three different examination times, wherein the ovarian tissue includes a target follicle.
In one embodiment, for the plurality of ultrasound images acquired at each examination, the processor may automatically select an ultrasound image including the target follicle from the plurality of ultrasound images as the ultrasound image for target follicle tracking at the current examination time; alternatively, an operator may manually select such an image from the acquired plurality of ultrasound images. In the same way, ultrasound images of the ovarian tissue of the same subject, each including the target follicle, can be acquired at multiple different times.
The ultrasound image may be a two-dimensional ultrasound image or a three-dimensional ultrasound image. In actual clinical practice, a subject usually undergoes four to six (at least three) ultrasound examinations between the injection of ovulation-stimulating drugs and ovum retrieval. At least three growth parameters are thus obtained for the same target follicle, and the growth trend map drawn from these at least three growth parameters provides the operator with more accurate information on the follicle's growth trend, so that the ovum retrieval time can be evaluated more accurately.
Step 202, determining a follicle region corresponding to the target follicle on the at least three ultrasound images at different examination times, respectively, to obtain at least three follicle regions, wherein the at least three follicle regions are follicle regions corresponding to the same target follicle.
For the same target follicle, the processor automatically determines the region corresponding to the target follicle on the ultrasound images of the at least three different examination times; alternatively, the operator manually determines that region on the ultrasound images. Growth parameters are thereby obtained from the ultrasound images acquired for the same target follicle at different times, so that the same target follicle can be tracked and an accurate ovum retrieval time evaluated.
The region corresponding to the same target follicle is determined on the different ultrasound images; this may be done automatically by the processor, manually by the operator, or through a combination of processor operation and operator confirmation.
When determining the region corresponding to the same target follicle on different ultrasound images, the region corresponding to the target follicle in one ultrasound image needs to be determined first, after which the follicle region corresponding to the same target follicle is determined in the ultrasound images acquired at the other examination times. For ease of illustration, the at least three ultrasound images at different examination times are referred to as a first ultrasound image, a second ultrasound image and a third ultrasound image; these names serve only to distinguish the ultrasound images acquired at the at least three different examination times, and "first", "second" and "third" do not indicate the order of the examination times.
First, a follicle region corresponding to the target follicle is determined in a first ultrasound image of the ultrasound images at the at least three different examination times, and a first follicle region is obtained.
In one embodiment, there may be one target follicle, or two or more target follicles. A target follicle may mature during its growth so that an ovum is eventually retrieved, or it may disappear before an ovum forms. Accordingly, over the course of tracking, a target follicle may disappear or a new target follicle may be added. Clinically, a plurality of target follicles are generally selected at the outset to avoid losing all target follicles during subsequent tracking.
The target follicle is chosen during the first selection process, which generally selects a follicle with better growth as the target follicle. For example, if the obtained ultrasound image is a two-dimensional ultrasound image, a follicle with a larger diameter on the image may be selected as the target follicle; if it is a three-dimensional ultrasound image, a follicle with a larger volume may be selected. The operator may select the target follicle manually, or the processor may identify it automatically.
In one embodiment, a plurality of first candidate follicle regions are determined based on the first ultrasound image, a growth parameter is determined for each of them, and a first candidate follicle region whose growth parameter satisfies a first preset condition is determined as the first region of the target follicle. The growth parameter is the volume or the diameter of the follicle. Taking volume as an example, the first preset condition may be that the volume of the follicle is larger than a preset threshold, or that the volume is relatively large among the first candidate follicle volumes, and so on. The first preset condition may be set automatically by the processor or manually by the operator.
In one embodiment, the first candidate follicle region with the largest growth parameter may be determined as the first region of the target follicle: that is, the first candidate follicle region with the largest volume, or the one with the largest diameter, may be determined as the target follicle region.
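The two selection rules above — a preset-condition filter and a largest-growth-parameter pick — can be sketched as follows. The candidate representation (dicts with "id" and "volume") and the function names are illustrative assumptions:

```python
def candidates_meeting_condition(candidates, min_volume):
    # One possible form of the first preset condition: keep candidate
    # follicle regions whose volume exceeds a preset threshold.
    return [c for c in candidates if c["volume"] > min_volume]

def largest_candidate(candidates):
    # Pick the candidate follicle region with the largest growth
    # parameter (here, volume) as the first region of the target follicle.
    return max(candidates, key=lambda c: c["volume"])
```

Either rule may be applied by the processor automatically, or the threshold may be set by the operator, consistent with the description above.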
A first region of the target follicle is determined based on a first ultrasound image of the ovarian tissue; the following takes a three-dimensional ultrasound image of the ovarian tissue as an example.
In one embodiment, the processor segments the follicle region corresponding to the target follicle on a first ultrasound image of the ultrasound images at the at least three different examination times, based on the image characteristics of the follicle, to obtain a first follicle region; or the processor detects an instruction from the operator tracing the region corresponding to the target follicle on the first ultrasound image, thereby obtaining the first follicle region.
When the processor automatically segments the region corresponding to the target follicle on the first ultrasound image based on the image characteristics of the follicle, there are generally two approaches. The first is a two-dimensional image segmentation approach: the first ultrasound image is divided into a plurality of two-dimensional sections, the region corresponding to the target follicle is segmented in each section, and the per-section results are integrated to obtain the first follicle region.
On the basis of the above embodiment, the plurality of two-dimensional sectional images of the first ultrasound image may be all two-dimensional sectional images in the first ultrasound image.
On the basis of the above embodiment, the plurality of two-dimensional sections of the first ultrasound image may be sampled images obtained by sampling the first ultrasound image according to a preset rule, for example radial sections cut around a selected point, as shown in fig. 4, or parallel sections, as shown in fig. 5. Three-dimensional interpolation is then performed on the segmentation results of the sampled images to obtain the first follicle region.
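The sample-then-interpolate idea can be illustrated with a minimal sketch: every `step`-th parallel section is segmented, and skipped sections are filled from the nearest segmented one. The per-section thresholding and the nearest-neighbour fill are simplifying assumptions standing in for the real 2-D segmentation algorithms and true three-dimensional interpolation:

```python
def segment_slice(section, threshold):
    # Per-section 2-D segmentation stand-in: follicles appear anechoic
    # (dark) on ultrasound, so mark pixels below the threshold as follicle.
    return [[1 if v < threshold else 0 for v in row] for row in section]

def segment_sampled_volume(volume, threshold, step=2):
    # Segment every `step`-th parallel section of the 3-D volume (a list of
    # 2-D sections), then fill the skipped sections by nearest-neighbour
    # copying, a simplified stand-in for 3-D interpolation of the results.
    n = len(volume)
    masks = {i: segment_slice(volume[i], threshold) for i in range(0, n, step)}
    sampled = sorted(masks)
    return [masks[min(sampled, key=lambda j: abs(j - i))] for i in range(n)]
```

Sampling fewer sections trades segmentation accuracy for speed; denser sampling approaches segmenting every section, as in the previous embodiment.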
Two-dimensional image segmentation methods fall mainly into two categories: traditional segmentation algorithms, which do not require modeling from a large amount of data, and machine-learning-based image segmentation methods, which require a large amount of data to build a learning model.
A conventional image segmentation algorithm detects the region corresponding to the target follicle by a target detection method (e.g., point detection, line detection) and segments that region to obtain the first follicle region. Common segmentation algorithms include Level Set based segmentation, Random Walk, Graph Cut, Snake, and the like.
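The detect-then-segment pipeline can be illustrated with seeded region growing, a classical algorithm in the same family as the Level Set and Graph Cut methods named here; it is a hypothetical stand-in, not an algorithm specified by the present application:

```python
def region_grow(img, seed, tol):
    # Classical seeded region growing: starting from a point detected inside
    # the follicle (follicles appear dark on ultrasound), collect 4-connected
    # pixels whose intensity stays within `tol` of the seed intensity.
    h, w = len(img), len(img[0])
    sy, sx = seed
    base = img[sy][sx]
    mask = [[0] * w for _ in range(h)]
    stack = [seed]
    while stack:
        y, x = stack.pop()
        if 0 <= y < h and 0 <= x < w and not mask[y][x] and abs(img[y][x] - base) <= tol:
            mask[y][x] = 1
            stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return mask
```

The seed here plays the role of the point-detection result; no training data is needed, consistent with the traditional category described above.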
A machine-learning-based approach segments the target follicle by learning the characteristics of the region corresponding to the target follicle in a database. The method mainly comprises the following steps: 1) constructing a database, which contains a large number of data sets and their corresponding annotation results, the annotation information being Mask information that accurately segments the target follicle, including boundary information and the area information enclosed by it; 2) segmentation, which mainly comprises two kinds of algorithms, one based on traditional machine learning and the other based on deep learning, specifically as follows:
A semantic segmentation algorithm based on traditional machine learning generally divides the first ultrasound image into a plurality of image blocks and extracts features from each block, using traditional methods such as PCA, LDA, Haar features and texture features, or a deep neural network such as Overfeat. The extracted features are then classified with a cascaded classifier, such as a KNN, SVM or random forest discriminator, to determine whether the current image block belongs to the region corresponding to the target follicle; the classification result is used as the marking result for the centre point of the current block, and a segmentation result for the whole image, i.e. the first follicle region, is finally obtained.
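The block-classification pipeline described here can be sketched as follows. The mean-intensity feature and the threshold rule are deliberately simple assumptions standing in for PCA/LDA/Haar features and a trained KNN/SVM/random-forest classifier:

```python
def block_mean(img, y, x, b):
    # Toy block feature: mean intensity of the b-by-b block at (y, x),
    # standing in for the richer features named in the text.
    vals = [img[yy][xx] for yy in range(y, y + b) for xx in range(x, x + b)]
    return sum(vals) / len(vals)

def classify_blocks(img, b=2, thr=100):
    # Slide over non-overlapping blocks and "classify" each with a threshold
    # rule (dark blocks -> follicle, label 1), the label serving as the
    # centre-point marking result of that block.
    h, w = len(img), len(img[0])
    return [[1 if block_mean(img, y, x, b) < thr else 0
             for x in range(0, w - b + 1, b)]
            for y in range(0, h - b + 1, b)]
```

Assembling the per-block labels yields the coarse segmentation of the whole image from which the first follicle region is read off.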
An end-to-end semantic segmentation algorithm based on deep learning structurally stacks convolution layers, pooling layers, and up-sampling or deconvolution layers to obtain an output image of the same size as the input first ultrasound image, and the region corresponding to the target follicle is segmented directly from the output image.
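The size-preserving property claimed here — pooling followed by up-sampling returns an output with the same spatial size as the input — can be demonstrated with a minimal sketch, using plain max pooling and nearest-neighbour up-sampling in place of learned layers:

```python
def maxpool2(img):
    # 2x2 max pooling: halves each spatial dimension (assumes even sizes),
    # playing the role of a pooling layer in the encoder.
    return [[max(img[y][x], img[y][x + 1], img[y + 1][x], img[y + 1][x + 1])
             for x in range(0, len(img[0]), 2)]
            for y in range(0, len(img), 2)]

def upsample2(img):
    # Nearest-neighbour up-sampling: doubles each spatial dimension,
    # playing the role of the up-sampling / deconvolution layers.
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]
        out += [wide, list(wide)]
    return out
```

In a real network such as U-Net, learned convolutions sit between these stages, but the same symmetry is what makes the output mask align pixel-for-pixel with the input image.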
In addition to the two-dimensional image segmentation method, the other method is a three-dimensional image segmentation method, in which the processor directly performs three-dimensional segmentation in the first ultrasound image based on the image features of the follicle, so as to determine a first follicle region, as shown in fig. 6.
Segmentation based on three-dimensional data may likewise detect the region corresponding to the target follicle by a target detection method (e.g., point detection, line detection) and segment that region to obtain the first follicle region. This approach does not require a large amount of annotated data; common segmentation algorithms include Level Set based segmentation, Random Walk, Graph Cut, Snake, three-dimensional thresholding, clustering, Markov random fields and the like.
The region may also be segmented using a machine learning method by learning the features of the region corresponding to the target follicle in a database. The method mainly comprises the following steps: 1) constructing a database, which contains a large number of data sets and their corresponding annotation results, the annotation information being Mask information obtained by accurately segmenting the region corresponding to the target follicle; 2) segmentation, which mainly comprises two kinds of algorithms, one based on traditional machine learning and the other based on deep learning, specifically as follows:
A semantic segmentation algorithm based on traditional machine learning generally divides the first ultrasound image into a plurality of image blocks and then extracts features from each block, for example three-dimensional edge features based on Sobel edge detection, gradients and texture features, or features extracted with a neural network such as Medical-Net. The extracted features are then classified with a cascaded classifier, such as a KNN, SVM or random forest discriminator, to determine whether the current image block belongs to the region corresponding to the target follicle; the classification result is used as the marking result for the centre point of the current block, and a segmentation result for the whole image, i.e. the first follicle region, is finally obtained.
An end-to-end semantic segmentation algorithm based on deep learning structurally stacks three-dimensional convolution layers, pooling layers, and up-sampling or deconvolution layers to obtain an output image of the same size as the input first ultrasound image, from which the required first follicle region is segmented directly. This is supervised learning, the input supervision information being mask information of the target region, so data preparation is time-consuming. Common three-dimensional segmentation networks include 3D U-Net, 3D FCN, Medical-Net and the like.
When the first ultrasound image is a two-dimensional ultrasound image, there are likewise two main ways of segmenting the first follicle region of the target follicle on the two-dimensional image: a traditional segmentation algorithm, which does not require modeling from a large amount of data, and a machine-learning-based image segmentation method, which requires a large amount of data to build a learning model. Both are as described above and are not repeated here.
Based on the first follicle region obtained in the above embodiment, a follicle region corresponding to the target follicle is determined in a second ultrasound image of the ultrasound images at the at least three different examination times, obtaining a second follicle region; and according to the first follicle region or the second follicle region, a follicle region corresponding to the target follicle is determined in a third ultrasound image, obtaining a third follicle region. The procedure is described by taking the example of obtaining the second follicle region in the second ultrasound image based on the first follicle region.
In one embodiment, the processor may automatically determine the follicle region corresponding to the target follicle in the second ultrasound image based on the first follicle region, obtaining the second follicle region; alternatively, the operator manually identifies the follicle region corresponding to the first follicle region in the second ultrasound image, the processor detects the operator's manual identification operation, and automatically determines the identified region as the second follicle region. The operator's manual identification includes manually delineating the second follicle region, for example by tracing it, and may further include highlighting the determined second follicle region.
The processor automatically determines the second follicle region in the second ultrasound image from the first follicle region in mainly two ways. One is a conventional image matching method: based on the characteristic information of the first follicle region, a sub-image similar to it is searched for in the second ultrasound image. This requires iteration to find the optimal match and is relatively slow, but does not require a large amount of data.
In one embodiment, the processor automatically determines, from the second ultrasound image, a second region whose similarity to the characteristic information of the first follicle region meets a first threshold, and determines that region as the second follicle region. The first threshold may be a percentage, such as 90%. The processor may set the first threshold automatically, or the operator may set it manually according to clinical requirements.
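As a sketch of such a similarity test, normalized cross-correlation (one of the measures named later in this description) can be computed between two equal-size regions and compared against the first threshold. The function names and region representation are illustrative assumptions:

```python
import math

def ncc(a, b):
    # Normalized cross-correlation between two equal-size regions (2-D
    # lists), flattened; returns a score in [-1, 1] used as the similarity.
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    den = math.sqrt(sum((x - ma) ** 2 for x in fa) * sum((y - mb) ** 2 for y in fb))
    return num / den if den else 0.0

def meets_first_threshold(region_a, region_b, first_threshold=0.9):
    # Accept region_b as the second follicle region when its similarity to
    # the first follicle region meets the first threshold (e.g. 90%).
    return ncc(region_a, region_b) >= first_threshold
```

In practice the threshold would be set by the processor or by the operator, as described above.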
In one embodiment, the characteristic information of the target follicle may be image signal information obtained by transforming the image signal from the spatial domain to the frequency domain by some transformation (e.g., a Fourier transform or another domain transform); or raw data information taken directly from the image without any processing, such as gray-scale values and contrast; or image feature information extracted by processing the raw data. Image features mainly include point, line and region features, and may also be divided into local and global features, point and line features being the most widely used. Point features mainly include Harris, Moravec, KLT, Haar-like, HOG, LBP, SIFT, SURF, BRIEF, SUSAN, FAST, CENSUS, FREAK, BRISK, ORB, optical flow methods, A-KAZE and the like; line features mainly include the LoG operator, Roberts operator, Sobel operator, Prewitt operator, Canny operator and the like.
Based on the differences among the above kinds of characteristic information, conventional image matching methods are correspondingly divided into three types: 1) domain-transformation-based methods, which match on the frequency-domain signal of the first follicle region, mainly using phase correlation, the Walsh transform, the wavelet transform and the like. 2) Template matching methods, which match on the raw data information (such as gray-scale values and contrast) of the first follicle region: according to the raw data of the first follicle region, the second ultrasound image is searched for a sub-image whose similarity to the first follicle region meets the first threshold, and that sub-image is determined as the second follicle region. No feature information needs to be extracted in this process. For example, matching may be performed on a gray-scale basis, also referred to as a correlation matching algorithm, using a spatial two-dimensional sliding template. Common algorithms include the mean absolute difference algorithm (MAD), sum of absolute differences (SAD), sum of squared differences (SSD), mean sum of squared differences (MSD), normalized cross-correlation (NCC), the sequential similarity detection algorithm (SSDA), the Hadamard transform algorithm (SATD), local gray-value coding, PIU and the like. 3) Feature-based methods, which first extract the image feature information of the first follicle region, then generate a feature descriptor from the extracted features, and finally determine in the second ultrasound image the second follicle region whose similarity to the descriptor meets the first threshold.
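The sliding-template search of type 2) can be sketched with the SAD score named in the list: the template (the first follicle region) is slid over the second image and the window with the smallest total absolute difference wins. The exhaustive scan and function names are illustrative assumptions:

```python
def sad(img, tmpl, oy, ox):
    # Sum of absolute differences between the template and the image
    # window whose top-left corner is (oy, ox).
    return sum(abs(img[oy + y][ox + x] - tmpl[y][x])
               for y in range(len(tmpl)) for x in range(len(tmpl[0])))

def match_template(img, tmpl):
    # Exhaustive spatial two-dimensional sliding-template search:
    # return the top-left corner with the smallest SAD score.
    th, tw = len(tmpl), len(tmpl[0])
    positions = [(y, x)
                 for y in range(len(img) - th + 1)
                 for x in range(len(img[0]) - tw + 1)]
    return min(positions, key=lambda p: sad(img, tmpl, p[0], p[1]))
```

This exhaustive iteration over all positions is exactly why the conventional matching route is slower than one-step deep-learning registration, as noted below.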
Besides the conventional image matching method, another way of determining the second follicle region in the second ultrasound image based on the first follicle region is deep-learning-based image registration, shown in fig. 7. In this approach, the correspondence between two image regions is determined by a trained learning model, and the two regions are aligned in position and size through at least one of rotation, translation and scaling. A large number of annotated samples are needed to build the database, and the registration can then be achieved in one step.
The processor automatically, or the operator manually, obtains a second candidate follicle region corresponding to the target follicle in the second ultrasound image; the first learning model is used to determine the correspondence between the first follicle region and the second candidate follicle region; the second candidate follicle region is adjusted according to that correspondence; and the adjusted second candidate follicle region is determined as the second follicle region. The main steps are: 1) database construction, where the database contains a large number of data sets and their corresponding label information, the labels being the aligning correspondence, such as a rotation relationship, a translation relationship, a scaling relationship, or any combination of the three; 2) registration, where the first follicle region and the second candidate follicle region are input, the correspondence between them is determined, and the second candidate follicle region of the target follicle is adjusted accordingly by at least one of rotation, translation, and scaling; the adjusted second candidate follicle region is output and determined as the second follicle region.
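The adjustment in step 2) above amounts to applying a similarity transform (rotate, scale, translate) to the candidate region. The sketch below applies such a transform to a region given as 2-D points; the hard-coded parameters stand in for what a trained model would output and are purely illustrative.

```python
import numpy as np

def apply_similarity(points, angle_rad, scale, tx, ty):
    """Rotate, scale, then translate 2-D points of shape (N, 2).
    This is only the adjustment step; the transform parameters would
    come from the trained learning model (hypothetical here)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    R = np.array([[c, -s],
                  [s,  c]])
    return (points @ R.T) * scale + np.array([tx, ty])

# toy: a unit-square candidate region, translated by (2, -1),
# no rotation or scaling
region = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
adjusted = apply_similarity(region, angle_rad=0.0, scale=1.0, tx=2.0, ty=-1.0)
```

Composing rotation, scaling, and translation in this fixed order is one convention; any combination of the three relations mentioned in the text can be expressed this way.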
Deep-learning-based image registration comes in two types: two-stage image registration and end-to-end image registration. The two-stage approach is similar to the feature-based matching method, except that the traditional features such as SIFT are replaced by features extracted with a deep learning network.
End-to-end deep-learning-based image registration can be divided into two types. One is supervised learning: the first follicle region and the second candidate follicle region are input into a trained first learning model — usually a stack of network structures such as convolutional layers, pooling layers, upsampling layers, or deconvolutional layers — which outputs the correspondence between the two regions; the supervision information is that correspondence, and common network structures include FCN, U-Net, and the like. The other is unsupervised learning, in which the first follicle region and the second candidate follicle region are input and the adjusted second candidate follicle region is output directly.
By the same method, the regions corresponding to the same target follicle in the ultrasound images from the other examination times are obtained in turn, completing continuous tracking and monitoring of the same target follicle across ultrasound images acquired at different examination times.
On the basis of the above embodiment, registration or matching of the ovarian tissue regions in different ultrasound images, or registration of the whole images, may be performed first, with tracking of the target follicle then performed on that basis. During tracking of the same target follicle, the follicle grows quickly and its size changes constantly, so the ultrasound images of the same target follicle obtained at different examination times differ — for example, in the follicle's diameter or volume — which makes continuous tracking and monitoring difficult. However, because the interval between examinations is relatively short, the relative position of the same target follicle within the ovarian tissue changes little. The ovarian tissue region can therefore be determined in the second ultrasound image first, and the second follicle region then determined on the second ultrasound image according to the relative position of the first follicle region within the ovarian tissue in the first ultrasound image. This reduces the influence of structures outside the ovary that resemble the target follicle, reduces the processor's computation, and yields more accurate results with higher processing efficiency.
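The relative-position idea above can be sketched simply: express the follicle's center as a fraction of the ovary's extent in the first image, then map that fraction into the ovary found in the second image to seed the search. The bounding-box representation and function names are illustrative assumptions, not the patent's implementation.

```python
def relative_center(follicle_bbox, ovary_bbox):
    """bbox = (row0, col0, row1, col1); return the follicle centre as a
    fraction of the ovary's height and width."""
    fr = (follicle_bbox[0] + follicle_bbox[2]) / 2
    fc = (follicle_bbox[1] + follicle_bbox[3]) / 2
    h = ovary_bbox[2] - ovary_bbox[0]
    w = ovary_bbox[3] - ovary_bbox[1]
    return ((fr - ovary_bbox[0]) / h, (fc - ovary_bbox[1]) / w)

def map_to_second(rel, ovary_bbox2):
    """Map a fractional position into the ovary region of the second image."""
    h = ovary_bbox2[2] - ovary_bbox2[0]
    w = ovary_bbox2[3] - ovary_bbox2[1]
    return (ovary_bbox2[0] + rel[0] * h, ovary_bbox2[1] + rel[1] * w)

# toy: follicle centred at (15, 15) inside a 40x40 ovary in image 1;
# the ovary appears at (5, 5)-(85, 85) in image 2
rel = relative_center((10, 10, 20, 20), (0, 0, 40, 40))
seed = map_to_second(rel, (5, 5, 85, 85))
```

Searching only around `seed` is what limits the influence of follicle-like structures outside the ovary and cuts computation.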
Of course, if the first ultrasound image and the second ultrasound image cover the same or substantially the same portion, the two images may be adjusted as a whole by at least one of rotation, translation, and scaling, so that the target follicle is tracked and monitored after the first and second ultrasound images have been registered. This approach, however, places relatively high demands on the operator's scanning technique at the earlier and later examinations.
It should be noted that in two-dimensional ultrasound, the acquired section may not contain the entire ovarian tissue because of the operator's scanning technique or other factors, which complicates tracking and monitoring of the target follicle; the acquired image may not even contain the target follicle at all. Tracking the same target follicle with two-dimensional ultrasound images therefore places relatively high demands on the operator's technique. With three-dimensional ultrasound, by contrast, the operator can easily acquire an image of the entire ovarian tissue, making it easier to capture the target follicle and reducing the demands on scanning technique.
It should be noted that registration or matching of the ovarian tissue regions, or registration of the entire image, is not a necessary step; it may be used or omitted.
In one embodiment, a first ovarian region of the ovarian tissue of the subject is determined in the first ultrasound image, a second ovarian region of the ovarian tissue of the subject is determined in the second ultrasound image, and the first ovarian region is registered with the second ovarian region. The processor may register the first and second ovarian regions automatically, or the operator may register them manually.
In one embodiment, the processor determines the first ovarian region in the first ultrasound image using a machine learning method based on object segmentation or a conventional segmentation algorithm, or the operator manually determines the first ovarian region.
In one embodiment, the position of the second ovarian region in the second ultrasound image is adjusted according to the position of the first ovarian region in the first ultrasound image, so that the adjusted second ovarian region occupies the same position in the second ultrasound image as the first ovarian region does in the first; the size of the second ovarian region can likewise be adjusted relative to the first ovarian region, according to the first ovarian region's size — its sectional diameter or its volume — so that the adjusted second ovarian region matches the first ovarian region in size. Both the position adjustment and the size adjustment can be performed automatically by the processor or manually by the operator.
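One simple way to realize the position and size adjustment described above is to match centroids (position) and a spread measure (size) between the two ovarian regions. The sketch below does this on region contour points; the RMS-spread size proxy is an illustrative choice, not something the description mandates.

```python
import numpy as np

def align_region(points2, points1):
    """Translate points2 so its centroid matches that of points1, and
    scale it so the RMS spread (a size proxy) matches -- a rough
    stand-in for the position/size adjustment of the second ovarian
    region toward the first."""
    c1, c2 = points1.mean(axis=0), points2.mean(axis=0)
    s1 = np.sqrt(((points1 - c1) ** 2).sum(axis=1).mean())
    s2 = np.sqrt(((points2 - c2) ** 2).sum(axis=1).mean())
    scale = s1 / s2
    return (points2 - c2) * scale + c1

# toy: the second ovary is half the size of the first and shifted away
ovary1 = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 2.0], [0.0, 2.0]])
ovary2 = np.array([[10.0, 10.0], [11.0, 10.0], [11.0, 11.0], [10.0, 11.0]])
aligned = align_region(ovary2, ovary1)   # lands exactly on ovary1
```

As the text notes, rough registration like this is an optimization step; approximate agreement in position and size is sufficient clinically.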
Because of the interval between the two acquisitions, factors such as the physician's scanning technique and small changes in ovarian shape caused by follicle growth can leave the two sets of ovarian data misaligned. For example, the first ovarian region may sit toward the left of the first ultrasound image while the second ovarian region sits toward the right or center of the second; the position of the second ovarian region then needs to be adjusted so that it matches the position of the first ovarian region in the first ultrasound image. Likewise, the first ovarian region may be relatively large and the second relatively small — where size may mean volume or diameter — in which case the second ovarian region needs to be enlarged until it matches the first ovarian region in size. "Same position" and "same size" in this embodiment include both the exactly same and the approximately same. In particular, manual adjustment by the operator inevitably carries some error and cannot be absolutely exact; but because registration of the ovarian tissue region is only an optimization rather than an indispensable step, rough registration is sufficient to basically meet clinical needs.
In one embodiment, the processor may automatically adjust the second ovarian region using a machine learning method such that the adjusted second ovarian region and the first ovarian region are registered, as shown with reference to fig. 8.
In one embodiment, the processor uses the second learning model to determine the correspondence between the first ovarian region and the second ovarian region — such as a rotation relationship, a translation relationship, a scaling relationship, or any combination thereof — and adjusts the second ovarian region based on the determined correspondence so that the second ovarian region is registered with the first ovarian region. The main steps are: 1) database construction, where the database contains a large number of data sets and their corresponding label information, the labels being the correspondence; 2) registration, where the first ovarian region and the second ovarian region are input, the correspondence between them is determined, the second ovarian region is adjusted according to the determined correspondence, and the adjusted second ovarian region is output.
The machine learning method further includes a deep learning method, and the registration of the ovarian tissue based on the deep learning is similar to the aforementioned registration method of the target follicle based on the deep learning, and is not described herein again.
In the above embodiment, the second ovarian region must first be determined in the second ultrasound image, and registration is then performed based on the acquired first and second ovarian regions. Alternatively, registration of the ovarian regions may be performed in a way that does not require determining a second ovarian region in advance, for example by a conventional image matching method: a sub-image whose similarity meets a preset threshold is searched for in the second ultrasound image based on the feature information of the first ovarian region, and the qualifying sub-image is determined as the second ovarian region.
Conventional image matching methods include: domain-transform-based methods; template-matching-based methods; and image-feature-based matching methods. Determining the second ovarian region with a conventional image matching method is similar to determining the second follicle region with a conventional image matching method as described above, and is not repeated here.
In one embodiment, registration of the ovarian regions can be achieved manually by the operator through whole-image registration: an instruction from the operator to register the second ultrasound image with the first ultrasound image is received, and the second ultrasound image is rotated, translated, or scaled according to the instruction so that the second and first ultrasound images are registered.
During tracking and monitoring of the same target follicle, new follicles may appear or target follicles may disappear. A follicle in the later ultrasound image that corresponds to a target follicle in the earlier image is retained, and information such as its volume, diameter, and position is recorded; a target follicle that appeared in the earlier ultrasound image but has disappeared in the later one is deleted; and for a follicle newly appearing in the later ultrasound image, if its growth parameter meets a preset condition, it is added to the tracking list as a new target follicle, otherwise it is discarded. The preset condition is that the volume or the diameter meets a certain threshold, where the specific threshold can be set automatically by the processor or manually by the operator.
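The bookkeeping just described — keep matched follicles, drop vanished ones, promote qualifying newcomers — can be sketched as follows. Identifiers, units, and the default threshold are hypothetical.

```python
def update_tracking(tracked_ids, current, volume_threshold=0.5):
    """Carry forward target follicles still present, drop those that
    disappeared, and promote new follicles whose volume meets the
    threshold. `current` maps follicle id -> measured volume
    (hypothetical units, e.g. mL)."""
    kept = [fid for fid in tracked_ids if fid in current]          # still present
    new = [fid for fid, vol in current.items()
           if fid not in tracked_ids and vol >= volume_threshold]  # newly qualifying
    return kept + new

# toy: F2 disappeared; F4 is new and large enough; F5 is new but too small
tracked = ["F1", "F2", "F3"]
current = {"F1": 1.2, "F3": 0.9, "F4": 0.7, "F5": 0.2}
updated = update_tracking(tracked, current)   # ["F1", "F3", "F4"]
```

A diameter-based threshold would work identically with diameters in `current`.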
Step 203, determining growth parameters of the target follicle according to the at least three follicle regions, respectively, and obtaining at least three growth parameters of the target follicle.
Wherein the growth parameters include at least one of: volume, diameter, and growth rate.
In one embodiment, the processor may determine automatically, or the operator may determine manually, the growth parameters of the target follicle, obtaining at least three growth parameters of the target follicle; for example, the long diameter and short diameter of the target follicle in the two-dimensional image may be obtained manually, as shown in fig. 9.
Illustratively, the acquired ultrasound image is a three-dimensional ultrasound image. The volume of the target follicle can be measured from the voxels of the segmentation result and the spacing between voxels. The diameter of the target follicle can be obtained by fitting the segmentation result of the target follicle region to an ellipsoid, determining the section of the ellipsoid in which the area of the target follicle region is largest or meets a preset requirement (which may be user-defined or set by the machine), taking the longest diameter in the determined section as the long diameter, and taking the longest diameter perpendicular to the long diameter as the short diameter. Here "perpendicular" includes both exactly and approximately perpendicular; clinically, for example, an angle between 85 and 95 degrees can essentially be regarded as perpendicular.
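The voxel-based volume measurement above reduces to counting segmented voxels and multiplying by the voxel volume given the spacing. A minimal sketch, with a hypothetical binary mask and spacing:

```python
import numpy as np

def follicle_volume(mask, spacing):
    """Volume of a segmented follicle from a 3-D binary mask:
    voxel count x voxel volume, with `spacing` = (dz, dy, dx),
    e.g. in mm, giving a volume in mm^3."""
    voxel_volume = spacing[0] * spacing[1] * spacing[2]
    return mask.sum() * voxel_volume

# toy: a 4x4x4 block of segmented voxels at 0.5 mm isotropic spacing
mask = np.zeros((10, 10, 10), dtype=bool)
mask[2:6, 2:6, 2:6] = True                       # 64 voxels
vol = follicle_volume(mask, (0.5, 0.5, 0.5))     # 64 * 0.125 = 8.0 mm^3
```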
Illustratively, the acquired ultrasound image is a two-dimensional ultrasound image. An ellipse is fitted to the target follicle region in the two-dimensional image, and the long diameter and short diameter of the target follicle are determined from the ellipse. To estimate volume, multiple section images perpendicular to the two-dimensional image are acquired through the target follicle, and the section whose diameter is largest or meets a preset requirement (user-defined or set by the machine) is selected manually or automatically; the height of the target follicle is determined from the selected section image, and the volume of the target follicle is calculated from the determined height, long diameter, and short diameter.
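A common way to turn three orthogonal diameters (the two in-plane diameters plus the out-of-plane height) into a volume is the ellipsoid approximation V = (π/6)·d1·d2·d3. The patent does not specify this formula, so the sketch below is one plausible reading:

```python
import math

def ellipsoid_volume(d1, d2, d3):
    """Ellipsoid approximation from three orthogonal diameters (mm):
    V = (pi/6) * d1 * d2 * d3, a widely used clinical volume estimate
    (an assumption here, not stated explicitly in the text)."""
    return math.pi / 6.0 * d1 * d2 * d3

# toy: long diameter 10 mm, short diameter 8 mm, height 6 mm
v = ellipsoid_volume(10.0, 8.0, 6.0)   # ~251.3 mm^3
```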
Based on the above embodiment, the growth rate of the target follicle can be determined from the obtained volumes or diameters and the growth time between measurements.
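Concretely, the growth rate between two examinations is the change in the parameter divided by the elapsed time — a minimal sketch with hypothetical units:

```python
def growth_rate(value_prev, value_curr, days_between):
    """Average growth rate between two examinations, e.g. mm/day for
    diameter or mm^3/day for volume (units are whatever the inputs use)."""
    return (value_curr - value_prev) / days_between

# toy: diameter grew from 12 mm to 16 mm over 2 days -> 2 mm/day
rate = growth_rate(12.0, 16.0, 2.0)
```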
And step 204, obtaining a growth trend chart of the target follicle according to at least three growth parameters of the target follicle.
And step 205, displaying a growth trend graph.
In one embodiment, the growth trend graph of the target follicle may be a growth parameter graph, wherein the growth parameter graph has a first coordinate of the examination time and a second coordinate of the growth parameter, such as shown with reference to fig. 10.
In one embodiment, the growth trend map of the target follicle may be a list of growth parameters corresponding to different examination times, as shown in fig. 11.
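The list-style trend display of fig. 11 — growth parameters per examination time — can be sketched as a simple table builder; the field names are hypothetical:

```python
def trend_table(exam_dates, volumes, diameters):
    """Tabulate growth parameters per examination time, mirroring the
    list form of the growth trend display (hypothetical field names)."""
    return [{"date": d, "volume": v, "diameter": dia}
            for d, v, dia in zip(exam_dates, volumes, diameters)]

# toy: three examinations of one target follicle
rows = trend_table(["day 1", "day 3", "day 5"],
                   [0.4, 0.9, 1.6],
                   [9.0, 12.0, 15.0])
```

The same rows could equally feed the parameter-versus-time curve of fig. 10.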
Based on the above description, with the follicle tracking method and system provided by the embodiments of the present application, a single target follicle can be tracked and monitored and its growth trend chart finally obtained, so that the operator can accurately assess the optimal ovum retrieval time from the target follicle's growth trend chart, effectively improving both working efficiency and accuracy.
In one embodiment, a method of follicle tracking is provided. Next, a follicle tracking method according to an embodiment of the present application will be described with reference to fig. 12. Fig. 12 is a schematic flow chart of a follicle tracking method 300 according to an embodiment of the present application.
As shown in fig. 12, a follicle tracking method 300 according to an embodiment of the present application includes the steps of:
in step 301, a first ultrasound image of ovarian tissue of a subject is acquired, wherein the ovarian tissue comprises a target follicle.
The processor automatically, or the operator manually, acquires a first ultrasound image of ovarian tissue of a subject, where the ovarian tissue includes a target follicle. The ultrasound image may be a two-dimensional or a three-dimensional ultrasound image. In actual clinical practice, a subject usually undergoes multiple ultrasound examinations between the injection of an ovulation-inducing drug and ovum retrieval. Based on clinical needs, at least two ultrasound examinations are generally performed, and the same target follicle is tracked across the ultrasound images obtained at the different examination times. Of course, if the same target follicle is tracked across ultrasound images obtained at three to six different examination times, the estimated ovum retrieval time is more accurate.
Step 302, a follicle region corresponding to the target follicle is determined in the first ultrasound image, and a first follicle region is obtained.
In one embodiment, there may be one target follicle, or two or more. A target follicle may keep growing and eventually yield an ovum to be retrieved, or it may fail to form an ovum and disappear along the way. Target follicles may likewise disappear, or new ones be added, over the whole course of tracking. Clinically, multiple target follicles are generally selected at the outset to avoid losing all target follicles during subsequent tracking.
The target follicles are chosen during the first selection, which generally picks follicles that are growing well: a follicle with a larger diameter on the section, one with a larger volume, or one that has been growing longer. The physician may select such follicles manually, or the processor may identify them automatically and list them as target follicles.
In one embodiment, a plurality of first candidate follicle regions are determined from the first ultrasound image, the growth parameter of each candidate region is determined, and a first candidate follicle region whose growth parameter satisfies a first preset condition is determined as the first follicle region of the target follicle. The growth parameter is the volume or the diameter of the follicle. Taking volume as an example, the first preset condition may be that the follicle's volume exceeds a certain threshold, or that its volume is relatively large among the candidate volumes. The first preset condition may be set automatically by the processor or manually by the operator.
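Both readings of the first preset condition — an absolute threshold, or "relatively large" among the candidates — can be sketched together; the median-based fallback is one illustrative interpretation of "relatively large", not the patent's definition:

```python
def select_targets(candidate_volumes, threshold=None):
    """Pick target follicles among candidates by volume. With a
    threshold, keep candidates at or above it; without one, keep those
    at least as large as the median volume (one hypothetical reading
    of 'relatively large')."""
    if threshold is None:
        ordered = sorted(candidate_volumes.values())
        threshold = ordered[len(ordered) // 2]   # median-ish cut
    return sorted(fid for fid, v in candidate_volumes.items() if v >= threshold)

# toy: four candidate follicles with volumes in hypothetical units
candidates = {"A": 0.3, "B": 1.1, "C": 0.8, "D": 0.2}
targets = select_targets(candidates, threshold=0.5)   # ["B", "C"]
```

The same selection works on diameters by passing diameters instead of volumes.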
In one embodiment, the first candidate follicle region with the largest growth parameter may be determined as the first follicle region of the target follicle: either the candidate region with the largest volume or the candidate region with the largest diameter.
In one embodiment, the first ultrasound image of ovarian tissue may be a two-dimensional ultrasound image, or may be a three-dimensional ultrasound image.
The processor may automatically segment a follicle region corresponding to the target follicle on the first ultrasound image to obtain a first follicle region; or the processor detects an instruction from the operator to trace a corresponding region of the target follicle of the first ultrasound image to obtain the first follicle region.
Based on the image features of the follicle, the processor automatically segments the target follicle on the first ultrasound image in one of two ways. One is two-dimensional image segmentation: the first ultrasound image is divided into a plurality of two-dimensional sections, the region corresponding to the target follicle is segmented in these sections, and the corresponding regions of the target follicle in the sections are combined to obtain the first follicle region.
The plurality of two-dimensional sections may be all of the two-dimensional sections in the first ultrasound image; alternatively, they may be sample images acquired from the first ultrasound image according to a preset rule, in which case the segmentation results of the sample images are interpolated in three dimensions to obtain the first follicle region.
Two-dimensional image segmentation mainly takes two forms: traditional segmentation algorithms, which do not require modeling from large amounts of data, and machine-learning-based image segmentation, which requires large amounts of data to build a learning model. Both are as described above and are not repeated here.
Another method is a three-dimensional image segmentation method, in which the processor directly performs a three-dimensional segmentation in the first ultrasound image based on image features of the follicle, thereby determining the boundary of the first follicular region.
Segmentation based on three-dimensional data can detect the region corresponding to the target follicle with a target detection method (such as point detection or line detection) and then segment that region; this approach does not require large amounts of labeled data. Common segmentation algorithms include Level Set, Random Walk, Graph Cut, Snake, Otsu thresholding, clustering, Markov random fields, and the like.
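Of the training-free algorithms listed, Otsu thresholding is the simplest to illustrate: it picks the gray threshold that maximizes between-class variance and needs no labeled data. A pure-NumPy sketch on a toy image with a bright follicle-like blob (the data and values are hypothetical):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method on an integer image in [0, 255]: choose the
    threshold that maximises between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    global_mean = (np.arange(256) * hist).sum() / total
    best_t, best_var = 0, -1.0
    cum_w, cum_mean = 0.0, 0.0
    for t in range(256):
        cum_w += hist[t]
        cum_mean += t * hist[t]
        if cum_w == 0 or cum_w == total:
            continue
        w0 = cum_w / total                                 # background weight
        m0 = cum_mean / cum_w                              # background mean
        m1 = (global_mean * total - cum_mean) / (total - cum_w)  # foreground mean
        var = w0 * (1 - w0) * (m0 - m1) ** 2               # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# toy: dark background (20) with a bright 8x8 blob (200)
img = np.full((32, 32), 20, dtype=np.uint8)
img[8:16, 8:16] = 200
t = otsu_threshold(img)
mask = img > t        # segments exactly the 64 bright pixels
```

In practice this would be followed by connected-component analysis to isolate individual follicle regions.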
Alternatively, the region may be segmented by using a machine learning method to learn, from a database, the features of the region corresponding to the target follicle. The main steps are: 1) database construction, where the database contains a large number of data sets and their corresponding labels, the label information being boundary information that accurately segments the target; 2) segmentation, using mainly two classes of algorithms, one based on traditional machine learning and one based on deep learning. The machine-learning-based segmentation is as described above and is not repeated here.
Step 303, a second ultrasound image of the ovarian tissue of the subject is obtained.
The processor automatically acquires a second ultrasound image of the ovarian tissue of the subject, or the operator manually determines the second ultrasound image of the ovarian tissue of the subject.
Step 304 is to determine a follicle region corresponding to the target follicle in the second ultrasound image based on the first follicle region, and obtain a second follicle region.
In one embodiment, the processor may automatically determine a follicular region corresponding to the target follicle in the second ultrasound image based on the first follicular region, obtaining a second follicular region; or the operator manually identifies the follicle region corresponding to the first follicle region in the second ultrasound image, and the processor detects the manual identification operation of the operator and automatically determines the identified follicle region as the second follicle region.
The processor automatically determines the second follicle region in the second ultrasound image from the first follicle region mainly in one of two ways. One is the conventional image matching method: a sub-image similar to the feature information of the first follicle region is searched for in the second ultrasound image based on that feature information. This requires iterating to find an optimal solution and is relatively slow, but it does not require large amounts of data.
In one embodiment, the processor automatically determines, in the second ultrasound image, a second region whose feature-information similarity with the first follicle region meets a first threshold, and determines that region as the second follicle region. The first threshold may be a percentage, such as 90%. The processor may set the first threshold automatically, or the operator may set it manually according to clinical requirements.
In one embodiment, the feature information of the target follicle may be: image signal information obtained by transforming the image signal from the spatial domain to the frequency domain (for example, by the Fourier transform or another domain transform); raw data information taken directly from the image without any processing, such as gray values and contrast; or image feature information extracted from the raw data. Image features can mainly be divided into point, line, and region features, and also into local and global features, with point and line features the most widely used. Point features include Harris, Moravec, KLT, Haar-like, HOG, LBP, SIFT, SURF, BRIEF, SUSAN, FAST, CENSUS, FREAK, BRISK, ORB, optical flow, A-KAZE, and the like; line features mainly include the LoG, Roberts, Sobel, Prewitt, and Canny operators.
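As an example of matching on frequency-domain information, phase correlation estimates the translation between two images from the normalized cross power spectrum. A minimal NumPy sketch (the toy blob and shift are hypothetical):

```python
import numpy as np

def phase_correlation_shift(img1, img2):
    """Estimate the integer translation taking img2 to img1 via phase
    correlation: the inverse FFT of the normalised cross power spectrum
    peaks at the displacement."""
    F1 = np.fft.fft2(img1)
    F2 = np.fft.fft2(img2)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12          # keep only phase information
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image back to negative values
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)

# toy: shift a blob by (3, 5) and recover the displacement
a = np.zeros((32, 32))
a[10:14, 10:14] = 1.0
b = np.roll(np.roll(a, 3, axis=0), 5, axis=1)
shift = phase_correlation_shift(b, a)       # (3, 5)
```

Pure translation is the easy case; recovering rotation and scale with this family of methods requires extensions such as log-polar resampling.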
Based on the differences among the above types of feature information, conventional image matching methods fall into three corresponding categories. 1) Domain-transform-based methods match on the frequency-domain information of the first follicle region, mainly using phase correlation, the Walsh transform, the wavelet transform, and the like. 2) Template-matching-based methods match on the raw data information of the first follicle region, such as gray values: according to that raw data information, the second ultrasound image is searched for a sub-image whose similarity to the first follicle region meets a first threshold, and the qualifying sub-image is determined as the second follicle region; no feature information needs to be extracted from the ultrasound image in this process. 3) Feature-based matching methods first extract image feature information from the first follicle region, then generate a feature descriptor from the extracted features, and finally determine, in the second ultrasound image, the second region whose similarity to the descriptor meets the first threshold.
Besides the conventional image matching method, the second follicle region can be determined in the second ultrasound image from the first follicle region by another method: deep-learning-based image registration. This approach requires a large number of labeled samples to build a database, but can then accomplish registration of the images in a single step.
The processor automatically, or the operator manually, obtains a second candidate follicle region corresponding to the target follicle in the second ultrasound image; the first learning model is used to determine the correspondence between the first follicle region and the second candidate follicle region; the second candidate follicle region is adjusted according to that correspondence; and the adjusted second candidate follicle region is determined as the second follicle region. The main steps are: 1) database construction, where the database contains a large number of data sets and their corresponding label information, the labels being the aligning correspondence, such as a rotation relationship, a translation relationship, a scaling relationship, or any combination of the three; 2) registration, where the first follicle region and the second candidate follicle region are input, the correspondence between them is determined, and the second candidate follicle region of the target follicle is adjusted accordingly by at least one of rotation, translation, and scaling; the adjusted second candidate follicle region is output and determined as the second follicle region.
Deep-learning-based image registration comes in two types: two-stage image registration and end-to-end image registration. The specific deep-learning-based registration method is as described above and is not repeated here.
By the same method, the regions corresponding to the same target follicle in the ultrasound images from the other examination times are obtained in turn, completing continuous tracking and monitoring of the same target follicle across ultrasound images acquired at different examination times.
On the basis of the above embodiment, the registration or matching of the ovarian tissue regions in different ultrasound images may be performed first, or the registration of the whole image may be performed, and then the tracking monitoring of the target follicle is performed on the basis.
In one embodiment, a first ovarian region of the ovarian tissue of the subject is determined in the first ultrasound image, a second ovarian region of the ovarian tissue of the subject is determined in the second ultrasound image, and the first ovarian region is registered with the second ovarian region. The processor may register the first and second ovarian regions automatically, or the operator may register them manually.
In one embodiment, the processor determines the first ovarian region in the first ultrasound image using a machine-learning-based object segmentation method or a conventional segmentation algorithm, or the operator determines the first ovarian region manually.
In one embodiment, the position of the second ovarian region in the second ultrasound image is adjusted according to the position of the first ovarian region in the first ultrasound image, so that the adjusted second ovarian region occupies the same position in the second ultrasound image as the first ovarian region does in the first ultrasound image. The size of the second ovarian region relative to the first ovarian region may likewise be adjusted according to the size of the first ovarian region, such as the radial length of its section or its volume, so that the adjusted second ovarian region has the same size as the first ovarian region. Both the position adjustment and the size adjustment may be performed automatically by the processor or manually by an operator.
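The two adjustments can be illustrated separately: a position adjustment that moves the second ovarian region to the position of the first, and a size adjustment that rescales it. This is a minimal sketch under stated assumptions — both regions are given as arrays of contour points, and the maximal axis-aligned extent serves as a simple stand-in for the radial length; the helper names are hypothetical.

```python
import numpy as np

def adjust_position(region_pts, ref_center):
    """Shift the second ovarian region so its centroid coincides with the
    centroid (ref_center) of the first ovarian region."""
    return region_pts + (np.asarray(ref_center) - region_pts.mean(axis=0))

def adjust_size(region_pts, ref_extent):
    """Scale the second ovarian region about its centroid so its maximal
    axis-aligned extent matches that (ref_extent) of the first region."""
    c = region_pts.mean(axis=0)
    extent = (region_pts.max(axis=0) - region_pts.min(axis=0)).max()
    return (region_pts - c) * (ref_extent / extent) + c
```

Either step can be applied alone, which matches the "position ... ; alternatively, size" structure of the embodiment.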
In one embodiment, the processor may automatically adjust the second ovarian region using a machine learning method such that the adjusted second ovarian region and the first ovarian region are registered.
In one embodiment, the processor determines the correspondence between the first ovarian region and the second ovarian region, such as a rotation relationship, a translation relationship, a scaling relationship, or any combination thereof, using the second learning model, and adjusts the second ovarian region based on the determined correspondence so that the second ovarian region is registered with the first ovarian region. The method mainly comprises the following steps: 1) a database building step, wherein the database comprises a large number of data sets and their corresponding label information, the label information being the correspondence; 2) a registration step, namely inputting the first ovarian region and the second ovarian region into the learning model trained on the database, determining the correspondence between the first ovarian region and the second ovarian region, adjusting the second ovarian region according to the determined correspondence, and outputting the adjusted second ovarian region.
The machine learning methods include deep learning methods; deep-learning-based registration of the ovarian tissue is similar to the aforementioned deep-learning-based registration of the target follicle, and is not repeated here.
In the above embodiment, the second ovarian region is first determined in the second ultrasound image, and registration is then performed based on the acquired first ovarian region and second ovarian region. Alternatively, registration of the ovarian region may be performed in a manner that does not require first determining a second ovarian region, such as a conventional image registration method. In a conventional image registration method, a sub-image whose similarity meets a preset threshold is searched for in the second ultrasound image based on the feature information of the first ovarian region, and the sub-image whose similarity meets the preset threshold is determined as the second ovarian region.
Conventional image registration methods include: methods based on domain transforms; methods based on template matching; and methods based on image features. Determining the second ovarian region using conventional image matching methods is similar to determining the second follicle region using conventional image matching methods as described above, and is not repeated here.
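A template-matching search of the kind described — sliding the first-region template over the second image and keeping the sub-image whose similarity exceeds a preset threshold — can be sketched with normalized cross-correlation. This is an illustrative brute-force version, not the patent's specific implementation; the function name and threshold are assumptions.

```python
import numpy as np

def template_match(image, template, threshold=0.8):
    """Slide the template over the image and return the top-left corner of
    the window with the highest normalized cross-correlation above the
    threshold, or None if no window qualifies."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.linalg.norm(t)
    best, best_score = None, threshold
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i + th, j:j + tw]
            wc = w - w.mean()
            denom = np.linalg.norm(wc) * tnorm
            if denom == 0:
                continue  # flat window: correlation undefined, skip it
            score = float((wc * t).sum() / denom)
            if score > best_score:
                best, best_score = (i, j), score
    return best
```

In practice a library routine (e.g. OpenCV's `matchTemplate`) would replace the double loop, but the threshold-then-best-score logic mirrors the "similarity meets a preset threshold" criterion above.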
In one embodiment, registration of the ovarian region can also be achieved manually by an operator through whole-image registration: an instruction from the operator to register the second ultrasound image with the first ultrasound image is received, and the second ultrasound image is rotated, translated, or scaled according to the instruction so that the second ultrasound image is registered with the first ultrasound image.
In the process of registering target follicles, a new follicle may appear, or a target follicle may disappear. A follicle in the later ultrasound image that corresponds to a target follicle in the earlier ultrasound image is retained, and information such as the volume and diameter at the corresponding position is recorded; a target follicle that appears in the earlier ultrasound image but disappears in the later one is deleted; and for a follicle newly appearing in the later ultrasound image, if its growth parameter meets a preset condition, it is added to the tracking line as a new target follicle, and otherwise it is discarded. The preset condition is that the volume or the diameter meets a certain threshold condition; the specific threshold may be set automatically by the processor or manually by an operator.
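The bookkeeping described above — retaining persisting follicles, deleting vanished ones, and admitting new follicles whose growth parameter meets the preset condition — can be sketched as follows. The dictionary layout, follicle identifiers, and volume threshold are illustrative assumptions, not the patent's data model.

```python
def update_tracking(tracked, current, volume_threshold=0.5):
    """Update the set of tracked target follicles between two exams.

    tracked: dict follicle_id -> history list of growth parameters
    current: dict follicle_id -> growth parameter (e.g. volume in mL)
             measured in the later ultrasound image
    """
    updated = {}
    for fid, history in tracked.items():
        if fid in current:
            # follicle persists: keep it and record the new measurement
            updated[fid] = history + [current[fid]]
        # follicle absent from the later image: dropped from the tracking line
    for fid, vol in current.items():
        if fid not in tracked and vol >= volume_threshold:
            updated[fid] = [vol]  # qualifying new follicle joins tracking
    return updated
```

Calling this once per new exam keeps a growth-parameter history per follicle, from which the growth trend map can then be plotted.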
Based on the above description, the follicle tracking method and follicle tracking system of the embodiments of the present application enable the same target follicle to be tracked and monitored across multiple ultrasound images, so that the growth and development of the same target follicle can be followed clinically. This allows an operator to accurately evaluate the optimal time for ovum retrieval, effectively improving working efficiency and accuracy.
Although the example embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the above-described example embodiments are merely illustrative and are not intended to limit the scope of the present application thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present application. All such changes and modifications are intended to be included within the scope of the present application as claimed in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not executed.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the description of exemplary embodiments of the present application, various features of the present application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the application and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, should not be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the application and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the present application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some of the modules according to embodiments of the present application. The present application may also be embodied as apparatus programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present application may be stored on a computer readable medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etcetera does not indicate any ordering; these words may be interpreted as names.
The above description is merely of specific embodiments of the present application, and the protection scope of the present application is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed in the present application, and such changes or substitutions shall be covered by the protection scope of the present application. The protection scope of the present application shall be subject to the protection scope of the claims.

Claims (38)

1. A method of follicle tracking, comprising:
acquiring ultrasound images of ovarian tissue of a subject at least three different examination times, wherein the ovarian tissue comprises a target follicle;
determining a follicle region corresponding to the target follicle on the at least three ultrasound images at different examination times respectively to obtain at least three follicle regions, wherein the at least three follicle regions are follicle regions corresponding to the same target follicle;
determining growth parameters of the target follicle according to the at least three follicle areas respectively, and obtaining at least three growth parameters of the target follicle;
obtaining a growth trend map of the target follicle according to at least three growth parameters of the target follicle;
and displaying the growth trend graph.
2. The method of claim 1, wherein:
the growth trend graph is a growth parameter graph, wherein the growth parameter graph takes examination time as a first coordinate and takes a growth parameter as a second coordinate; or,
the growth trend graph is a list of the growth parameters corresponding to the different examination times.
3. The method of claim 1, wherein determining the follicular region corresponding to the target follicle on the ultrasound images at the at least three different examination times respectively comprises:
determining a follicle region corresponding to the target follicle in a first ultrasonic image of the ultrasonic images at the at least three different examination times to obtain a first follicle region;
according to the first follicular area, determining a follicular area corresponding to the target follicle in a second ultrasound image of the ultrasound images at the at least three different examination times, and obtaining a second follicular area;
and determining a follicle region corresponding to the target follicle in a third ultrasound image of the at least three ultrasound images at different examination times according to the first follicle region or the second follicle region, and obtaining a third follicle region.
4. The method as claimed in claim 3, wherein determining a follicular region corresponding to the target follicle in a first ultrasound image of the ultrasound images of the at least three different examination times, and obtaining a first follicular region comprises:
segmenting a follicle region corresponding to the target follicle in a first ultrasound image of the ultrasound images of the at least three different examination times based on the image characteristics of the follicle to obtain a first follicle region; or,
detecting an operator's tracing of a corresponding region of a target follicle in a first ultrasound image of the ultrasound images of the at least three different examination times to obtain a first follicle region.
5. The method as claimed in claim 4, wherein the ultrasound image is a three-dimensional ultrasound image, the segmenting a follicle region corresponding to the target follicle in a first ultrasound image of the ultrasound images of the at least three different examination times based on the image characteristics of the follicle to obtain a first follicle region comprises:
segmenting a corresponding region of the target follicle in a plurality of two-dimensional slice images of a first ultrasound image of the three-dimensional ultrasound images of the at least three different examination times based on image features of the follicle;
and integrating corresponding areas of the target follicle on the plurality of two-dimensional sectional images to obtain the first follicle area.
6. The method of claim 5, wherein the plurality of two-dimensional sectional images of the first ultrasound image are all two-dimensional sectional images of the first ultrasound image, or,
the plurality of two-dimensional sectional images of the first ultrasound image are sampling images obtained by sampling the first ultrasound image according to a first preset rule, and the integrating of the corresponding regions of the target follicle on the plurality of two-dimensional sectional images comprises: performing three-dimensional interpolation on the segmentation results of the sampling images to obtain the first follicle region.
7. The method as claimed in claim 4, wherein the ultrasound image is a three-dimensional ultrasound image, the segmenting a follicle region corresponding to the target follicle in a first ultrasound image of the ultrasound images of the at least three different examination times based on the image characteristics of the follicle to obtain a first follicle region comprises:
based on the image characteristics of the follicles, performing three-dimensional segmentation on a follicle region corresponding to the target follicle in a first ultrasound image of the three-dimensional ultrasound images of the at least three different examination times to obtain the first follicle region.
8. The method as claimed in claim 3, wherein determining a follicular region corresponding to the target follicle in a first ultrasound image of the ultrasound images of the at least three different examination times, and obtaining a first follicular region comprises:
determining a region of the ovarian tissue in a first ultrasound image of the ultrasound images of the at least three different examination times, obtaining a first ovarian region;
and determining a follicle region corresponding to the target follicle based on the first ovarian region, and obtaining a first follicle region.
9. The method as claimed in claim 3, wherein determining a follicular region corresponding to the target follicle in a first ultrasound image of the ultrasound images of the at least three different examination times, and obtaining a first follicular region comprises:
determining a plurality of first candidate follicular regions in a first ultrasound image of the ultrasound images of the at least three different examination times;
acquiring a growth parameter of each first candidate follicle region, and determining the first candidate follicle region of which the growth parameter meets a first preset condition as the first follicle region.
10. The method as claimed in claim 3, wherein determining a follicular region corresponding to the target follicle in a first ultrasound image of the ultrasound images of the at least three different examination times, and obtaining a first follicular region comprises:
determining a plurality of first candidate follicular regions in a first ultrasound image of the ultrasound images of the at least three different examination times;
acquiring a growth parameter of each first candidate follicle region, and determining the first candidate follicle region with the maximum growth parameter as the first follicle region.
11. The method of claim 3, wherein determining a follicular region corresponding to the target follicle in a second ultrasound image of the ultrasound images of the at least three different examination times based on the first follicular region, and obtaining a second follicular region comprises:
acquiring a follicle region, of which the similarity to the characteristic information of the first follicle region meets a first threshold value, in a second ultrasound image of the ultrasound images at the at least three different examination times, and determining the acquired follicle region as a second follicle region; or,
acquiring a second candidate follicle region corresponding to a target follicle in the second ultrasound image, determining a correspondence between the first follicle region and the second candidate follicle region by using a first learning model, adjusting the second candidate follicle region according to the correspondence, and determining the adjusted second candidate follicle region as the second follicle region.
12. The method of claim 3, wherein determining a follicular region corresponding to the target follicle in a second ultrasound image of the ultrasound images of the at least three different examination times based on the first follicular region, and obtaining a second follicular region comprises:
detecting an operation of identifying a follicle region corresponding to the first follicle region in a second ultrasound image of the ultrasound images of the at least three different examination times by the operator, and determining the identified follicle region as a second follicle region.
13. The method of claim 11 or 12, further comprising:
determining a first ovarian region of ovarian tissue of the subject in the first ultrasound image;
determining a second ovarian region of ovarian tissue of the subject in the second ultrasound image;
registering the first ovarian region with the second ovarian region.
14. The method of claim 13, wherein registering the first ovarian region with the second ovarian region comprises:
adjusting the position of the second ovarian region in the second ultrasound image according to the position of the first ovarian region in the first ultrasound image, so that the position of the adjusted second ovarian region in the second ultrasound image is the same as the position of the first ovarian region in the first ultrasound image; or,
adjusting the size of the second ovarian region relative to the first ovarian region based on the size of the first ovarian region such that the adjusted second ovarian region is the same size as the first ovarian region.
15. The method of claim 13, wherein registering the first ovarian region with a second ovarian region comprises:
determining the correspondence between the second ovarian region and the first ovarian region by using a second learning model, and adjusting the second ovarian region according to the correspondence; or,
receiving an instruction of an operator to adjust the second ovarian region according to the first ovarian region, and adjusting the second ovarian region according to the instruction.
16. The method of claim 11 or 12, further comprising:
determining a first ovarian region of ovarian tissue of the subject in the first ultrasound image;
and matching the second ultrasonic image with the template of the first ovarian region, and determining a second ovarian region of the ovarian tissue in the second ultrasonic image according to the matching result.
17. The method of claim 11 or 12, further comprising:
receiving an instruction of an operator for registering the second ultrasonic image and the first ultrasonic image, and rotating, translating or scaling the second ultrasonic image according to the instruction so as to register the second ultrasonic image and the first ultrasonic image.
18. The method of any one of claims 1 to 17, wherein the growth parameters include at least one of: volume, diameter, and growth rate.
19. A method of follicle tracking, comprising:
acquiring a first ultrasound image of ovarian tissue of a subject, wherein the ovarian tissue comprises a target follicle;
determining a follicle region corresponding to the target follicle in the first ultrasonic image to obtain a first follicle region;
acquiring a second ultrasonic image of the ovarian tissue of the tested object;
a second follicular region is obtained by determining a follicular region in the second ultrasound image corresponding to the target follicle based on the first follicular region.
20. The method of claim 19, wherein determining a follicular region corresponding to the target follicle in the first ultrasound image, and obtaining a first follicular region comprises:
segmenting a follicle region corresponding to the target follicle in the first ultrasound image based on the image characteristics of the follicle to obtain a first follicle region; or,
detecting an operator tracing operation on a corresponding region of a target follicle in the first ultrasound image to obtain a first follicle region.
21. The method of claim 20, wherein the ultrasound image is a three-dimensional ultrasound image, and the segmenting a follicle region corresponding to the target follicle in the first ultrasound image based on the image feature of the follicle comprises:
segmenting a corresponding region of the target follicle in a plurality of two-dimensional sectional images of the first ultrasound image based on image features of the follicle;
and integrating corresponding areas of the target follicle on the plurality of two-dimensional sectional images to obtain the first follicle area.
22. The method of claim 21, wherein the plurality of two-dimensional sectional images of the first ultrasound image are all two-dimensional sectional images of the first ultrasound image, or,
the plurality of two-dimensional sectional images of the first ultrasound image are sampling images obtained by sampling the first ultrasound image according to a first preset rule, and the integrating of the corresponding regions of the target follicle on the plurality of two-dimensional sectional images comprises: performing three-dimensional interpolation on the segmentation results of the sampling images to obtain the first follicle region.
23. The method of claim 19, wherein the ultrasound image is a three-dimensional ultrasound image, and the segmenting a follicle region corresponding to the target follicle in the first ultrasound image based on the image feature of the follicle comprises:
based on the image characteristics of the follicle, a follicle region corresponding to the target follicle is segmented in three dimensions in the first ultrasound image to obtain the first follicle region.
24. The method of claim 19, wherein determining a follicular region corresponding to the target follicle in the first ultrasound image, and obtaining a first follicular region comprises:
determining a region of the ovarian tissue in the first ultrasound image, obtaining a first ovarian region;
and determining a follicle region corresponding to the target follicle based on the first ovarian region, and obtaining a first follicle region.
25. The method of claim 19, wherein determining a follicular region corresponding to the target follicle in the first ultrasound image, and obtaining a first follicular region comprises:
determining a plurality of first candidate follicular regions in the first ultrasound image;
and acquiring a growth parameter of each first candidate follicle region, and determining the first candidate follicle region of which the growth parameter meets a first preset condition as the first follicle region.
26. The method of claim 19, wherein determining a follicular region corresponding to the target follicle in the first ultrasound image, and obtaining a first follicular region comprises:
determining a plurality of first candidate follicular regions in the first ultrasound image;
and acquiring a growth parameter of each first candidate follicle region, and determining the first candidate follicle region with the maximum growth parameter as the first follicle region.
27. The method of claim 19, wherein determining a follicular region corresponding to the target follicle in the second ultrasound image based on the first follicular region to obtain a second follicular region comprises:
acquiring a follicle region of which the similarity with the characteristic information of the first follicle region meets a first threshold value in the second ultrasound image, and determining the acquired follicle region as a second follicle region; or,
acquiring a second candidate follicle region corresponding to a target follicle in the second ultrasound image, determining a correspondence between the first follicle region and the second candidate follicle region by using a first learning model, adjusting the second candidate follicle region according to the correspondence, and determining the adjusted second candidate follicle region as the second follicle region.
28. The method of claim 19, wherein determining a follicular region corresponding to the target follicle in the second ultrasound image based on the first follicular region to obtain a second follicular region comprises:
detecting an operation of an operator to identify a follicle region corresponding to the first follicle region in the second ultrasound image, and determining the identified follicle region as a second follicle region.
29. The method of claim 27 or 28, further comprising:
determining a first ovarian region of ovarian tissue of the subject in the first ultrasound image;
determining a second ovarian region of ovarian tissue of the subject in the second ultrasound image;
registering the first ovarian region with the second ovarian region.
30. The method of claim 29, wherein registering the first ovarian region with the second ovarian region comprises:
adjusting the position of the second ovarian region in the second ultrasound image according to the position of the first ovarian region in the first ultrasound image, so that the position of the adjusted second ovarian region in the second ultrasound image is the same as the position of the first ovarian region in the first ultrasound image; or,
adjusting the size of the second ovarian region relative to the first ovarian region based on the size of the first ovarian region such that the adjusted second ovarian region is the same size as the first ovarian region.
31. The method of claim 29, wherein registering the first ovarian region with the second ovarian region comprises:
determining the correspondence between the second ovarian region and the first ovarian region by using a second learning model, and adjusting the second ovarian region according to the correspondence; or,
receiving an instruction of an operator to adjust the second ovarian region according to the first ovarian region, and adjusting the second ovarian region according to the instruction.
32. The method of claim 27 or 28, further comprising:
determining a first ovarian region of the ovarian tissue in the first ultrasound image;
and matching the second ultrasonic image with the template of the first ovarian region, and determining a second ovarian region of the ovarian tissue in the second ultrasonic image according to the matching result.
33. The method of claim 27 or 28, further comprising:
receiving an instruction of an operator for registering the second ultrasonic image and the first ultrasonic image, and rotating, translating or scaling the second ultrasonic image according to the instruction so as to register the second ultrasonic image and the first ultrasonic image.
34. The method according to any one of claims 19-33, wherein the growth parameters comprise at least one of: volume, diameter, and growth rate.
35. A method of follicle tracking, comprising:
acquiring ultrasound images of at least two different examination times of ovarian tissue of a subject, wherein the ovarian tissue comprises a target follicle;
determining a follicle region corresponding to the target follicle on the at least two ultrasound images at different examination times respectively to obtain at least two follicle regions, wherein the at least two follicle regions are follicle regions corresponding to the same target follicle;
determining growth parameters of the target follicle according to the at least two follicle areas respectively, and obtaining at least two growth parameters of the target follicle;
obtaining a growth trend map of the target follicle according to at least two growth parameters of the target follicle;
and displaying the growth trend graph.
36. A follicle tracking system, comprising:
an ultrasonic probe;
a transmitting circuit, configured to excite the ultrasonic probe to transmit ultrasonic waves to ovarian tissue of a subject;
a receiving circuit, configured to receive, through the ultrasonic probe, echoes of the ultrasonic waves to obtain echo signals of the ultrasonic waves;
a processor for performing the follicle tracking method of any one of claims 1-18;
and the display is used for displaying the growth trend graph.
37. A follicle tracking system, comprising:
an ultrasound probe;
a transmitting circuit for exciting the ultrasound probe to transmit ultrasound waves to ovarian tissue of a subject;
a receiving circuit for exciting the ultrasound probe to receive echoes of the ultrasound waves to obtain echo signals of the ultrasound waves;
a processor for performing the follicle tracking method of any one of claims 19-34.
38. A follicle tracking system, comprising:
an ultrasound probe;
a transmitting circuit for exciting the ultrasound probe to transmit ultrasound waves to ovarian tissue of a subject;
a receiving circuit for exciting the ultrasound probe to receive echoes of the ultrasound waves to obtain echo signals of the ultrasound waves;
a processor for performing the follicle tracking method of claim 35.
CN202080047331.4A 2020-12-28 2020-12-28 Method and system for tracking oocytes Pending CN114041166A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/140317 WO2022140960A1 (en) 2020-12-28 2020-12-28 Follicle tracking method and system

Publications (1)

Publication Number Publication Date
CN114041166A true CN114041166A (en) 2022-02-11

Family

ID=80140840

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080047331.4A Pending CN114041166A (en) 2020-12-28 2020-12-28 Method and system for tracking oocytes

Country Status (2)

Country Link
CN (1) CN114041166A (en)
WO (1) WO2022140960A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116912255B (en) * 2023-09-14 2023-12-19 济南宝林信息技术有限公司 Follicular region segmentation method for ovarian tissue analysis

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
KR101630763B1 (en) * 2014-08-29 2016-06-15 삼성메디슨 주식회사 Ultrasound image display appratus and method for displaying ultrasound image
CN105496453B (en) * 2015-11-13 2018-05-25 武汉科技大学 A kind of ox ovarian follicle ultrasonic monitoring system and its monitoring method
EP3363368A1 (en) * 2017-02-20 2018-08-22 Koninklijke Philips N.V. Ovarian follicle count and size determination
CN110021025B (en) * 2019-03-29 2021-07-06 上海联影智能医疗科技有限公司 Region-of-interest matching and displaying method, device, equipment and storage medium
CN110197713B (en) * 2019-05-10 2021-12-14 上海依智医疗技术有限公司 Medical image processing method, device, equipment and medium
CN110246135B (en) * 2019-07-22 2022-01-18 新名医(北京)科技有限公司 Follicle monitoring method, device, system and storage medium

Also Published As

Publication number Publication date
WO2022140960A1 (en) 2022-07-07

Similar Documents

Publication Publication Date Title
Sobhaninia et al. Fetal ultrasound image segmentation for measuring biometric parameters using multi-task deep learning
US10810735B2 (en) Method and apparatus for analyzing medical image
JP6467041B2 (en) Ultrasonic diagnostic apparatus and image processing method
CN105555198B (en) Method and device for automatically identifying measurement items and ultrasonic imaging equipment
CN110325119B (en) Ovarian follicle count and size determination
CN102171724B (en) Selection of snapshots of a medical image sequence
US10039501B2 (en) Computer-aided diagnosis (CAD) apparatus and method using consecutive medical images
EP3655917B1 (en) Fetal ultrasound image processing
US20210393240A1 (en) Ultrasonic imaging method and device
CN111374708B (en) Fetal heart rate detection method, ultrasonic imaging device and storage medium
CN111281430B (en) Ultrasonic imaging method, device and readable storage medium
US20210077062A1 (en) Device and method for obtaining anatomical measurements from an ultrasound image
CN116058864A (en) Classification display method of ultrasonic data and ultrasonic imaging system
CN114041166A (en) Method and system for tracking oocytes
RU2746152C2 (en) Detection of a biological object
Nanthagopal et al. A region-based segmentation of tumour from brain CT images using nonlinear support vector machine classifier
CN114521914A (en) Ultrasonic parameter measuring method and ultrasonic parameter measuring system
CN112801940A (en) Model evaluation method, device, equipment and medium
CN109636843B (en) Amniotic fluid index measurement method, ultrasonic imaging equipment and storage medium
CN115813433A (en) Follicle measuring method based on two-dimensional ultrasonic imaging and ultrasonic imaging system
US20210338194A1 (en) Analysis method for breast image and electronic apparatus using the same
WO2014106747A1 (en) Methods and apparatus for image processing
CN113792740A (en) Arteriovenous segmentation method, system, equipment and medium for fundus color photography
CN113693625A (en) Ultrasonic imaging method and ultrasonic imaging apparatus
KR101556601B1 (en) Apparatus and method for building big data database of 3d volume images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination