CN112233122A - Method and device for extracting and measuring object in ultrasonic image - Google Patents


Info

Publication number: CN112233122A
Application number: CN201910574501.3A
Authority: CN (China)
Prior art keywords: image, target object, extracting, segmentation, target
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 张凤姝, 凌锋
Current assignee: Edan Instruments Inc (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Edan Instruments Inc
Application filed by Edan Instruments Inc
Priority to CN201910574501.3A
Publication of CN112233122A

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/10 — Segmentation; Edge detection
    • G06T7/11 — Region-based segmentation
    • G06T7/60 — Analysis of geometric attributes
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10 — Image acquisition modality
    • G06T2207/10132 — Ultrasound image
    • G06T2207/30 — Subject of image; Context of image processing
    • G06T2207/30004 — Biomedical image processing
    • G06T2207/30008 — Bone
    • G06T2207/30044 — Fetus; Embryo

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Ultra Sonic Diagnosis Equipment (AREA)

Abstract

The invention relates to the field of ultrasound image processing, and in particular provides a method and device for extracting and measuring an object in an ultrasound image. The extraction method comprises the following steps: acquiring an ultrasound image containing a target object; performing image segmentation on the ultrasound image to obtain a segmented image containing a plurality of feature regions; connecting, according to the direction information of each feature region, those feature regions on the segmented image that satisfy a preset direction condition, to obtain a target image containing the complete target object; and extracting the target object from the target image. With this method, a complete and clear target object can be extracted automatically, making subsequent measurement of the target object more accurate.

Description

Method and device for extracting and measuring object in ultrasonic image
Technical Field
The invention relates to the field of ultrasound image processing, and in particular to a method and device for extracting and measuring an object in an ultrasound image.
Background
Ultrasound imaging is an important means of medical diagnosis, especially in hospital emergency department examination, and has been widely used in the medical field with the advantages of being realistic, inexpensive, and harmless. However, factors such as signal attenuation, uneven gray-level distribution, artifacts, and speckle noise all lower the signal-to-noise ratio of an ultrasound image, so that quantitative analysis of ultrasound images presents many difficulties, for example in measuring the length of a fetal humerus or femur.
The fetal skeletal system is a routine item of prenatal ultrasound examination. Measuring the length of the fetal humerus or femur has important clinical value for screening congenital dysplasia, short limbs, disproportionate skeletal development caused by certain chromosome abnormalities, and other malformations, and is also an indispensable parameter for estimating fetal weight, gestational age, and the like. Currently, fetal humerus or femur measurement in clinical diagnosis is performed mainly by a sonographer manually operating a trackball. However, ultrasound images suffer from many problems such as speckle noise, artifact interference, and blurred bone edges, so the random errors of manual measurement and the visual errors of the clinician all affect the accuracy of the results, and the repetitive operation adds extra time cost.
Therefore, realizing automatic measurement of a target object is important in the analysis of ultrasound images. However, when target segmentation is performed on an ultrasound image with a conventional image-recognition method, the gray-level distribution of the target object in the ultrasound image is not completely uniform, so the same target object is split into two or more regions after segmentation. This brings difficulty to subsequent screening and calculation, making the process of target extraction in an ultrasound image complicated and hard to implement.
Disclosure of Invention
To solve the technical problem that target extraction on an ultrasound image by conventional image-recognition methods is complicated and difficult to implement, the present invention provides a method for extracting an object in an ultrasound image that can accurately extract a target object from the ultrasound image.
Meanwhile, to solve the prior-art problem that manually measuring a target object on an ultrasound image has low accuracy, the invention provides a method for measuring an object in an ultrasound image that measures automatically and yields a more accurate result.
In a first aspect, the present invention provides a method for extracting an object in an ultrasound image, including:
acquiring an ultrasonic image containing a target object;
performing image segmentation on the ultrasonic image to obtain a segmented image comprising a plurality of characteristic regions, wherein at least one characteristic region in the plurality of characteristic regions corresponds to the target object;
connecting the characteristic regions meeting preset direction conditions on the segmentation image according to the direction information of the characteristic regions to obtain a target image containing the complete target object;
and extracting the target object in the target image.
In some embodiments, the image segmenting the ultrasound image to obtain a segmented image including a plurality of feature regions includes:
and carrying out binarization processing on the ultrasonic image to obtain a binary image containing a plurality of characteristic areas.
In some embodiments, the connecting the feature regions on the segmented image that satisfy a preset direction condition according to the direction information of the feature regions includes:
acquiring an endpoint of a feature region;
determining whether other endpoints are included in a preset area pointed to by the endpoint;
when other endpoints are included in the preset area pointed to by the endpoint, extracting from them another endpoint that is not in the same feature region as the endpoint, wherein the difference between the direction value of the other endpoint in the direction field of the ultrasound image and the direction value of the endpoint in that direction field is within a preset range; and
connecting the one endpoint and the other endpoint.
In some embodiments, said connecting said one endpoint and said another endpoint comprises:
and filling pixels between the one end point and the other end point to obtain a connection area.
In some embodiments, the obtaining the end point of the feature region comprises:
thinning a plurality of characteristic regions on the binary image to obtain end points of the thinned plurality of characteristic regions;
further comprising, after said connecting said one endpoint and said another endpoint:
and performing expansion processing on the connecting area, and superposing the expanded connecting area on the binary image.
In some embodiments, the image segmenting the ultrasound image comprises:
and filtering the ultrasonic image, and performing image segmentation on the filtered ultrasonic image.
In some embodiments, the image segmentation of the ultrasound image and the connection of the feature regions on the segmented image according to the direction information of the feature regions further comprise:
and screening the segmentation image based on the characteristics of the target object.
In some embodiments, the target object comprises a fetal humerus and/or a femur.
In a second aspect, the present invention provides a method for measuring an object in an ultrasound image, including:
acquiring a target object on an ultrasound image, wherein the target object is obtained by the above method for extracting an object in an ultrasound image; and
measuring a parameter of the target object.
In some embodiments, the measuring the parameter of the target object comprises:
performing straight line fitting on the target object;
and calculating the length of the line segment between two intersection points of the straight line and the target object.
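The fitting-and-length steps above can be sketched as follows, assuming the extracted target is given as a list of (x, y) pixel coordinates. Fitting the principal axis through the covariance matrix and taking the span of the point projections as the length is one plausible realization of the claim; the function name and this particular fitting scheme are illustrative assumptions, not the patent's specification:

```python
import math

def measure_length(points):
    """Fit a straight line through the target pixels and measure the span
    of their projections onto it (a sketch; assumes at least two distinct
    points)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    # 2x2 covariance entries of the point cloud.
    sxx = sum((x - mx) ** 2 for x, _ in points)
    syy = sum((y - my) ** 2 for _, y in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    # Angle of the leading eigenvector of [[sxx, sxy], [sxy, syy]].
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    dx, dy = math.cos(theta), math.sin(theta)
    # Project every point onto the fitted direction; the span is the length.
    proj = [(x - mx) * dx + (y - my) * dy for x, y in points]
    return max(proj) - min(proj)
```

For a slender, nearly straight structure such as a long bone, the projection span coincides with the distance between the two intersection points of the fitted line and the target's extremities.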
In a third aspect, the present invention provides an apparatus for extracting an object in an ultrasound image, including:
an image acquisition module for acquiring an ultrasound image including a target object;
the image segmentation module is used for carrying out image segmentation on the ultrasonic image to obtain a segmented image comprising a plurality of characteristic areas, wherein at least one characteristic area in the characteristic areas corresponds to the target object;
the connecting module is used for connecting the characteristic regions on the segmented image according to the direction information of the characteristic regions to obtain a target image containing the complete target object; and
and the extraction module is used for extracting the target object in the target image.
In a fourth aspect, the present invention provides an apparatus for measuring an object in an ultrasound image, including:
the acquisition module is used for acquiring a target object on an ultrasonic image, wherein the target object is obtained by adopting the ultrasonic image object extraction method; and
a measurement module for measuring a parameter of the target object.
In a fifth aspect, the present invention provides a medical device comprising:
a processor; and
a memory communicatively coupled to the processor and storing computer readable instructions executable by the processor, the processor executing the above-described method for extracting an object in an ultrasound image when the computer readable instructions are executed.
The present invention provides a method for extracting an object in an ultrasound image. An ultrasound image of a target object is acquired and segmented to obtain a segmented image comprising a plurality of feature regions, where the feature regions include fractured regions corresponding to the target object as well as other interference regions, and the fractured regions cause difficulty in subsequent screening and calculation. By connecting, according to the direction information of each feature region, the fractured feature regions that satisfy a preset direction condition, a target image containing the complete target object is obtained, from which the complete target object can be extracted.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a schematic illustration of an ultrasound image in accordance with some embodiments of the present invention;
FIG. 2 is a schematic diagram of an ultrasound image object extraction method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of ultrasonic fetal humerus/femur image binarization processing;
FIG. 4 is a schematic diagram of connecting several feature areas in accordance with some embodiments of the invention;
FIG. 5 is a schematic diagram of a method of measuring an ultrasound image object according to some embodiments of the invention;
FIG. 6 is a schematic diagram of a target object measurement according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an ultrasound image object extraction apparatus according to some embodiments of the present invention;
FIG. 8 is a schematic diagram of an ultrasound image object measurement device in accordance with some embodiments of the present invention;
FIG. 9 is a block diagram of a computer system suitable for implementing a method or processor in accordance with embodiments of the invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings. It should be understood that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments obtained by a person skilled in the art without creative effort based on the embodiments of the present invention fall within the scope of the present invention. In addition, the technical features involved in the different embodiments described below may be combined with one another as long as they do not conflict.
The method for extracting an object in an ultrasound image of the present invention can be used to extract a target object from ultrasound imaging in medical diagnosis. It should be noted that, because an ultrasound image suffers from many problems such as uneven gray-level distribution, artifacts, and speckle noise, its signal-to-noise ratio is low; an operator therefore finds it difficult to measure the target object manually, which leads to inaccurate measurement. Likewise, because of these problems, a large number of non-target regions are produced when the ultrasound image is segmented by existing image-recognition methods, which makes target extraction difficult.
More importantly, segmenting the ultrasound image with conventional image recognition can fracture the target object on the segmented image. The fracture arises because the gray-level distribution of the target object in the ultrasound image is not completely uniform: lower gray levels may occur at different positions of the same target object, and image recognition easily classifies such positions as background, so that the same target object is split into two or even more sub-regions after segmentation. In subsequent calculation these fractured sub-regions theoretically all belong to the target object, which increases the complexity of the calculation and harms its accuracy; and because the integrity of the target object is damaged, it cannot be selected well in subsequent screening. For these reasons, target extraction from ultrasound images by conventional image-recognition methods is difficult to realize. The method for extracting an object in an ultrasound image of the present invention is based on connecting the fractured sub-regions produced after segmentation, making target extraction simpler and more accurate and facilitating subsequent measurement and calculation.
A method for extracting an object from an ultrasound image according to some embodiments of the present invention is shown in fig. 1. As shown in fig. 1, in some embodiments, the method for extracting an object in an ultrasound image includes:
and S1, acquiring an ultrasonic image containing the target object.
S2, the ultrasound image is segmented to obtain a segmented image including a plurality of feature regions.
S3, connecting, according to the direction information of each feature region, the feature regions on the segmented image that satisfy a preset direction condition, to obtain a target image containing the complete target object.
And S4, extracting the target object from the target image.
Specifically, in step S1, an ultrasound image containing the complete target object is acquired, so that the object can ultimately be extracted in its entirety. The target object may be any object with bright (highlighted) features, such as a bone region, but the invention is not limited thereto.
In step S2, image segmentation is performed on the ultrasound image to obtain a segmented image containing a plurality of feature regions. A feature region may be a region corresponding to the target object, a region corresponding to a non-target object, or one of several sub-regions formed when such regions fracture during segmentation. For example, in an ultrasound bone image, because the gray levels are uneven, the same bone region may fracture into two or more feature regions after segmentation, and a non-bone highlighted region may likewise fracture into several feature regions.
In step S3, based on the segmented image obtained above, the feature regions on it are connected according to their direction information. The direction information of a feature region refers to its directionality in the direction field of the ultrasound image; for example, the main direction of each feature region may be extracted, or the direction at a point (for example, an endpoint) of each feature region may be extracted, and feature regions satisfying a preset direction condition are connected. The preset direction condition may be, for example, that the direction difference between two feature regions lies within a threshold range, in which case the two are connected; this finally yields a set of feature regions connected together, which may be one, two, or more, and regions can be connected as long as their direction information satisfies the preset direction condition.
For example, taking the above ultrasound bone image, the connection process determines, from the direction information of each feature region, whether the main directions of the regions agree within the threshold range. Feature regions within that range are considered to be a single bone fractured into pieces by low brightness inside or around the bone, so the fractured feature regions satisfying the threshold range must be connected, yielding a target image containing the complete bone feature region.
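The preset direction condition can be made concrete with a small helper that compares two orientations modulo 180 degrees (a region's main direction is an undirected line), using the 0-30 degree tolerance mentioned later in the text as a default. The names and the exact wrapping rule are illustrative assumptions:

```python
def direction_close(theta1_deg, theta2_deg, tol_deg=30.0):
    """Preset direction condition sketch: two orientations match when
    their wrapped difference (modulo 180 degrees) is within a tolerance."""
    d = abs(theta1_deg - theta2_deg) % 180.0
    return min(d, 180.0 - d) <= tol_deg
```

With this helper, two fractured fragments of the same slender bone (e.g. orientations of 10 and 170 degrees, which differ by only 20 degrees as undirected lines) satisfy the condition, while perpendicular regions do not.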
In step S4, the target object is extracted from the connected target image; taking the ultrasound bone region as an example, the complete bone feature region is extracted from the target image to obtain the target object. When this method extracts a target from an ultrasound image, the feature regions of the fractured target object on the segmented image are connected, so the complete target object can be extracted; the target object is thus identified more completely and clearly in the ultrasound image, and each parameter can subsequently be measured more accurately.
Fig. 2 shows a method for extracting an object in an ultrasound image according to some embodiments of the present invention. In an exemplary embodiment, for convenience of description, the ultrasound image is an ultrasound fetal humerus/femur image. It should be noted that the method provided by the invention is not limited to extracting a fetal humerus/femur; any other suitable target object may use the invention, such as an ultrasound image of a bone at another position.
As shown in fig. 2, in some embodiments, the methods of the present invention comprise:
s10, an ultrasound image including the fetal humerus/femur is acquired.
In medical diagnosis of the fetal humerus/femur, the length parameter of the humerus/femur is generally required, so the acquisition is usually based on a relatively complete bone image, such that the image contains the complete humerus/femur region.
And S11, filtering the ultrasonic image.
Since ultrasound images are noisy, in some embodiments filtering is performed before segmenting the ultrasound image. In an exemplary embodiment, the filtering may employ anisotropic filtering. The main idea of the anisotropic filter overcomes a defect of the Gaussian blur filter: the relative gray-level contrast of the bone edge region is not destroyed, so image edges are preserved while the image is smoothed. This makes the method well suited to filtering ultrasound images.
Anisotropic diffusion treats the image as a thermal field and each pixel as a heat flow, and the degree of diffusion to the surroundings is determined by the relationship between neighborhood pixels and the current pixel. When the difference between a neighboring pixel and the current pixel is large, the current pixel probably lies on a boundary, so the heat flow stops diffusing from the current pixel in that direction and the boundary is preserved; when the difference is small, diffusion continues.
The main iterative equation of anisotropic diffusion is as follows:

I_{t+1}(x, y) = I_t(x, y) + λ · [ c_N · ∇_N I + c_S · ∇_S I + c_E · ∇_E I + c_W · ∇_W I ]

where x and y represent the horizontal and vertical coordinates of the image, and λ may take a value in [0, 1/4], where 1/4 corresponds to averaging the degree of diffusion over the four directions. I denotes the ultrasound image; since the formula is iterative, t is the current iteration number. The four divergence terms are the partial derivatives of the current pixel in the four directions:

∇_N I(x, y) = I(x, y − 1) − I(x, y)
∇_S I(x, y) = I(x, y + 1) − I(x, y)
∇_E I(x, y) = I(x + 1, y) − I(x, y)
∇_W I(x, y) = I(x − 1, y) − I(x, y)

c_N, c_S, c_E, and c_W represent the diffusion coefficients in the four directions, which are relatively small at boundaries. The diffusion coefficient c is formulated as follows:

c = exp( −(‖∇I‖ / k)² )

where k is a constant controlling the sensitivity to edges.
Through anisotropic filtering, the ultrasound image becomes smoother and image noise is removed to a certain degree. The diffusion-coefficient method used in this embodiment only explains this step and does not limit the invention; a person skilled in the art may also adopt other anisotropic treatments according to the specific situation.
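The iterative equation above can be sketched in pure Python as follows, under the assumption of replicated borders and the exponential diffusion coefficient. The parameter names (`lam`, `k`) and defaults are illustrative, not values fixed by the patent:

```python
import math

def anisotropic_diffusion(img, iterations=10, lam=0.25, k=15.0):
    """Perona-Malik-style anisotropic diffusion sketch.

    img: 2-D list of gray values; lam in [0, 1/4]; k sets edge sensitivity.
    """
    h, w = len(img), len(img[0])
    cur = [row[:] for row in img]
    for _ in range(iterations):
        nxt = [row[:] for row in cur]
        for y in range(h):
            for x in range(w):
                # Differences toward the four neighbours (borders replicated).
                dN = cur[max(y - 1, 0)][x] - cur[y][x]
                dS = cur[min(y + 1, h - 1)][x] - cur[y][x]
                dE = cur[y][min(x + 1, w - 1)] - cur[y][x]
                dW = cur[y][max(x - 1, 0)] - cur[y][x]
                # Exponential diffusion coefficients: near zero across edges,
                # so strong boundaries stop the heat flow and are preserved.
                cN = math.exp(-(dN / k) ** 2)
                cS = math.exp(-(dS / k) ** 2)
                cE = math.exp(-(dE / k) ** 2)
                cW = math.exp(-(dW / k) ** 2)
                nxt[y][x] = cur[y][x] + lam * (cN * dN + cS * dS
                                               + cE * dE + cW * dW)
        cur = nxt
    return cur
```

Running this on a small patch with an isolated noise pixel next to a strong step edge smooths the noise while leaving the edge essentially untouched, which is exactly the behavior described above.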
And S20, carrying out binarization processing on the ultrasonic image to obtain a binary image containing a plurality of characteristic areas.
The ultrasound image after the filtering process of step S11 is subjected to image segmentation, and in some embodiments, the image segmentation is performed by binarization. The binarization processing can convert the detection of the target object originally into simple shape detection, feature detection, and the like, thereby simplifying the calculation.
In an exemplary embodiment, the image is segmented with the mean-shift method, which aims to compute a class label for each pixel. The class label depends on the cluster the pixel belongs to: each cluster has a class center, and the center gathers the points around it so that they form one class, i.e. points sharing a class center belong to the same class. During iteration, mean-shift takes an extreme point of the density function as the center point, called the mode point; in the iterative process a mode point converges toward the true center of the class it belongs to.
The kernel density function is:

f(x) = (1 / (n·h^d)) · Σ_{i=1}^{n} K( (x − x_i) / h )

where n is the number of samples, h the bandwidth, and d the dimension. K(x) can take a variety of functional forms, such as an exponential (Gaussian-type) kernel:

K(x) = c_k · exp( −‖x‖² / 2 )
it should be noted that the mean-shift method provided in this embodiment is only a preferred embodiment, and due to differences in physical characteristics and parameter settings of machines and probes of different manufacturers, the segmentation method may be selected according to specific situations, and the method for extracting a binary image may be flexibly selected according to specific characteristics and specific situations of a target image, for example, the image may be segmented by using methods such as an OTSU, a maximum entropy threshold segmentation method, a cluster segmentation method, and a level set, and a binary image including a humerus/femur of a fetus may be obtained through this step.
And S21, screening the segmentation images based on the target object characteristics to obtain the screened segmentation images.
In the above embodiment, because of speckle noise on the ultrasound image, the binary image obtained after segmentation tends to contain many interference regions, and because of the uneven gray-level distribution, a feature region of the binary image tends to be fractured into two or more feature regions. Therefore the binary image is screened before the feature regions are connected, removing the influence of some interference regions and simplifying the subsequent calculation.
In an exemplary embodiment, a degree of screening is performed with the roundness characteristic parameter, expressed as:

A = L² / (4πS)

where L denotes the perimeter of the connected region and S denotes the area of the region to be screened. Generally speaking, A = 1 corresponds to a circle, while the bone region of the target object generally exhibits an elongated shape and yields larger values, so in the present embodiment regions with this parameter greater than a threshold T_A are retained; the threshold may be T_A = 0.75 in the present embodiment.
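The roundness screening can be sketched as below, assuming the A = L²/(4πS) form of the formula (1 for a circle, larger for elongated regions) and region descriptors with precomputed perimeter and area; the dictionary layout is an assumption for illustration:

```python
import math

def roundness(perimeter, area):
    """A = L^2 / (4*pi*S): equals 1 for a perfect circle and grows for
    elongated regions (the exact form is an assumed reading of the text)."""
    return perimeter ** 2 / (4.0 * math.pi * area)

def screen_regions(regions, t_a=0.75):
    """Keep regions whose roundness exceeds the threshold T_A, i.e. the
    elongated, bone-like candidates."""
    return [r for r in regions
            if roundness(r["perimeter"], r["area"]) > t_a]
```

A circular speckle blob scores close to 1 while a slender bone fragment scores much higher, so raising the threshold discards progressively rounder interference regions.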
S30, connecting the feature regions on the segmented image according to their direction information to obtain a target image containing the complete target object.
For convenience of illustration, in the example of fig. 3, fig. 3(a) is an ultrasound image containing a fetal humerus; the bright area in the middle of the image is the representation of the fetal humerus. As can be seen, there is a large amount of noise on the ultrasound image, and the gray-level distribution in the middle of the humerus area is uneven, so on the binary image obtained after binarization the fetal humerus area is fractured into two or more feature regions. An upper non-humerus feature area likewise appears as several fractured feature regions on the binary image, and a large number of interference regions are present as well. Fig. 3(b) is the binary image after the feature screening of step S21; it can be seen that a large number of interference regions representing non-humeral features have been screened out, but some non-humeral feature regions whose appearance is close to the humerus remain, and both these and the humeral feature regions appear fractured. In this step, therefore, the fractured feature regions are connected to form several complete target regions with clear boundaries.
And S40, extracting a characteristic region corresponding to the target object from the connected characteristic regions.
Screening is then performed on the connected image to extract the fetal humerus/femur feature region. In an exemplary implementation, the screening may select among the feature regions using a structural cost function; the features used in this embodiment are the long axis, mean region brightness, roundness, and the like of each feature region. The cost function of the i-th feature region is:

F(i) = c₁f₁(i) + c₂f₂(i) + … + c_d·f_d(i)   (9)

where d is the total number of features and c₁, c₂, …, c_d are the weights of the feature parameters, with

Σ_{j=1}^{d} c_j = 1.

In the present embodiment, c₁ = c₂ = … = c_d = 1/d. The feature region with the maximum cost function is then taken as the target object.
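The structural cost function can be sketched as follows, with each candidate region represented by its feature vector and equal weights c_j = 1/d as in the embodiment. The list-of-feature-vectors interface is an illustrative assumption, and the feature values are assumed to be pre-normalized to comparable ranges:

```python
def best_region(regions, weights=None):
    """Return the index of the region maximizing F(i) = sum_j c_j * f_j(i).

    regions: list of feature vectors [f_1(i), ..., f_d(i)].
    weights: the c_j; defaults to equal weights 1/d summing to 1.
    """
    d = len(regions[0])
    if weights is None:
        weights = [1.0 / d] * d  # c_1 = ... = c_d = 1/d
    scores = [sum(c * f for c, f in zip(weights, feats))
              for feats in regions]
    return max(range(len(regions)), key=lambda i: scores[i])
```

In practice the feature vector of each region would be filled with its long-axis length, mean brightness, roundness, and so on, and the region with the highest score is selected as the target object.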
In some embodiments shown in fig. 2 and 3, the ultrasound image object extraction method of the present invention is illustrated by taking an ultrasound fetal humerus/femur image as an example, in which a complete fetal humerus/femur target is obtained by connecting fractured fetal humerus/femur regions on a segmented image. A schematic diagram of a method of connecting feature regions in some embodiments of the invention is shown in fig. 4.
As shown in fig. 4, connecting each feature region on the binary image includes:
and S301, thinning each characteristic region on the binary image. Binary image refinement is also called skeletonization, and means that each feature region on a binary image is reduced to the width of a unit pixel, so that region end points can be conveniently found.
S302, acquiring the endpoint of the refined structure. Based on the above, the binary image includes the characteristic region of the fetal humerus/femur and the characteristic region of the non-fetal humerus/femur, so that the binary image is refined to form a plurality of curve segments, each curve segment has two end points, and the end points of the curve segments are obtained.
S303, judging whether other endpoints are included in the preset area pointed by any endpoint.
The purpose of this step is to confirm whether the positions of the endpoints to be connected are adjacent, and when there is no other endpoint in the preset area pointed by an endpoint, the endpoint can be regarded as the endpoint of a certain feature area, and no connection is needed. When other endpoints are included in the preset area pointed by the endpoint, the process proceeds to step S304. In an exemplary implementation, the predetermined area pointed by an end point may be an area around the end point with r as a radius, where r may be 3-8 pixels. It should be understood by those skilled in the art that the predetermined area pointed by the end point may also be other shapes and threshold ranges, and will not be described herein.
S304, extracting, from the other end points, another end point that is not in the same feature region as the end point in question, where the difference between the direction value of the other end point in the direction field of the ultrasound image and the direction value of the first end point in that direction field is within a preset range. When other end points are detected in the preset area around an end point, the one whose direction approximates that of the first end point is further identified, which confirms that the two end points need to be connected. This judgment can be based on the direction field of the image; because the fetal humerus/femur is elongated and nearly straight, the allowed difference between the direction values of the two end points can be set to a small range, such as 0-30 degrees, or even smaller, to improve the precision of the judgment.
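Step S304's selection — a different feature region, with direction values differing within a preset range — might be sketched as follows. The region labels, direction values, and the 30-degree threshold are illustrative, and directions are treated as orientations modulo 180 degrees (an assumption, since the patent does not specify the direction-field representation):

```python
import numpy as np

def pick_partner(idx, ends, regions, directions, r=5, max_diff=30.0):
    """Among end points within radius r of end point `idx`, pick one
    from a *different* feature region whose direction value differs by
    at most `max_diff` degrees (step S304).  Returns an index or None.
    `regions` labels each end point's region; `directions` holds the
    direction-field value (degrees) at each end point."""
    d = np.linalg.norm(ends - ends[idx], axis=1)
    for j in range(len(ends)):
        if j == idx or d[j] > r or regions[j] == regions[idx]:
            continue
        diff = abs(directions[j] - directions[idx]) % 180
        diff = min(diff, 180 - diff)          # orientations wrap at 180°
        if diff <= max_diff:
            return j
    return None

ends = np.array([[2.0, 4.0], [2.0, 7.0], [2.0, 1.0]])
regions = [0, 1, 0]              # end points 0 and 2 share a region
directions = [5.0, 175.0, 90.0]  # 5° vs 175° differ by 10° as orientations
partner = pick_partner(0, ends, regions, directions)
```

The same-region exclusion keeps the two end points of a single curve segment from being connected to each other.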
S305, filling pixels between one end point and the other end point to obtain a connection area. Based on the judgment of the steps, two end points needing to be connected are obtained, and pixel filling is carried out between the two end points, so that a connecting line segment, namely a connecting area, can be obtained.
S306, dilating the connection region and superimposing the dilated connection region on the binary image. The connecting line segment obtained in step S305 is dilated to obtain a dilated region, which is superimposed on the binary image, for example that of fig. 3(b), completing the connection of the fractured feature regions.
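Steps S305 and S306 — filling pixels between the two end points and dilating the resulting connection region before superimposing it on the binary image — can be sketched with plain numpy (the helper names, the 3x3 structuring element, and the toy image are illustrative):

```python
import numpy as np

def draw_line(img, p0, p1):
    """Fill pixels between two end points by sampling the straight
    line joining them (step S305, the connection region)."""
    n = int(max(abs(p1[0] - p0[0]), abs(p1[1] - p0[1]))) + 1
    rows = np.linspace(p0[0], p1[0], n).round().astype(int)
    cols = np.linspace(p0[1], p1[1], n).round().astype(int)
    img[rows, cols] = True
    return img

def dilate(img):
    """3x3 binary dilation of the connection region (step S306)."""
    padded = np.pad(img, 1)
    out = np.zeros_like(img)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out |= padded[1 + dr:padded.shape[0] - 1 + dr,
                          1 + dc:padded.shape[1] - 1 + dc]
    return out

binary = np.zeros((7, 12), dtype=bool)
binary[3, 1:4] = True           # fractured region, left part
binary[3, 8:11] = True          # fractured region, right part
link = draw_line(np.zeros_like(binary), (3, 3), (3, 8))
bridged = binary | dilate(link)  # superimpose the dilated connection
```

After superimposition the two fractured parts form one connected region, which is the stated goal of the step.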
Through the above embodiments, the ultrasound image is binarized to obtain a binary image, the fractured feature regions on the binary image are connected to obtain a complete fetal humerus/femur target, and a target object with a complete, clear region is then extracted from the ultrasound image, which facilitates subsequent measurement.
In a second aspect, the present invention also provides a method for measuring an object in an ultrasound image. As shown in fig. 5, in some embodiments the method includes:
S50, acquiring a target object on the ultrasound image, wherein the target object is obtained using the method for extracting an object in an ultrasound image according to any of the above embodiments.
S60, measuring a parameter of the target object.
This method automatically measures the target object extracted in the above embodiments, avoiding the random errors and repetitive labor of manual measurement.
In an exemplary implementation, the fetal humerus/femur length is measured using, for example, the ultrasound fetal humerus/femur image described above. As shown in fig. 6, the measurement method includes:
S500, obtaining a fetal humerus/femur region using the method for extracting an object in an ultrasound image according to the above embodiments.
S601, performing straight-line fitting on the fetal humerus/femur region. The fitting method may be least squares, Hough line fitting, or the like; the present invention does not limit the choice.
In an exemplary implementation, the fitting method is least squares, which determines the fitted line (or curve) by minimizing its residuals. Given n point pairs (x_i, y_i) to be fitted, let the fitted line equation be

y = b0 + b1*x    (11)

The residual of the line fit at each point is

e_i = y_i - (b0 + b1*x_i), i = 1, ..., n    (12)

so the sum of squared residuals is

Q = sum_{i=1}^{n} e_i^2 = sum_{i=1}^{n} [y_i - (b0 + b1*x_i)]^2    (13)

Minimizing Q, i.e., taking its extremum, is equivalent to setting its partial derivatives with respect to b0 and b1 equal to 0; solving yields the parameters b0 and b1, which, substituted into equation (11), determine the line equation. If curve fitting is needed, the same idea applies, though a curve may have more parameters: write out the curve's expression, construct Q as in equation (13), and solve for the parameter values at the extremum. This determines the equation of the line on which the segment lies, and the two intersection points of this line with the fetal humerus/femur region are the two end points of the line segment.
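The least-squares fit above can be reproduced with numpy's polynomial fitting; the sample points are synthetic, and using np.polyfit (rather than writing out the partial-derivative solution explicitly) is an equivalent substitution, since polyfit minimizes the same sum of squared residuals Q:

```python
import numpy as np

# Synthetic points lying exactly on y = 2 + 0.5*x.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 + 0.5 * x
# np.polyfit with degree 1 minimizes Q from equation (13),
# returning the coefficients highest-degree first: [b1, b0].
b1, b0 = np.polyfit(x, y, 1)
```

With noise-free points the recovered intercept b0 and slope b1 match the generating line exactly (up to floating-point error).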
S602, calculating the pixel-unit length of the line segment, i.e., the length in pixel units between its two end points.
S603, converting the pixel-unit length into a physical-unit length to obtain the fetal humerus/femur length parameter.
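Steps S602 and S603 together reduce to a Euclidean distance in pixels multiplied by the image's calibration factor; the calibration value and end-point coordinates below are illustrative, not from the patent:

```python
import math

def segment_length_mm(p0, p1, mm_per_pixel):
    """Pixel-unit length between the two end points of the fitted
    segment (S602), converted to a physical unit via the scanner's
    calibration factor (S603)."""
    px = math.hypot(p1[0] - p0[0], p1[1] - p0[1])
    return px * mm_per_pixel

# 180 pixels at an (illustrative) calibration of 0.2 mm per pixel.
length = segment_length_mm((10, 20), (10, 200), 0.2)
```

The calibration factor (mm per pixel) comes from the ultrasound system's imaging geometry and depth setting, so the same pixel length maps to different physical lengths at different depths.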
In a third aspect, the present invention provides an apparatus for extracting an object in an ultrasound image, as shown in fig. 7, the apparatus may include:
an image acquisition module 10 for acquiring an ultrasound image including a target object;
an image segmentation module 20, configured to perform image segmentation on the ultrasound image to obtain a segmented image including a plurality of feature regions, where at least one feature region of the plurality of feature regions corresponds to the target object;
a connection module 30, configured to connect the feature regions on the segmented image according to the direction information of the feature regions, to obtain a target image including the complete target object; and
an extracting module 40, configured to extract the target object in the target image.
In a fourth aspect, the present invention provides an apparatus for measuring an object in an ultrasound image, as shown in fig. 8, the apparatus may include:
an obtaining module 50, configured to obtain a target object on an ultrasound image, where the target object is obtained by using the ultrasound image object extraction method according to the above embodiment; and
a measuring module 60 for measuring a parameter of the target object.
In a fifth aspect, the present invention provides a medical device, which may be an ultrasound device, such as an ultrasound diagnostic apparatus, comprising:
a processor; and
a memory communicatively coupled to the processor and storing computer-readable instructions executable by the processor, wherein the processor performs the method for extracting and/or measuring an object in an ultrasound image of the above embodiments when the computer-readable instructions are executed.
In a sixth aspect, the present invention provides a storage medium storing computer instructions for causing a computer to execute the above method for extracting and/or measuring an object in an ultrasound image.
In particular, fig. 9 shows a schematic structural diagram of a computer system 600 suitable for implementing the method or the processor of an embodiment of the invention; the corresponding functions of the medical device and the storage medium are implemented by the system shown in fig. 9.
As shown in fig. 9, the computer system 600 includes a Central Processing Unit (CPU)601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a display such as a cathode ray tube (CRT) or liquid crystal display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, optical disk, magneto-optical disk, or semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read from it can be installed into the storage section 608 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to fig. 1 may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the method of fig. 1. In such embodiments, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be understood that the above embodiments are only examples given to clearly illustrate the present invention and are not intended to limit it. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to list all embodiments exhaustively here, and obvious variations or modifications derived from them remain within the scope of the invention.

Claims (13)

1. A method for extracting an object in an ultrasound image, characterized by comprising:
acquiring an ultrasound image containing a target object;
performing image segmentation on the ultrasound image to obtain a segmented image containing a plurality of feature regions;
connecting, according to the direction information of each feature region, a plurality of feature regions on the segmented image that satisfy a preset direction condition, to obtain a target image containing the complete target object; and
extracting the target object in the target image.
2. The method for extracting an object in an ultrasound image according to claim 1, wherein the performing image segmentation on the ultrasound image to obtain a segmented image containing a plurality of feature regions comprises:
performing binarization processing on the ultrasound image to obtain a binary image containing a plurality of feature regions.
3. The method for extracting an object in an ultrasound image according to claim 2, wherein the connecting a plurality of feature regions satisfying a preset direction condition on the segmented image according to the direction information of each feature region comprises:
acquiring end points of the feature regions;
determining whether a preset area around one end point contains other end points; and
when the preset area around the one end point contains other end points, extracting from them another end point that is not in the same feature region as the one end point, wherein the difference between the direction value of the other end point in the direction field of the ultrasound image and the direction value of the one end point in the direction field of the ultrasound image is within a preset range; and
connecting the one end point and the other end point.
4. The method for extracting an object in an ultrasound image according to claim 3, wherein the connecting the one end point and the other end point comprises:
filling pixels between the one end point and the other end point to obtain a connection region.
5. The method for extracting an object in an ultrasound image according to claim 4, wherein the acquiring end points of the feature regions comprises:
thinning the feature regions on the binary image to obtain the end points of the thinned feature regions;
and after the connecting the one end point and the other end point, the method further comprises:
performing dilation processing on the connection region, and superimposing the dilated connection region on the binary image.
6. The method for extracting an object in an ultrasound image according to claim 1, wherein the performing image segmentation on the ultrasound image comprises:
filtering the ultrasound image, and performing image segmentation on the filtered ultrasound image.
7. The method for extracting an object in an ultrasound image according to claim 1, wherein between the performing image segmentation on the ultrasound image and the connecting the feature regions on the segmented image according to the direction information of the feature regions, the method further comprises:
screening the segmented image based on features of the target object.
8. The method for extracting an object in an ultrasound image according to claim 1, wherein the target object comprises a fetal humerus and/or femur.
9. A method for measuring an object in an ultrasound image, characterized by comprising:
acquiring a target object on an ultrasound image, wherein the target object is obtained using the method for extracting an object in an ultrasound image according to any one of claims 1 to 8; and
measuring a parameter of the target object.
10. The method for measuring an object in an ultrasound image according to claim 9, wherein the measuring a parameter of the target object comprises:
performing straight-line fitting on the target object; and
calculating the length of a line segment between two intersection points of the fitted straight line and the target object.
11. An apparatus for extracting an object in an ultrasound image, characterized by comprising:
an image acquisition module for acquiring an ultrasound image containing a target object;
an image segmentation module for performing image segmentation on the ultrasound image to obtain a segmented image containing a plurality of feature regions;
a connection module for connecting, according to the direction information of the feature regions, a plurality of feature regions on the segmented image that satisfy a preset direction condition, to obtain a target image containing a complete target object; and
an extraction module for extracting the target object in the target image.
12. An apparatus for measuring an object in an ultrasound image, characterized by comprising:
an obtaining module for acquiring a target object on an ultrasound image, wherein the target object is obtained using the method for extracting an object in an ultrasound image according to any one of claims 1 to 8; and
a measurement module for measuring a parameter of the target object.
13. A medical device, characterized by comprising:
a processor; and
a memory communicatively coupled to the processor and storing computer-readable instructions executable by the processor, wherein, when the computer-readable instructions are executed, the processor performs the method for extracting an object in an ultrasound image according to any one of claims 1 to 8.
CN201910574501.3A 2019-06-28 2019-06-28 Method and device for extracting and measuring object in ultrasonic image Pending CN112233122A (en)

Publications (1)

Publication Number Publication Date
CN112233122A 2021-01-15




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination