WO2018095058A1 - Three-dimensional ultrasonic fetal face profile image processing method and system - Google Patents

Three-dimensional ultrasonic fetal face profile image processing method and system Download PDF

Info

Publication number
WO2018095058A1
Authority
WO
WIPO (PCT)
Prior art keywords
slice
target area
boundary
fetal
frame
Prior art date
Application number
PCT/CN2017/093457
Other languages
French (fr)
Chinese (zh)
Inventor
黄柳倩
孙慧
艾金钦
刘旭江
喻美媛
Original Assignee
深圳开立生物医疗科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 深圳开立生物医疗科技股份有限公司
Publication of WO2018095058A1 publication Critical patent/WO2018095058A1/en

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5215Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data

Definitions

  • the present invention relates to the field of ultrasonic imaging technology, and in particular, to an ultrasonic three-dimensional fetal facial contour image processing method and system.
  • the three-dimensional ultrasound imaging system builds on traditional two-dimensional ultrasound: it acquires a sequence of spatially ordered two-dimensional ultrasound images and, according to the spatial positions at which the data were acquired, reconstructs the volume data through steps such as scan conversion.
  • the three-dimensional ultrasound imaging system provides the extra dimension of spatial information that traditional two-dimensional ultrasound cannot, making clinical diagnosis and observation more intuitive and flexible and making communication between doctor and patient smoother. Because its information is rich and intuitive, three-dimensional ultrasound is currently used mainly for observing fetal morphology in obstetrics, especially the fetal face.
  • however, because of the particular imaging environment, the front of the fetal face may be blocked by the placenta, umbilical cord, arm, or uterine wall, so the acquired three-dimensional volume data may contain placenta, matter suspended in the amniotic fluid, umbilical cord, uterine tissue and the like, which occlude the imaging target and make it difficult to observe.
  • although current ultrasound examination equipment usually offers an interactive volume-cropping function with which the examiner can cut away the occluding portion, the operation is cumbersome and time-consuming.
  • the invention provides an ultrasonic three-dimensional fetal facial contour image processing method and system that crop the volume data using trusted boundary points, so that the portion occluding the fetal face is cut away automatically; this simplifies the examiner's operation and improves the success rate of three-dimensional ultrasound imaging.
  • the present invention adopts the following technical solutions:
  • an ultrasonic three-dimensional fetal facial contour image processing method comprising: detecting multi-frame slices of fetal volume data in a predetermined direction to obtain a target area of each frame slice, the target area including a fetal head area; screening out the slices that contain the target area, and performing facial boundary detection on the screened slices to obtain trusted boundary points; and
  • cropping the fetal volume data according to the trusted boundary points to obtain an ultrasound three-dimensional fetal facial contour image.
  • after the step of screening out the slices that contain the target area, the method further includes:
  • when the difference between the position of the current frame slice target area and the position of the adjacent slice target area exceeds a predetermined threshold, correcting the position of the current frame slice target area.
  • the step of correcting the location of the current frame slice target area includes:
  • the center point coordinate of the current frame slice target area is replaced with the center point coordinate of the target slice target area.
  • the step of performing facial boundary detection on the selected slice to obtain a trusted boundary point includes:
  • before the step of acquiring a plurality of trusted boundary points according to the candidate segmentation boundary, the method further includes a boundary growth step.
  • the step of acquiring multiple trusted boundary points according to the candidate segmentation boundary includes:
  • the maximum value of each column in the voting matrix is counted, and the point corresponding to the maximum value is determined as a trusted boundary point.
  • the step of cropping the fetal volume data according to the trusted boundary point to obtain an ultrasound three-dimensional fetal facial contour image includes:
  • a cropping template is made according to the trusted boundary points, and the fetal volume data is cropped according to the cropping template to obtain an ultrasound three-dimensional fetal facial contour image.
  • the step of detecting the multi-frame slice of the fetal volume data in a predetermined direction to obtain the target area of each frame slice includes:
  • the target area and its corresponding slice are saved.
  • an ultrasound three-dimensional fetal facial contour image processing system comprising:
  • a target area detecting module configured to detect multi-frame slices of the fetal volume data in a predetermined direction to obtain a target area of each frame slice, wherein the target area includes a fetal head area;
  • a trusted boundary point acquiring module configured to screen out the slices containing the target area and perform facial boundary detection on the screened slices to obtain trusted boundary points;
  • a cropping module configured to crop the fetal volume data according to the trusted boundary point to obtain an ultrasound three-dimensional fetal facial contour image.
  • the system further includes: a correction module, configured to correct a position of the current frame slice target area when a difference between a position of the current frame slice target area and a position of the adjacent slice target area exceeds a predetermined threshold.
  • the correction module is further configured to: traverse all the screened slices to obtain a <frame number, target area> sequence; obtain, for each frame in the <frame number, target area> sequence, the deviation between the position of that frame's slice target area and the position of its adjacent slice target area; calculate the mean of the deviations; and, when the deviation of the position of the current frame slice target area from the position of its adjacent slice target area is greater than the mean, replace the center point coordinates of the current frame slice target area with the center point coordinates of the target slice target area.
  • the trusted boundary point acquiring module is further configured to: obtain the transition boundary from darker region to brighter region in each screened slice; determine the face region of each screened slice according to the gray values and contour shape of the connected region in which the transition boundary lies, and take the upper surface boundary of the face region as a candidate segmentation boundary; and acquire a plurality of trusted boundary points according to the candidate segmentation boundaries.
  • the trusted boundary point acquisition module comprises a boundary growth unit, and the boundary growth unit is used for boundary growth.
  • the trusted boundary point acquiring module is further configured to: construct, according to the candidate segmentation boundaries, a boundary matrix corresponding to each screened slice; superimpose all the boundary matrices onto an accumulation matrix to obtain a voting matrix; and count the maximum value of each column in the voting matrix, determining the point corresponding to the maximum value as a trusted boundary point.
  • the cropping module is further configured to: make a cropping template according to the trusted boundary points; and crop the fetal volume data according to the cropping template to obtain an ultrasound three-dimensional fetal facial contour image.
  • the target area detecting module is further configured to: detect, using a preset classifier, the target area in each frame slice of the fetal volume data in a predetermined direction; and save the target area and its corresponding slice.
  • the beneficial effects of the present invention are: multi-frame slices of fetal volume data in a predetermined direction are detected to obtain a target region of each frame slice, the target region including the fetal head region; the slices containing the target region are screened out and facial boundary detection is performed on the screened slices to obtain trusted boundary points; and the fetal volume data is cropped according to the trusted boundary points to obtain an ultrasound three-dimensional fetal facial contour image.
  • by cropping the volume data with the trusted boundary points, the invention automatically cuts away the portion occluding the fetal face, simplifying the examiner's operation and improving the success rate of three-dimensional ultrasound imaging.
  • FIG. 1 is a flow chart of a method of an embodiment of an ultrasonic three-dimensional fetal facial contour image processing method provided in an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of a multi-frame slice of fetal volume data in a predetermined direction provided in an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a process for correcting the position of a current frame slice target area provided in an embodiment of the present invention.
  • Figure 4a is a schematic illustration of a target area prior to correction in accordance with an embodiment of the present invention.
  • Figure 4b is a schematic illustration of the target area of Figure 4a after correction.
  • FIG. 5 is a schematic diagram of a process for performing facial boundary detection on a selected slice to obtain a trusted boundary point according to an embodiment of the present invention.
  • Figure 6a is a schematic illustration of alternative segmentation boundaries provided in an embodiment of the invention.
  • Figure 6b is a schematic illustration of the alternate segmentation boundary of Figure 6a after boundary growth.
  • FIG. 7 is a schematic diagram of a process for acquiring multiple trusted boundary points according to an alternative segmentation boundary according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a process of cropping fetal body data according to a trusted boundary point to obtain an ultrasound three-dimensional fetal facial contour image according to an embodiment of the present invention.
  • FIG. 9 is a cropping template provided in an embodiment of the present invention.
  • Figure 10a is a schematic illustration of a sagittal direction slice prior to trimming in accordance with an embodiment of the present invention.
  • Figure 10b is a schematic illustration of the slice of Figure 10a after cropping.
  • FIG. 11 is a block diagram showing the structure of an embodiment of an ultrasonic three-dimensional fetal facial contour image processing system provided in an embodiment of the present invention.
  • FIG. 1 is a flowchart of an ultrasonic three-dimensional fetal facial contour image processing method provided in an embodiment of the present invention. As shown in Figure 1, the method includes:
  • Step S101: detect multi-frame slices of the fetal volume data in a predetermined direction to obtain the target area of each frame slice, wherein the target area includes a fetal head area.
  • the target areas of the multi-frame slices of the fetal volume data in the predetermined direction are detected, so as to acquire the target area of at least one frame slice.
  • the target area includes a fetal head region
  • the predetermined direction may be a facet direction of the fetal volume data, such as a planar direction parallel to the transducer array direction.
  • the 3D/4D imaging examiner uses this plane direction to obtain the sagittal plane of the fetus, and the sagittal plane can also be replaced with a characteristic section such as the coronal plane.
  • the predetermined direction may also be obtained by traversing the plane directions of the three axes of the volume data; of course, it may also be a predetermined direction determined by other algorithms, which are not described one by one here.
  • the target region detection uses the Histogram of Oriented Gradients (HOG) feature extraction algorithm and an AdaBoost classifier.
  • the classifier is trained in advance on data of the fetal sagittal head region and is configured according to the training result; the preset classifier then automatically locates the sagittal-plane target region on each frame slice of the fetal volume data in the predetermined direction, and the target region and its corresponding slice are saved.
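As an illustration of the detection step described above, the sketch below scans each slice with a sliding window, computes HOG features and scores them with an AdaBoost classifier. It is only a minimal sketch: the window size, stride, HOG parameters and the pretrained classifier `clf` are assumptions made for illustration, not values or models disclosed in the patent.

```python
# Illustrative sketch: locate the fetal-head ROI on each slice with HOG features
# and an AdaBoost classifier. Window size, stride and the pretrained classifier
# are placeholders, not the patent's actual trained model.
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import AdaBoostClassifier

WIN = 64    # sliding-window size in pixels (assumed)
STEP = 16   # window stride (assumed)

def detect_head_roi(slice_img: np.ndarray, clf: AdaBoostClassifier):
    """Return (x, y, WIN, WIN) of the most head-like window, or None."""
    best_score, best_box = -np.inf, None
    h, w = slice_img.shape
    for y in range(0, h - WIN + 1, STEP):
        for x in range(0, w - WIN + 1, STEP):
            feat = hog(slice_img[y:y + WIN, x:x + WIN],
                       orientations=9, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2))
            score = clf.decision_function(feat.reshape(1, -1))[0]
            if score > best_score:
                best_score, best_box = score, (x, y, WIN, WIN)
    return best_box if best_score > 0 else None

def detect_all_slices(volume: np.ndarray, clf: AdaBoostClassifier):
    """Scan the volume along the predetermined axis and keep <frame, ROI> pairs."""
    return {k: roi for k, sl in enumerate(volume)
            if (roi := detect_head_roi(sl, clf)) is not None}
```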
  • Step S102: screen out the slices containing the target area, and perform facial boundary detection on the screened slices to obtain trusted boundary points.
  • not all of the multi-frame slices of the fetal volume data in the predetermined direction necessarily contain the target area, so the slices containing the target area need to be screened out.
  • the gray values and contour shape of the connected region in which each frame slice's transition boundary lies are analysed and compared with pre-stored face gray values and face contour shapes to find the face region of each frame slice; the upper surface boundary of the face region is taken as the face boundary, each face boundary point is voted on according to the face boundaries of all the frame slices, and the points through which the most face boundaries pass are determined to be the trusted boundary points.
  • Step S103: crop the fetal volume data according to the trusted boundary points to obtain an ultrasound three-dimensional fetal facial contour image.
  • a template is made from the trusted boundary points acquired in step S102, and the fetal volume data is cropped according to the template to obtain the image of the ultrasound three-dimensional fetal facial contour.
  • the portion occluding the fetal face is thus cut away automatically, which simplifies the examiner's operation and improves the success rate of three-dimensional ultrasound imaging.
  • the ultrasound three-dimensional fetal facial contour image processing method of the above embodiment detects multi-frame slices of the fetal volume data in a predetermined direction to acquire the target region of each frame slice, screens out the slices containing the target region, performs facial boundary detection on the screened slices to obtain trusted boundary points, and crops the fetal volume data according to the trusted boundary points to obtain an ultrasound three-dimensional fetal facial contour image.
  • by cropping the volume data with the trusted boundary points, the invention automatically cuts away the portion occluding the fetal face, making the operation simple and fast, reducing errors caused by manual operation, and yielding higher image quality.
  • in one embodiment, after the step of screening out the slices containing the target area, the method further includes: when the difference between the position of the current frame slice target area and the position of the adjacent slice target area exceeds a predetermined threshold, correcting the position of the current frame slice target area. The predetermined threshold is not specifically limited here and can be chosen according to the accuracy required by the actual application.
  • an adjacent slice is a slice whose position is adjacent to that of the current frame slice. For example, as shown in FIG. 2, in the slice sequence along the Z axis, a region of interest (ROI) can be detected in the slices numbered 1, 3, 5 and 6; the neighbouring ROIs of the ROI in slice 3 are then the ROIs of slices 1 and 5.
  • the step of correcting the position of the current frame slice target area includes:
  • Step S301: traverse all the screened slices to obtain a <frame number, target area> sequence.
  • Step S302: obtain the deviation between the position of each frame slice target area in the <frame number, target area> sequence and the position of its adjacent slice target area.
  • in this embodiment, the deviation comprises the X-coordinate deviation and the Y-coordinate deviation between the center point of each frame slice target area and the center point of its neighbouring slice target area.
  • the X-coordinate deviations and Y-coordinate deviations obtained are stored, respectively, in a <frame number, X-coordinate deviation from the center point of the adjacent slice target area> sequence and a <frame number, Y-coordinate deviation from the center point of the adjacent slice target area> sequence.
  • Step S303 Calculate the mean value of the deviation.
  • the mean value includes the mean value of the X coordinate deviation and the mean value of the Y coordinate deviation.
  • from the <frame number, X-coordinate deviation from the center point of the adjacent slice target area> and <frame number, Y-coordinate deviation from the center point of the adjacent slice target area> sequences obtained in the above steps, the maximum value of each deviation sequence is first removed;
  • the mean of the X-coordinate deviations and the mean of the Y-coordinate deviations of the two sequences are then calculated, giving the mean deviation between each frame slice target area and its neighbouring slice target areas.
  • Step S304 When the deviation of the position of the current frame slice target area from the position of the adjacent slice target area is greater than the mean value, the center point coordinate of the current frame slice target area is replaced with the center point coordinate of the target slice target area.
  • here the target slice is the slice nearest to the current frame slice among those whose target area position deviates from its adjacent slice target areas by less than the mean.
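A minimal sketch of the correction in steps S301 to S304 might look as follows. The `{frame_number: (cx, cy)}` layout is an assumption, and for brevity the sketch replaces the whole centre point of an outlier, whereas the detailed embodiment corrects the X and Y coordinates independently.

```python
# Illustrative sketch: compare each ROI centre with its neighbours and replace
# outliers with the centre of the nearest slice whose deviation stays below the
# mean. Data layout and the simplifications noted above are assumptions.
import numpy as np

def correct_rois(rois: dict) -> dict:
    """rois: {frame_number: (cx, cy)} centre points of the detected target areas."""
    frames = sorted(rois)
    dev = {}
    for i, f in enumerate(frames):
        nbrs = [rois[frames[j]] for j in (i - 1, i + 1) if 0 <= j < len(frames)]
        if not nbrs:                          # single detection: nothing to compare with
            dev[f] = (0.0, 0.0)
            continue
        nx = np.mean([p[0] for p in nbrs])
        ny = np.mean([p[1] for p in nbrs])
        dev[f] = (abs(rois[f][0] - nx), abs(rois[f][1] - ny))
    # drop the largest deviation before averaging, as the description suggests
    mean_dx = np.mean(sorted(d[0] for d in dev.values())[:-1] or [0.0])
    mean_dy = np.mean(sorted(d[1] for d in dev.values())[:-1] or [0.0])
    corrected = dict(rois)
    for f in frames:
        if dev[f][0] > mean_dx or dev[f][1] > mean_dy:
            # take the centre of the nearest slice whose deviation stays below the means
            for g in sorted(frames, key=lambda k: abs(k - f)):
                if g != f and dev[g][0] <= mean_dx and dev[g][1] <= mean_dy:
                    corrected[f] = rois[g]
                    break
    return corrected
```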
  • in one embodiment, as shown in FIG. 5, the step of performing facial boundary detection on the screened slices to obtain trusted boundary points includes:
  • Step S501: obtain the transition boundary from darker region to brighter region in each screened slice.
  • the boundary detection operator is used to find the transition boundary of each frame slice from the darker region to the brighter region.
  • the boundary detection operators here include, but are not limited to, the Prewitt operator and the Sobel operator.
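For illustration, a dark-to-bright transition of the kind described above could be located per image column with a Sobel gradient, as in the sketch below; the gradient threshold is an assumed value, not one given in the patent.

```python
# Illustrative sketch: in each column of a screened slice, find the first strong
# dark-to-bright transition along the beam direction using a Sobel gradient.
import numpy as np
from scipy.ndimage import sobel

def transition_boundary(slice_img: np.ndarray, thresh: float = 50.0) -> np.ndarray:
    """Return one row index per column (-1 where no transition is found)."""
    grad_y = sobel(slice_img.astype(float), axis=0)   # positive where intensity rises downward
    boundary = np.full(slice_img.shape[1], -1, dtype=int)
    for col in range(slice_img.shape[1]):
        rows = np.nonzero(grad_y[:, col] > thresh)[0]
        if rows.size:
            boundary[col] = rows[0]                    # first dark-to-bright transition
    return boundary
```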
  • Step S502: determine the face region of each frame slice according to the gray values and contour shape of the connected region in which the transition boundary lies, and take the upper surface boundary of the face region as the candidate segmentation boundary, as shown in FIG. 6a.
  • the gray values and contour shape of the connected region in which each frame slice's transition boundary lies are analysed and compared with the pre-stored face gray values and face contour shapes to find the face region of each frame slice, and the upper surface boundary of the face region is used as the candidate segmentation boundary.
  • Step S503 acquiring a plurality of trusted boundary points according to the candidate segmentation boundary acquired in the above step.
  • each face boundary point is voted on according to the candidate segmentation boundaries of all the frame slices, and the points through which the most candidate segmentation boundaries pass are determined to be the trusted boundary points.
  • in one embodiment, a boundary growth step is further included before the step of acquiring a plurality of trusted boundary points according to the candidate segmentation boundaries of each frame slice.
  • because the start and end points of the candidate segmentation boundaries of different frame slices may be inconsistent, which would affect the subsequent acquisition of the trusted boundary, boundary growth is used to obtain the complete face boundary of each slice from left to right, improving the accuracy of the boundary detection.
  • the boundary growth is performed on the gradient image: taking the left and right end points of the candidate segmentation boundary as growth points in the lateral direction, boundary points are searched for on the gradient image within the neighbourhood of the current growth point, and the found boundary points are added to the current candidate segmentation boundary until the complete face boundary of each frame slice is obtained.
  • the contrast effects before and after the boundary growth are shown in Figures 6a and 6b.
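The boundary growth just described could be sketched as follows: starting from the left and right end points of the candidate boundary, each new column takes the strongest gradient row within a small neighbourhood of the previous boundary row. The neighbourhood radius is an assumption for illustration.

```python
# Illustrative sketch of lateral boundary growth on the gradient image.
import numpy as np
from scipy.ndimage import sobel

def grow_boundary(slice_img: np.ndarray, boundary: np.ndarray, radius: int = 3) -> np.ndarray:
    """boundary: per-column row index of the candidate boundary, -1 where missing."""
    grad = np.abs(sobel(slice_img.astype(float), axis=0))
    grown = boundary.copy()
    cols = np.nonzero(grown >= 0)[0]
    if cols.size == 0:
        return grown
    for step, start in ((+1, cols[-1]), (-1, cols[0])):   # grow to the right, then to the left
        row = int(grown[start])
        col = start + step
        while 0 <= col < slice_img.shape[1]:
            lo = max(row - radius, 0)
            hi = min(row + radius + 1, slice_img.shape[0])
            row = lo + int(np.argmax(grad[lo:hi, col]))    # strongest edge near the previous row
            grown[col] = row
            col += step
    return grown
```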
  • the step of acquiring a plurality of trusted boundary points according to the candidate segmentation boundary includes:
  • Step S701: construct, according to the candidate segmentation boundaries, a boundary matrix corresponding to each screened slice.
  • the dimensions of the boundary matrices are all the same; the specific size can be chosen as appropriate.
  • the background points of the boundary matrix are set to 0 and the boundary points are set to 1; of course, other values can be used as long as background and boundary points can be distinguished.
  • the boundary points are the points corresponding to the candidate segmentation boundary in each frame slice.
  • Step S702: superimpose all the boundary matrices onto an accumulation matrix to obtain the voting matrix. The accumulation matrix may be a zero matrix of the same dimensions as the boundary matrices, so that candidate segmentation boundaries passing through the same point each cast a vote for that point.
  • Step S703: count the maximum value of each column in the voting matrix, and determine the point corresponding to the maximum value as a trusted boundary point.
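Steps S701 to S703 amount to a simple voting scheme. A minimal sketch, assuming each candidate boundary is stored as a per-column array of row indices, is given below.

```python
# Illustrative sketch: one binary boundary matrix per screened slice, summed into
# a voting matrix; the row with the most votes in each column becomes the
# trusted boundary point for that column.
import numpy as np

def trusted_boundary(boundaries, shape) -> np.ndarray:
    """boundaries: list of per-column row-index arrays, one per screened slice."""
    votes = np.zeros(shape, dtype=int)       # accumulation matrix, initially all zeros
    for b in boundaries:
        for col, row in enumerate(b):
            if row >= 0:
                votes[row, col] += 1         # this slice's boundary votes for (row, col)
    return np.argmax(votes, axis=0)          # per-column maximum = trusted boundary point
```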
  • the step of cropping the fetal volume data according to the trusted boundary point to obtain an ultrasound three-dimensional fetal facial contour image includes:
  • Step S801: create a cropping template according to the trusted boundary points.
  • the main operation in cropping the fetal volume data is making the cropping template: the trusted boundary points obtained from the voting, the lower-right corner of the image and the lower-left corner of the image are filled in as a closed region, as shown in FIG. 9, which is a cropping template provided in an embodiment of the present invention.
  • Step S802: crop the fetal volume data according to the cropping template to obtain an ultrasound three-dimensional fetal facial contour image.
  • the cropping template obtained in the above step is "ANDed" with each screened frame slice; that is, the data of each frame slice lying in the white area of the cropping template is retained and the data in the black portion is deleted.
  • FIGS. 10a and 10b are schematic diagrams of a sagittal-direction slice before and after cropping, respectively, provided in an embodiment of the present invention.
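A minimal sketch of steps S801 and S802 is given below, assuming image rows increase away from the transducer so that the region kept by the template is everything from the trusted boundary down to the bottom of the image; the data layout is likewise an assumption.

```python
# Illustrative sketch: build a binary cropping template from the trusted boundary
# and "AND" it with every slice so only the data below the face boundary is kept.
import numpy as np

def make_template(trusted_rows: np.ndarray, shape) -> np.ndarray:
    """trusted_rows: per-column row index of the trusted boundary."""
    template = np.zeros(shape, dtype=bool)
    for col, row in enumerate(trusted_rows):
        template[row:, col] = True            # keep everything from the boundary downward
    return template

def crop_volume(volume: np.ndarray, template: np.ndarray) -> np.ndarray:
    """AND the template with each slice: voxels outside the template are zeroed."""
    return volume * template[np.newaxis, :, :]
```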
  • the method further comprises: three-dimensional rendering of the cropped fetal volume data.
  • the cropped volume data is rendered, and the rendering can be performed by a three-dimensional rendering method such as the well-known "ray casting method" to obtain a more intuitive image of the fetal facial contour.
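The patent leaves the renderer open (it cites the well-known ray casting method). Purely for illustration, a toy front-to-back compositing renderer over the cropped volume could look like the sketch below; the opacity mapping is an arbitrary assumption, and a real system would use the scanner's own renderer.

```python
# Illustrative sketch: front-to-back alpha compositing of the cropped volume
# along the viewing axis (a very simplified ray-casting style renderer).
import numpy as np

def render_front_to_back(volume: np.ndarray, opacity_scale: float = 0.1) -> np.ndarray:
    """volume: (depth, height, width) cropped grey-level data; rays run along axis 0."""
    image = np.zeros(volume.shape[1:], dtype=float)
    transmittance = np.ones_like(image)
    for depth_slice in volume.astype(float):
        alpha = np.clip(depth_slice / 255.0 * opacity_scale, 0.0, 1.0)  # ad-hoc opacity transfer
        image += transmittance * alpha * depth_slice
        transmittance *= 1.0 - alpha
    return np.clip(image, 0, 255).astype(np.uint8)
```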
  • in summary, face detection is performed on each frame slice that contains the target area to obtain the candidate segmentation boundary of each frame slice; each face boundary point is then voted on using the candidate segmentation boundaries of all the slices, and the points through which the most candidate segmentation boundaries pass become the trusted boundary points.
  • boundary growth may be performed after the candidate segmentation boundaries are obtained; the cropping template is then made according to the trusted boundary points and used to crop the fetal volume data automatically, cutting away the portion occluding the fetal face, which makes the operation simple and fast, reduces errors caused by manual operation, and yields higher image quality.
  • the following is an embodiment of an ultrasonic three-dimensional fetal facial contour image processing system provided in an embodiment of the present invention.
  • the embodiment of the ultrasonic three-dimensional fetal facial contour image processing system is implemented based on the above-described embodiment of the ultrasonic three-dimensional fetal facial contour image processing method.
  • For an exhaustive description of the ultrasound three-dimensional fetal facial contour image processing system please refer to the aforementioned embodiment of the ultrasonic three-dimensional fetal facial contour image processing method.
  • FIG. 11 is a structural block diagram of an embodiment of an ultrasonic three-dimensional fetal facial contour image processing system provided in an embodiment of the present invention. As shown, the system includes:
  • the target area detecting module 111 is configured to detect a multi-frame slice of the fetal volume data in a predetermined direction to obtain a target area of each frame slice, wherein the target area includes a fetal head area.
  • the cropping module 113 is further configured to: create a cropping template according to the trusted boundary points; and crop the fetal volume data according to the cropping template to obtain an ultrasound three-dimensional fetal facial contour image.
  • the correction module is further configured to: traverse all the screened slices to obtain a <frame number, target area> sequence; obtain, for each frame in the sequence, the deviation between the position of that frame's slice target area and the position of its adjacent slice target area; calculate the mean of the deviations; and, when the deviation of the position of the current frame slice target area from the position of its adjacent slice target area is greater than the mean, replace the center point coordinates of the current frame slice target area with the center point coordinates of the target slice target area.
  • the trusted boundary point acquiring module is further configured to: obtain the transition boundary from darker region to brighter region in each screened slice; determine the face region of each screened slice according to the gray values and contour shape of the connected region in which the transition boundary lies, and take the upper surface boundary of the face region as a candidate segmentation boundary; and acquire a plurality of trusted boundary points according to the candidate segmentation boundaries.
  • the trusted boundary point acquisition module includes a boundary growth unit for boundary growth.
  • the trusted boundary point acquiring module is further configured to: construct, according to the candidate segmentation boundaries, a boundary matrix corresponding to each screened slice; superimpose all the boundary matrices onto the accumulation matrix to obtain a voting matrix; and count the maximum value of each column in the voting matrix, determining the point corresponding to the maximum value as a trusted boundary point.
  • the target area detecting module 111 is further configured to: detect, by using a preset classifier, the target area of each frame slice of the fetal volume data in a predetermined direction; and save the target area and its corresponding slice.
  • the system further includes a rendering module for three-dimensional rendering of the cropped fetal volume data.
  • the ultrasound three-dimensional fetal facial contour image processing system of this embodiment is used to implement the aforementioned ultrasound three-dimensional fetal facial contour image processing method; for the specific embodiments of the system, reference may therefore be made to the embodiments of the method described above.
  • for example, the target area detecting module 111, the trusted boundary point acquiring module 112 and the cropping module 113 are used to implement steps S101, S102 and S103, respectively, of the above-described ultrasound three-dimensional fetal facial contour image processing method.
  • the ultrasound three-dimensional fetal facial contour image processing system performs face detection on the slices containing the target region to obtain the face boundary of each frame slice, and corrects the position of any target area whose deviation from the position of the adjacent slice target areas exceeds the set value, improving detection accuracy; face boundary detection is then performed on the target area of each slice, the face boundary points are voted on, and the points through which the most face boundaries pass become the trusted boundary points; the fetal volume data is cropped with the trusted boundary points, automatically cutting away the portion occluding the fetal face, which simplifies the examiner's operation and improves the success rate of three-dimensional ultrasound imaging.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Radiology & Medical Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

A three-dimensional ultrasonic fetal face profile image processing method and system. The method comprises: S101: detecting multi-frame slices of fetal volume data in a predetermined direction to acquire a target area in each frame slice, the target area comprising a fetal head area; S102: screening out the slices comprising the target area and performing facial boundary detection on the screened-out slices to obtain trusted boundary points; and S103: cropping the fetal volume data according to the trusted boundary points to obtain a three-dimensional ultrasonic fetal face profile image. Because the volume data is cropped using the trusted boundary points, the occluded part of the fetal face can be cut away automatically, which simplifies the examiner's operation and improves the success rate of three-dimensional ultrasonic imaging.

Description

Ultrasonic three-dimensional fetal facial contour image processing method and system

This application claims priority to Chinese Patent Application No. 201611055976.4, entitled "一种图像转换方法及装置" (an image conversion method and apparatus), filed with the Chinese Patent Office on November 22, 2016, the entire contents of which are incorporated herein by reference.

Technical Field

The present invention relates to the field of ultrasonic imaging technology, and in particular to an ultrasonic three-dimensional fetal facial contour image processing method and system.

Background

The three-dimensional ultrasound imaging system builds on traditional two-dimensional ultrasound: it acquires a sequence of spatially ordered two-dimensional ultrasound images and, according to the spatial positions at which the data were acquired, reconstructs the volume data through steps such as scan conversion. Three-dimensional ultrasound provides the extra dimension of spatial information that traditional two-dimensional ultrasound cannot, making clinical diagnosis and observation more intuitive and flexible and making communication between doctor and patient smoother. Because its information is rich and intuitive, three-dimensional ultrasound is currently used mainly for observing fetal morphology in obstetrics, especially the fetal face. However, because of the particular imaging environment, the front of the fetal face may be blocked by the placenta, umbilical cord, arm, or uterine wall when the face is rendered three-dimensionally, so the acquired three-dimensional volume data may contain placenta, matter suspended in the amniotic fluid, umbilical cord, uterine tissue and the like, which occlude the imaging target and make it difficult to observe.

Although current ultrasound examination equipment usually offers an interactive volume-cropping function with which the examiner can cut away the occluding portion, the operation is cumbersome and time-consuming.

Summary of the Invention

The present invention provides an ultrasonic three-dimensional fetal facial contour image processing method and system that crop the volume data using trusted boundary points, so that the portion occluding the fetal face is cut away automatically; this simplifies the examiner's operation and improves the success rate of three-dimensional ultrasound imaging.

To achieve the above design, the present invention adopts the following technical solutions:
In one aspect, an ultrasonic three-dimensional fetal facial contour image processing method is provided, the method comprising:

detecting multi-frame slices of fetal volume data in a predetermined direction to obtain a target area of each frame slice, wherein the target area includes a fetal head area;

screening out the slices that contain the target area, and performing facial boundary detection on the screened slices to obtain trusted boundary points; and

cropping the fetal volume data according to the trusted boundary points to obtain an ultrasound three-dimensional fetal facial contour image.
Wherein, after the step of screening out the slices that contain the target area, the method further includes:

when the difference between the position of the current frame slice target area and the position of the adjacent slice target area exceeds a predetermined threshold, correcting the position of the current frame slice target area.

Wherein, the step of correcting the position of the current frame slice target area includes:

traversing all the screened slices to obtain a <frame number, target area> sequence;

obtaining, for each frame in the <frame number, target area> sequence, the deviation between the position of that frame's slice target area and the position of its adjacent slice target area;

calculating the mean of the deviations; and

when the deviation of the position of the current frame slice target area from the position of its adjacent slice target area is greater than the mean, replacing the center point coordinates of the current frame slice target area with the center point coordinates of the target slice target area.
Wherein, the step of performing facial boundary detection on the screened slices to obtain trusted boundary points includes:

obtaining the transition boundary from darker region to brighter region in each screened slice;

determining the face region of each screened slice according to the gray values and contour shape of the connected region in which the transition boundary lies, and taking the upper surface boundary of the face region as a candidate segmentation boundary; and

acquiring a plurality of trusted boundary points according to the candidate segmentation boundaries.
Wherein, before the step of acquiring a plurality of trusted boundary points according to the candidate segmentation boundaries, the method further includes a boundary growth step.

Wherein, the step of acquiring a plurality of trusted boundary points according to the candidate segmentation boundaries includes:

constructing, according to the candidate segmentation boundaries, a boundary matrix corresponding to each screened slice;

superimposing all the boundary matrices onto an accumulation matrix to obtain a voting matrix; and

counting the maximum value of each column in the voting matrix and determining the point corresponding to the maximum value as a trusted boundary point.
Wherein, the step of cropping the fetal volume data according to the trusted boundary points to obtain an ultrasound three-dimensional fetal facial contour image includes:

making a cropping template according to the trusted boundary points; and

cropping the fetal volume data according to the cropping template to obtain an ultrasound three-dimensional fetal facial contour image.

Wherein, the step of detecting multi-frame slices of the fetal volume data in a predetermined direction to obtain the target area of each frame slice includes:

detecting, using a preset classifier, the target area in each frame slice of the fetal volume data in the predetermined direction; and

saving the target area and its corresponding slice.
In another aspect, an ultrasound three-dimensional fetal facial contour image processing system is provided, the system comprising:

a target area detecting module, configured to detect multi-frame slices of the fetal volume data in a predetermined direction to obtain a target area of each frame slice, wherein the target area includes a fetal head area;

a trusted boundary point acquiring module, configured to screen out the slices containing the target area and perform facial boundary detection on the screened slices to obtain trusted boundary points; and

a cropping module, configured to crop the fetal volume data according to the trusted boundary points to obtain an ultrasound three-dimensional fetal facial contour image.
Wherein, the system further includes a correction module, configured to correct the position of the current frame slice target area when the difference between the position of the current frame slice target area and the position of the adjacent slice target area exceeds a predetermined threshold.

The correction module is further configured to: traverse all the screened slices to obtain a <frame number, target area> sequence; obtain, for each frame in the sequence, the deviation between the position of that frame's slice target area and the position of its adjacent slice target area; calculate the mean of the deviations; and, when the deviation of the position of the current frame slice target area from the position of its adjacent slice target area is greater than the mean, replace the center point coordinates of the current frame slice target area with the center point coordinates of the target slice target area.

Wherein, the trusted boundary point acquiring module is further configured to: obtain the transition boundary from darker region to brighter region in each screened slice; determine the face region of each screened slice according to the gray values and contour shape of the connected region in which the transition boundary lies, and take the upper surface boundary of the face region as a candidate segmentation boundary; and acquire a plurality of trusted boundary points according to the candidate segmentation boundaries.

Wherein, the trusted boundary point acquiring module includes a boundary growth unit, and the boundary growth unit is used for boundary growth.

Wherein, the trusted boundary point acquiring module is further configured to: construct, according to the candidate segmentation boundaries, a boundary matrix corresponding to each screened slice; superimpose all the boundary matrices onto an accumulation matrix to obtain a voting matrix; and count the maximum value of each column in the voting matrix, determining the point corresponding to the maximum value as a trusted boundary point.

Wherein, the cropping module is further configured to: make a cropping template according to the trusted boundary points; and crop the fetal volume data according to the cropping template to obtain an ultrasound three-dimensional fetal facial contour image.

Wherein, the target area detecting module is further configured to: detect, using a preset classifier, the target area in each frame slice of the fetal volume data in a predetermined direction; and save the target area and its corresponding slice.
Compared with the prior art, the beneficial effects of the present invention are as follows: multi-frame slices of fetal volume data in a predetermined direction are detected to obtain a target region of each frame slice, the target region including the fetal head region; the slices containing the target region are screened out and facial boundary detection is performed on the screened slices to obtain trusted boundary points; and the fetal volume data is cropped according to the trusted boundary points to obtain an ultrasound three-dimensional fetal facial contour image. By cropping the volume data with the trusted boundary points, the invention automatically cuts away the portion occluding the fetal face, simplifying the examiner's operation and improving the success rate of three-dimensional ultrasound imaging.
Brief Description of the Drawings

In order to explain the technical solutions in the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from these drawings and the content of the embodiments without creative effort.
FIG. 1 is a flowchart of an embodiment of the ultrasonic three-dimensional fetal facial contour image processing method provided in an embodiment of the present invention.

FIG. 2 is a schematic diagram of multi-frame slices of fetal volume data in a predetermined direction provided in an embodiment of the present invention.

FIG. 3 is a schematic diagram of the process of correcting the position of the current frame slice target area provided in an embodiment of the present invention.

FIG. 4a is a schematic diagram of a target area before correction provided in an embodiment of the present invention.

FIG. 4b is a schematic diagram of the target area of FIG. 4a after correction.

FIG. 5 is a schematic diagram of the process of performing facial boundary detection on the screened slices to obtain trusted boundary points provided in an embodiment of the present invention.

FIG. 6a is a schematic diagram of a candidate segmentation boundary provided in an embodiment of the present invention.

FIG. 6b is a schematic diagram of the candidate segmentation boundary of FIG. 6a after boundary growth.

FIG. 7 is a schematic diagram of the process of acquiring a plurality of trusted boundary points according to the candidate segmentation boundaries provided in an embodiment of the present invention.

FIG. 8 is a schematic diagram of the process of cropping the fetal volume data according to the trusted boundary points to obtain an ultrasound three-dimensional fetal facial contour image provided in an embodiment of the present invention.

FIG. 9 is a cropping template provided in an embodiment of the present invention.

FIG. 10a is a schematic diagram of a sagittal-direction slice before cropping provided in an embodiment of the present invention.

FIG. 10b is a schematic diagram of the slice of FIG. 10a after cropping.

FIG. 11 is a structural block diagram of an embodiment of the ultrasonic three-dimensional fetal facial contour image processing system provided in an embodiment of the present invention.
Detailed Description

To make the technical problems solved, the technical solutions adopted, and the technical effects achieved by the present invention clearer, the technical solutions of the embodiments of the present invention are described in further detail below with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the present invention.
Please refer to FIG. 1, which is a flowchart of the ultrasonic three-dimensional fetal facial contour image processing method provided in an embodiment of the present invention. As shown in FIG. 1, the method includes:

Step S101: detecting multi-frame slices of the fetal volume data in a predetermined direction to obtain a target area of each frame slice, wherein the target area includes a fetal head area.

The target areas of the multi-frame slices of the fetal volume data in a predetermined direction are detected, so as to acquire the target area of at least one frame slice. In this embodiment, the target area includes the fetal head region, and the predetermined direction may be a section direction of the fetal volume data, such as the plane direction parallel to the transducer array. 3D/4D imaging examiners usually use this plane direction to obtain the sagittal plane of the fetus; the sagittal plane may also be replaced by another characteristic section, such as the coronal plane. The predetermined direction may also be obtained by traversing the plane directions of the three axes of the volume data, or it may be a predetermined direction determined by other algorithms, which are not described one by one here.

In this embodiment, the target region detection uses the Histogram of Oriented Gradients (HOG) feature extraction algorithm and an AdaBoost classifier. The classifier is trained in advance on data of the fetal sagittal head region and configured according to the training result; the preset classifier then automatically locates the sagittal-plane target region on each frame slice of the fetal volume data in the predetermined direction, and the target region and its corresponding slice are saved.
Step S102: screening out the slices containing the target area, and performing facial boundary detection on the screened slices to obtain trusted boundary points.

Not all of the multi-frame slices of the fetal volume data in the predetermined direction necessarily contain the target area, so the slices containing the target area need to be screened out.

Facial boundary detection is performed on the slices containing the target area. A boundary detection operator can be used to find the transition boundary of each frame slice from darker region to brighter region; the boundary detection operators here include, but are not limited to, the Prewitt operator and the Sobel operator.

The gray values and contour shape of the connected region in which each frame slice's transition boundary lies are analysed and compared with pre-stored face gray values and face contour shapes to find the face region of each frame slice; the upper surface boundary of the face region is taken as the face boundary, each face boundary point is voted on according to the face boundaries of all the frame slices, and the points through which the most face boundaries pass are determined to be the trusted boundary points.
Step S103: cropping the fetal volume data according to the trusted boundary points to obtain an ultrasound three-dimensional fetal facial contour image.

A template is made from the trusted boundary points acquired in step S102, and the fetal volume data is cropped according to the template to obtain the image of the ultrasound three-dimensional fetal facial contour. The portion occluding the fetal face is thus cut away automatically, which simplifies the examiner's operation and improves the success rate of three-dimensional ultrasound imaging.

The ultrasound three-dimensional fetal facial contour image processing method of the above embodiment detects multi-frame slices of the fetal volume data in a predetermined direction to acquire the target region of each frame slice, screens out the slices containing the target region, performs facial boundary detection on the screened slices to obtain trusted boundary points, and crops the fetal volume data according to the trusted boundary points to obtain an ultrasound three-dimensional fetal facial contour image. By cropping the volume data with the trusted boundary points, the invention automatically cuts away the portion occluding the fetal face, making the operation simple and fast, reducing errors caused by manual operation, and yielding higher image quality.
In one embodiment, after the step of screening out the slices containing the target area, the method further includes:

when the difference between the position of the current frame slice target area and the position of the adjacent slice target area exceeds a predetermined threshold, correcting the position of the current frame slice target area.

After the slices containing the target area have been screened out, the positions of target areas whose difference from the position of the adjacent slice target area reaches the predetermined threshold need to be corrected, to improve the accuracy of the fetal facial contour detection. The predetermined threshold is not specifically limited here and can be chosen according to the accuracy required by the actual application. In this embodiment, an adjacent slice is a slice whose position is adjacent to that of the current frame slice. For example, as shown in FIG. 2, in the slice sequence along the Z axis, a region of interest (ROI) can be detected in the slices numbered 1, 3, 5 and 6; the neighbouring ROIs of the ROI in slice 3 are then the ROIs of slices 1 and 5.
优选地,在一个实施例中,如图3所示,对当前帧切片目标区域的位置进行校正的步骤包括:Preferably, in one embodiment, as shown in FIG. 3, the step of correcting the position of the current frame slice target area includes:
步骤S301:遍历所有筛选出的切片,得到<帧号,目标区域>序列。Step S301: Traverse all the selected slices to obtain a <frame number, target area> sequence.
步骤S302:分别求出<帧号,目标区域>序列中各帧切片目标区域的位置与其邻近切片目标区域的位置的偏差。Step S302: Find the deviation between the position of each frame slice target area in the <frame number, target area> sequence and the position of the adjacent slice target area.
在本实施例中,偏差包括各帧切片目标区域的中心点与其邻近切片目标区 域的中心点的X坐标偏差和Y坐标偏差。将得到的X坐标偏差和Y坐标偏差分别保存在<帧号,与相邻切片目标区域的中心点的X坐标偏差>和<帧号,与相邻切片目标区域的中心点的Y坐标偏差>的序列中。In this embodiment, the deviation includes a center point of each frame slice target area and a neighboring slice target area thereof. The X coordinate deviation and the Y coordinate deviation of the center point of the domain. The obtained X coordinate deviation and Y coordinate deviation are respectively stored in the <frame number, the X coordinate deviation from the center point of the adjacent slice target area> and <the frame number, and the Y coordinate deviation from the center point of the adjacent slice target area> In the sequence.
步骤S303:计算偏差的均值。Step S303: Calculate the mean value of the deviation.
在本实施例中,均值包括X坐标偏差的均值和Y坐标偏差的均值。根据上述步骤得到的<帧号,与相邻切片的目标区域的中心点的X坐标偏差>和<帧号,与相邻切片的目标区域的中心点的Y坐标偏差>两个序列,分别剔除两个偏差序列中的最大值,然后计算上述两个序列的X坐标偏差的均值和Y坐标偏差的均值,即得到各帧切片目标区域与其邻近切片目标区域偏差的均值。In the present embodiment, the mean value includes the mean value of the X coordinate deviation and the mean value of the Y coordinate deviation. The <frame number obtained according to the above steps, the X coordinate deviation from the center point of the target area of the adjacent slice> and the <frame number, and the Y coordinate deviation from the center point of the target area of the adjacent slice> are two sequences, respectively, are eliminated. The maximum value of the two deviation sequences is then calculated, and the mean value of the X coordinate deviation and the mean value of the Y coordinate deviation of the two sequences are calculated, that is, the mean value of the deviation between the target region of each frame slice and the target region of the adjacent slice is obtained.
Step S304: when the deviation between the position of the target area in the current frame slice and the position of the target area in its adjacent slice is greater than the mean, replace the center-point coordinates of the current frame slice target area with the center-point coordinates of the target area of a target slice. The target slice is the slice closest to the current frame slice whose target-area position deviates from the target areas of its own adjacent slices by less than the mean.
In this embodiment, there are three cases:
(1) When the X-coordinate deviation between the center point of the current frame slice target area and the center point of the adjacent slice target area is greater than the mean X-coordinate deviation, the X coordinate of the center point of the current frame slice target area is replaced with the X coordinate of the center point of the target slice target area.
(2) When the Y-coordinate deviation between the center point of the current frame slice target area and the center point of the adjacent slice target area is greater than the mean Y-coordinate deviation, the Y coordinate of the center point of the current frame slice target area is replaced with the Y coordinate of the center point of the target slice target area.
(3) When the X-coordinate deviation between the center points is greater than the mean X-coordinate deviation and the Y-coordinate deviation is greater than the mean Y-coordinate deviation, both the X coordinate and the Y coordinate of the center point of the current frame slice target area are replaced with the X coordinate and the Y coordinate of the center point of the target slice target area. As shown in FIG. 4a and FIG. 4b, the small boxes represent target areas and the large squares represent slices; FIG. 4a and FIG. 4b show a facial target area before and after correction, respectively, in an embodiment of the present invention.
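For illustration only, the correction of steps S301 to S304 can be sketched in Python with numpy as follows. The sketch assumes each detected target area is reduced to its center point (x, y); the function and variable names are illustrative and not taken from the patent.

```python
import numpy as np

def correct_roi_centers(frames, centers):
    """Sketch of S301-S304: correct outlier ROI centers against their neighbours.

    frames  : sorted frame numbers of the slices in which an ROI was detected
    centers : matching list of (x, y) ROI center coordinates
    """
    centers = np.asarray(centers, dtype=float)
    if len(centers) < 3:
        return centers
    # S302: deviation of each ROI center from the neighbouring ROI centers
    dx = np.abs(np.diff(centers[:, 0]))
    dy = np.abs(np.diff(centers[:, 1]))
    # S303: drop the largest deviation from each sequence, then take the mean
    mean_dx = np.delete(dx, dx.argmax()).mean()
    mean_dy = np.delete(dy, dy.argmax()).mean()

    corrected = centers.copy()
    for i in range(len(centers)):
        neigh = [j for j in (i - 1, i + 1) if 0 <= j < len(centers)]
        off_x = any(abs(centers[i, 0] - centers[j, 0]) > mean_dx for j in neigh)
        off_y = any(abs(centers[i, 1] - centers[j, 1]) > mean_dy for j in neigh)
        if not (off_x or off_y):
            continue
        # S304: use the closest slice whose own deviations are all below the mean
        for j in sorted(range(len(centers)), key=lambda k: abs(frames[k] - frames[i])):
            if j == i:
                continue
            jn = [k for k in (j - 1, j + 1) if 0 <= k < len(centers)]
            ok_x = all(abs(centers[j, 0] - centers[k, 0]) <= mean_dx for k in jn)
            ok_y = all(abs(centers[j, 1] - centers[k, 1]) <= mean_dy for k in jn)
            if ok_x and ok_y:
                if off_x:
                    corrected[i, 0] = centers[j, 0]
                if off_y:
                    corrected[i, 1] = centers[j, 1]
                break
    return corrected
```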
In one embodiment, as shown in FIG. 5, the step of performing facial boundary detection on the selected slices to obtain trusted boundary points includes:
Step S501: obtain, in each selected slice, the transition boundary from a darker region to a brighter region.
A boundary detection operator is used to find the transition boundary from the darker region to the brighter region in each frame slice; suitable boundary detection operators include, but are not limited to, the Prewitt operator and the Sobel operator.
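A minimal sketch of step S501 using OpenCV's Sobel operator is given below. The blur kernel, the gradient threshold and the assumption that depth increases from the top of the slice downward are illustrative choices, not values specified in the patent.

```python
import cv2
import numpy as np

def dark_to_bright_transition(slice_img, grad_thresh=30):
    """Mark dark-to-bright transitions in a 2-D grayscale slice (uint8)."""
    img = cv2.GaussianBlur(slice_img, (5, 5), 0)       # suppress speckle noise
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)     # vertical intensity gradient
    # a positive gradient means brightness increases downward, i.e. dark-to-bright
    return (gy > grad_thresh).astype(np.uint8)
```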
Step S502: determine the facial region of each frame slice according to the gray values and contour shape of the connected region containing the transition boundary, and take the upper surface boundary of the facial region as the candidate segmentation boundary, as shown in FIG. 6a.
In this embodiment, the gray values and contour shape of the connected region containing the transition boundary of each frame slice are analyzed and compared with pre-stored facial gray values and facial contour shapes to identify the facial region of each frame slice; the upper surface boundary of the facial region is taken as the candidate segmentation boundary.
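One possible realization of step S502 is sketched below: connected regions of the transition map are filtered by area and mean gray value (stand-ins for the pre-stored facial gray values and contour shape), and the topmost pixel of the chosen region in each column is taken as the candidate segmentation boundary. The thresholds are assumptions.

```python
import cv2
import numpy as np

def candidate_boundary(slice_img, transition_mask, min_area=500, gray_range=(60, 200)):
    """Return one boundary row index per column (-1 where no boundary was found)."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(transition_mask)
    best, best_area = 0, 0
    for lab in range(1, n):
        area = stats[lab, cv2.CC_STAT_AREA]
        mean_gray = slice_img[labels == lab].mean()
        # keep regions whose size and brightness match the assumed facial profile
        if area >= min_area and gray_range[0] <= mean_gray <= gray_range[1] and area > best_area:
            best, best_area = lab, area
    boundary = np.full(slice_img.shape[1], -1, dtype=int)
    if best == 0:
        return boundary
    rows, cols = np.nonzero(labels == best)
    for c in np.unique(cols):
        boundary[c] = rows[cols == c].min()   # topmost pixel = upper surface boundary
    return boundary
```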
Step S503: obtain a plurality of trusted boundary points from the candidate segmentation boundaries obtained in the previous step.
Each facial boundary point is voted on according to the candidate segmentation boundaries of the frame slices, and the points crossed by the most candidate segmentation boundaries are determined to be trusted boundary points. In one embodiment, before the step of obtaining the plurality of trusted boundary points from the candidate segmentation boundaries of the frame slices, the method further includes a boundary-growing step.
Because the start and end points of the candidate segmentation boundaries of different frame slices may not coincide, the subsequent extraction of the trusted boundary would be affected. Boundary growing therefore yields a complete facial boundary that runs through the slice from left to right, improving the accuracy of the boundary detection.
In this embodiment, boundary growing is performed on the gradient image. Taking the left and right end points of the candidate segmentation boundary as growth points in the horizontal direction, boundary points are searched for on the gradient image in the neighborhood of the current growth point, and each boundary point found is added to the current candidate segmentation boundary, yielding the complete facial boundary of each frame slice. The effect before and after boundary growing is shown in FIG. 6a and FIG. 6b.
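A sketch of the boundary-growing step on the gradient image is shown below; the neighborhood radius and the per-column boundary representation (one row index per column, -1 for missing) are assumptions carried over from the previous sketches.

```python
import numpy as np

def grow_boundary(boundary, grad_img, radius=2):
    """Extend a per-column boundary to the left and right borders using the gradient image."""
    grown = boundary.copy()
    cols = np.nonzero(grown >= 0)[0]
    if cols.size == 0:
        return grown
    h = grad_img.shape[0]
    # grow rightward from the right-most known point, then leftward from the left-most one
    for step, start in ((1, cols.max()), (-1, cols.min())):
        row = grown[start]
        c = start + step
        while 0 <= c < grown.shape[0]:
            lo, hi = max(0, row - radius), min(h, row + radius + 1)
            # strongest gradient response near the previous boundary row becomes
            # the boundary point of the new column
            row = lo + int(np.argmax(grad_img[lo:hi, c]))
            grown[c] = row
            c += step
    return grown
```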
In one embodiment, as shown in FIG. 7, the step of obtaining a plurality of trusted boundary points from the candidate segmentation boundaries includes:
Step S701: from the candidate segmentation boundaries, construct a boundary matrix in one-to-one correspondence with each selected slice.
Every boundary matrix has the same dimensions; the specific size can be chosen as appropriate. In this embodiment, background points of the boundary matrix are set to 0 and boundary points are set to 1. Other values may of course be used, as long as background points and boundary points can be distinguished. The boundary points are the points on the candidate segmentation boundary of the corresponding frame slice.
Step S702: superimpose all the boundary matrices onto an accumulation matrix to obtain a voting matrix.
In this embodiment, the accumulation matrix may be a zero matrix with the same dimensions as the boundary matrices. All the boundary matrices are added onto one accumulation matrix, so every candidate segmentation boundary that passes through a given point casts a vote for that point.
Step S703: find the maximum value in each column of the voting matrix and determine the point corresponding to that maximum as a trusted boundary point.
A candidate segmentation boundary that passes through a point votes for it; background points are 0 and boundary points are 1, so the more 1s accumulated at an element (point), the more boundaries pass through that point. Taking the maximum of each column therefore gives, for every column, the point crossed by the most candidate segmentation boundaries, i.e. the point with the highest vote count. That point is determined to be the trusted boundary point of the column, and the trusted boundary points of all columns together form the trusted boundary of the three-dimensional ultrasonic fetal facial contour.
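Steps S701 to S703 can be sketched compactly, reusing the per-column boundary representation from the sketches above; the accumulation matrix is a zero matrix of the slice size, and the names are illustrative.

```python
import numpy as np

def trusted_boundary(boundaries, height, width):
    """Vote all per-column boundaries into an accumulation matrix and keep, for
    every column, the row that received the most votes (sketch of S701-S703)."""
    votes = np.zeros((height, width), dtype=np.int32)   # accumulation matrix
    for b in boundaries:                                 # one boundary per selected slice
        cols = np.nonzero(b >= 0)[0]
        votes[b[cols], cols] += 1                        # fold the boundary matrix in
    # column-wise maximum of the voting matrix gives the trusted boundary point per column
    return votes.argmax(axis=0)
```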
In one embodiment, as shown in FIG. 8, the step of cropping the fetal volume data according to the trusted boundary points to obtain the three-dimensional ultrasonic fetal facial contour image includes:
Step S801: create a cropping template from the trusted boundary points.
The main operation in cropping the fetal volume data is to create the cropping template: the voted trusted boundary points, the lower-right corner of the image and the lower-left corner of the image are taken as a closed region and filled. FIG. 9 shows a cropping template provided in an embodiment of the present invention.
Step S802: crop the fetal volume data according to the cropping template to obtain the three-dimensional ultrasonic fetal facial contour image.
The cropping template obtained in the previous step is ANDed with each selected frame slice: the data of each frame slice that falls within the white region of the cropping template is kept, and the data in the black region is discarded. FIG. 10a and FIG. 10b show a sagittal slice before and after cropping, respectively, in an embodiment of the present invention. Cropping the fetal volume data with the cropping template yields an image of the fetal facial contour; the portion occluding the fetal face is cropped away automatically, which simplifies the operation and improves the imaging rate.
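Steps S801 and S802 can be sketched as follows using OpenCV's polygon fill; the trusted boundary is again assumed to be one row index per column, and the AND operation is expressed as a masked copy.

```python
import cv2
import numpy as np

def crop_with_template(slices, trusted_rows):
    """Build the cropping template from the trusted boundary points and the two
    lower image corners, then AND it with every selected slice."""
    h, w = slices[0].shape
    polygon = [(c, int(r)) for c, r in enumerate(trusted_rows)]
    polygon += [(w - 1, h - 1), (0, h - 1)]              # lower-right, lower-left corners
    template = np.zeros((h, w), dtype=np.uint8)
    cv2.fillPoly(template, [np.array(polygon, dtype=np.int32)], 255)
    # keep the data under the white region of the template, discard the rest
    return [np.where(template == 255, s, 0) for s in slices]
```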
In one embodiment, after the step of cropping the fetal volume data according to the trusted boundary points to obtain the fetal facial contour image, the method further includes: performing three-dimensional rendering on the cropped fetal volume data.
The cropped volume data is rendered; the rendering may use a well-known three-dimensional rendering method such as ray casting, yielding a more intuitive image of the fetal facial contour.
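For completeness, a very small front-to-back compositing sketch of the ray-casting idea is given below; the constant opacity and the orthographic rays along the first axis of the volume are simplifying assumptions, and a production renderer would add interpolation and gradient shading.

```python
import numpy as np

def raycast_render(volume, opacity_per_sample=0.05):
    """Orthographic ray casting along axis 0 with front-to-back alpha compositing."""
    vol = volume.astype(np.float32)
    vol /= max(float(vol.max()), 1.0)                 # normalise intensities to [0, 1]
    color = np.zeros(vol.shape[1:], dtype=np.float32)
    alpha = np.zeros(vol.shape[1:], dtype=np.float32)
    for depth in range(vol.shape[0]):                 # march every ray one step
        sample = vol[depth]
        a = sample * opacity_per_sample * (1.0 - alpha)
        color += a * sample                           # intensity used as the "colour"
        alpha += a
        if alpha.min() > 0.98:                        # early ray termination
            break
    return np.clip(color * 255.0, 0, 255).astype(np.uint8)
```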
In this embodiment, face detection is performed on the selected frame slices containing the target area to obtain the candidate segmentation boundary of each frame slice. Each facial boundary point is voted on using the candidate segmentation boundaries of the slices, and the points crossed by the most candidate segmentation boundaries, the trusted boundary points, are obtained. To improve the accuracy of the boundary detection, boundary growing may also be performed after the candidate segmentation boundaries are obtained. A cropping template is created from the trusted boundary points and used to crop the fetal volume data automatically, so the portion occluding the fetal face is cropped away without manual interaction. The operation is simple and fast, errors introduced by manual operation are reduced, and the image quality is higher.
The following is an embodiment of the ultrasonic three-dimensional fetal facial contour image processing system provided in the embodiments of the present invention. The system embodiment is implemented on the basis of the above embodiments of the ultrasonic three-dimensional fetal facial contour image processing method; for details not covered in the description of the system, please refer to the foregoing method embodiments.
Please refer to FIG. 11, which is a structural block diagram of an embodiment of the ultrasonic three-dimensional fetal facial contour image processing system provided in the embodiments of the present invention. As shown in the figure, the system includes:
a target area detection module 111, configured to detect multiple frame slices of the fetal volume data in a predetermined direction to obtain the target area of each frame slice, wherein the target area includes the fetal head region;
a trusted boundary point acquisition module 112, configured to select the slices containing the target area and perform facial boundary detection on the selected slices to obtain trusted boundary points; and
a cropping module 113, configured to create a cropping template from the trusted boundary points and to crop the fetal volume data according to the cropping template, obtaining the ultrasonic three-dimensional fetal facial contour image.
In one embodiment, the system further includes a correction module. The correction module is configured to correct the position of the target area in the current frame slice when the difference between the position of the current frame slice target area and the position of the adjacent slice target area exceeds a predetermined threshold.
The correction module is further configured to: traverse all the selected slices to obtain a <frame number, target area> sequence; compute, for each frame in the <frame number, target area> sequence, the deviation between the position of its slice target area and the position of the adjacent slice target area; calculate the mean of the deviations; and, when the deviation between the position of the current frame slice target area and the position of the adjacent slice target area is greater than the mean, replace the center-point coordinates of the current frame slice target area with the center-point coordinates of the target slice target area.
In one embodiment, the trusted boundary point acquisition module is further configured to: obtain, in each selected slice, the transition boundary from a darker region to a brighter region; determine the facial region of each selected slice according to the gray values and contour shape of the connected region containing the transition boundary, and take the upper surface boundary of the facial region as the candidate segmentation boundary; and obtain a plurality of trusted boundary points from the candidate segmentation boundaries.
In one embodiment, the trusted boundary point acquisition module includes a boundary growing unit configured to perform boundary growing.
In one embodiment, the trusted boundary point acquisition module is further configured to: construct, from the candidate segmentation boundaries, a boundary matrix in one-to-one correspondence with each selected slice; superimpose all the boundary matrices onto an accumulation matrix to obtain a voting matrix; and find the maximum value in each column of the voting matrix and determine the point corresponding to that maximum as a trusted boundary point.
In one embodiment, the target area detection module 111 is further configured to: detect the target area in each frame slice of the fetal volume data in the predetermined direction using a preset classifier; and save the target area and its corresponding slice.
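As an illustration only, the detection module could be sketched with a pre-trained sliding-window classifier; the patent does not specify the classifier type, and the OpenCV cascade file named below is a hypothetical placeholder.

```python
import cv2

def detect_head_rois(slices, model_path="fetal_head_cascade.xml"):
    """Run a pre-trained classifier (hypothetical cascade file) on every slice and
    keep only the slices in which a fetal-head target area is found."""
    classifier = cv2.CascadeClassifier(model_path)
    detections = {}                       # frame index -> (x, y, w, h) target area
    for idx, img in enumerate(slices):    # img: 8-bit grayscale slice
        rois = classifier.detectMultiScale(img, scaleFactor=1.1, minNeighbors=3)
        if len(rois) > 0:
            detections[idx] = max(rois, key=lambda r: r[2] * r[3])  # largest box
    return detections
```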
In one embodiment, the system further includes a rendering module configured to perform three-dimensional rendering on the cropped fetal volume data.
The ultrasonic three-dimensional fetal facial contour image processing system of this embodiment is used to implement the foregoing ultrasonic three-dimensional fetal facial contour image processing method, so the specific implementations of the system can be found in the foregoing method embodiments. For example, the target area detection module 111, the trusted boundary point acquisition module 112 and the cropping module 113 implement steps S101, S102 and S103 of the above method, respectively; their specific implementations can therefore refer to the descriptions of the corresponding embodiments and are not repeated here.
The ultrasonic three-dimensional fetal facial contour image processing system provided in this embodiment performs face detection on the selected slices containing the target area to obtain the facial boundary of each frame slice, corrects the position of any target area whose position differs from that of the target area in adjacent slices by more than the set deviation to improve detection accuracy, and then performs facial boundary detection on the target area of each slice. The facial boundary points are voted on to obtain the points crossed by the most facial boundaries, the trusted boundary points, and the fetal volume data is cropped using the trusted boundary points. The portion occluding the fetal face is thus cropped automatically, which simplifies the operator's work and improves the three-dimensional ultrasound imaging rate.

The technical principles of the present invention have been described above with reference to specific embodiments. These descriptions are intended only to explain the principles of the present invention and are not to be construed in any way as limiting the scope of protection of the present invention. Based on the explanations herein, those skilled in the art can arrive at other specific embodiments of the present invention without inventive effort, and all such embodiments fall within the scope of protection of the present invention.

Claims (10)

  1. An ultrasonic three-dimensional fetal facial contour image processing method, comprising:
    detecting multiple frame slices of fetal volume data in a predetermined direction to obtain a target area of each frame slice, wherein the target area includes a fetal head region;
    selecting the slices containing the target area, and performing facial boundary detection on the selected slices to obtain trusted boundary points; and
    cropping the fetal volume data according to the trusted boundary points to obtain an ultrasonic three-dimensional fetal facial contour image.
  2. The method according to claim 1, wherein after the step of selecting the slices containing the target area, the method further comprises:
    when the difference between the position of the target area of the current frame slice and the position of the target area of an adjacent slice exceeds a predetermined threshold, correcting the position of the target area of the current frame slice.
  3. The method according to claim 2, wherein the step of correcting the position of the target area of the current frame slice comprises:
    traversing all the selected slices to obtain a <frame number, target area> sequence;
    computing, for each frame in the <frame number, target area> sequence, the deviation between the position of its slice target area and the position of the target area of its adjacent slices;
    calculating the mean of the deviations; and
    when the deviation between the position of the target area of the current frame slice and the position of the target area of its adjacent slice is greater than the mean, replacing the center-point coordinates of the target area of the current frame slice with the center-point coordinates of the target area of a target slice.
  4. The method according to claim 1, wherein the step of performing facial boundary detection on the selected slices to obtain trusted boundary points comprises:
    obtaining, in each selected slice, a transition boundary from a darker region to a brighter region;
    determining a facial region of each selected slice according to the gray values and contour shape of the connected region containing the transition boundary, and taking the upper surface boundary of the facial region as a candidate segmentation boundary; and
    obtaining a plurality of trusted boundary points according to the candidate segmentation boundaries.
  5. The method according to claim 4, wherein before the step of obtaining a plurality of trusted boundary points according to the candidate segmentation boundaries, the method further comprises a boundary growing step.
  6. The method according to claim 4, wherein the step of obtaining a plurality of trusted boundary points according to the candidate segmentation boundaries comprises:
    constructing, according to the candidate segmentation boundaries, a boundary matrix in one-to-one correspondence with each selected slice;
    superimposing all the boundary matrices onto an accumulation matrix to obtain a voting matrix; and
    finding the maximum value of each column of the voting matrix, and determining the point corresponding to the maximum value as a trusted boundary point.
  7. The method according to claim 1, wherein the step of cropping the fetal volume data according to the trusted boundary points to obtain the ultrasonic three-dimensional fetal facial contour image comprises:
    creating a cropping template according to the trusted boundary points; and
    cropping the fetal volume data according to the cropping template to obtain the ultrasonic three-dimensional fetal facial contour image.
  8. The method according to claim 1, wherein the step of detecting multiple frame slices of the fetal volume data in the predetermined direction to obtain the target area of each frame slice comprises:
    detecting, with a preset classifier, the target area of each frame slice of the fetal volume data in the predetermined direction; and
    saving the target area and its corresponding slice.
  9. An ultrasonic three-dimensional fetal facial contour image processing system, comprising:
    a target area detection module, configured to detect multiple frame slices of fetal volume data in a predetermined direction to obtain a target area of each frame slice, wherein the target area includes a fetal head region;
    a trusted boundary point acquisition module, configured to select the slices containing the target area and perform facial boundary detection on the selected slices to obtain trusted boundary points; and
    a cropping module, configured to crop the fetal volume data according to the trusted boundary points to obtain an ultrasonic three-dimensional fetal facial contour image.
  10. The system according to claim 9, further comprising:
    a correction module, configured to correct the position of the target area of the current frame slice when the difference between the position of the target area of the current frame slice and the position of the target area of an adjacent slice exceeds a predetermined threshold.
PCT/CN2017/093457 2016-11-22 2017-07-19 Three-dimensional ultrasonic fetal face profile image processing method and system WO2018095058A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611055976.4A CN106725593B (en) 2016-11-22 2016-11-22 Ultrasonic three-dimensional fetal face contour image processing method and system
CN201611055976.4 2016-11-22

Publications (1)

Publication Number Publication Date
WO2018095058A1 true WO2018095058A1 (en) 2018-05-31

Family

ID=58910667

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/093457 WO2018095058A1 (en) 2016-11-22 2017-07-19 Three-dimensional ultrasonic fetal face profile image processing method and system

Country Status (2)

Country Link
CN (1) CN106725593B (en)
WO (1) WO2018095058A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106725593B (en) * 2016-11-22 2020-08-11 深圳开立生物医疗科技股份有限公司 Ultrasonic three-dimensional fetal face contour image processing method and system
CN108322605A (en) * 2018-01-30 2018-07-24 上海摩软通讯技术有限公司 Intelligent terminal and its face unlocking method and system
CN109584368B (en) * 2018-10-18 2021-05-28 中国科学院自动化研究所 Method and device for constructing three-dimensional structure of biological sample
CN111281430B (en) * 2018-12-06 2024-02-23 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic imaging method, device and readable storage medium
CN109727240B (en) * 2018-12-27 2021-01-19 深圳开立生物医疗科技股份有限公司 Method and related device for stripping shielding tissues of three-dimensional ultrasonic image
CN110706222B (en) * 2019-09-30 2022-04-12 杭州依图医疗技术有限公司 Method and device for detecting bone region in image
CN111568471B (en) * 2020-05-20 2021-01-01 杨梅 Full-moon formed fetus shape analysis system
CN116687442A (en) * 2023-08-08 2023-09-05 汕头市超声仪器研究所股份有限公司 Fetal face imaging method based on three-dimensional volume data

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1676104A (en) * 2004-04-01 2005-10-05 株式会社美蒂森 Apparatus and method for forming a 3D ultrasound image
US20090030314A1 (en) * 2007-07-23 2009-01-29 Sotaro Kawae Ultrasonic imaging apparatus and image processing apparatus
CN102283674A (en) * 2010-04-15 2011-12-21 通用电气公司 Method and system for determining a region of interest in ultrasound data
US20120078102A1 (en) * 2010-09-24 2012-03-29 Samsung Medison Co., Ltd. 3-dimensional (3d) ultrasound system using image filtering and method for operating 3d ultrasound system
CN104939864A (en) * 2014-03-28 2015-09-30 日立阿洛卡医疗株式会社 Diagnostic image generation apparatus and diagnostic image generation method
CN106725593A (en) * 2016-11-22 2017-05-31 深圳开立生物医疗科技股份有限公司 Ultrasonic three-dimensional fetus face contour image processing method system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102949206B (en) * 2011-08-26 2015-12-02 深圳迈瑞生物医疗电子股份有限公司 A kind of method of 3-D supersonic imaging and device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112638267A (en) * 2018-11-02 2021-04-09 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic imaging method and system, storage medium, processor and computer device
CN112638267B (en) * 2018-11-02 2023-10-27 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic imaging method and system, storage medium, processor and computer device
CN112155603A (en) * 2020-09-24 2021-01-01 广州爱孕记信息科技有限公司 Weighted value determination method and device for fetal structural features

Also Published As

Publication number Publication date
CN106725593B (en) 2020-08-11
CN106725593A (en) 2017-05-31


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17874549

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 09.10.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17874549

Country of ref document: EP

Kind code of ref document: A1