CN109087357B - Scanning positioning method and device, computer equipment and computer readable storage medium - Google Patents


Info

Publication number: CN109087357B
Authority: CN (China)
Prior art keywords: plane, distance, image data, three-dimensional image, historical
Legal status: Active
Application number: CN201810835317.5A
Other languages: Chinese (zh)
Other versions: CN109087357A
Inventors: 宋燕丽, 吴迪嘉, 滕艳群, 徐健
Current Assignee: Shanghai United Imaging Intelligent Healthcare Co Ltd
Original Assignee: Shanghai United Imaging Intelligent Healthcare Co Ltd
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority: CN201810835317.5A (patent CN109087357B)
Priority: CN202110943320.0A (patent application CN113610923A)
Publication of CN109087357A
Application granted
Publication of CN109087357B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30048Heart; Cardiac

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses a scanning positioning method, a scanning positioning device, computer equipment and a computer-readable storage medium. The method comprises the following steps: acquiring three-dimensional image data of a part to be positioned; inputting the three-dimensional image data into a pre-trained position point feature model and obtaining the position point feature data output by the position point feature model; and determining plane fitting points according to the position point feature data, and determining a plane equation of the plane to be positioned according to the plane fitting points. The method provided by the embodiment of the invention can position the scanning plane using only the three-dimensional image data of the part to be positioned and is applicable when different scanning parts are positioned by different scanning modalities. The positioning of the scanning plane is therefore faster and more accurate, the time required by the scanning process is reduced, the accuracy of the scanned image is improved, the inaccuracy of positioning based on anatomical landmark points is overcome, and the scanning plane can still be positioned well when the pre-scan range is incomplete.

Description

Scanning positioning method and device, computer equipment and computer readable storage medium
Technical Field
The embodiment of the invention relates to the technical field of medical image scanning, in particular to a scanning positioning method, a scanning positioning device, computer equipment and a computer readable storage medium.
Background
Medical imaging examinations play a very important role as an aid in the clinical diagnosis of diseases, and when medical scanning is performed, fixed-position/planar scanning of a target region of a subject is often required to assist in the diagnosis, treatment, and the like of diseases.
For example, in a conventional magnetic resonance scanning process, a physician manually calibrates a reference positioning line after identifying an anatomical location through a pre-scan image, and applies a scanning sequence to the anatomical location according to the reference positioning line to perform an accurate scan.
Taking cardiac magnetic resonance scanning as an example, cardiac magnetic resonance localization requires the patient to hold their breath at least five times, namely for the three-plane localizer, the multi-slice axial stack, the pseudo two-chamber view, the pseudo four-chamber view and the multi-slice short-axis stack, and the breath-held scanning alone takes at least 4-5 minutes. In between, the physician must perform manual positioning about six times, and each positioning result affects the subsequent scans. If the technician lacks experience, repeated scanning and repeated positioning may be needed, the positioning takes a long time, and the final positioning result may not meet the requirements of clinical diagnosis. Meanwhile, manually positioned scans are not consistent: for the same subject, the positioning results of different operators, or of the same operator at different times, may differ greatly. The existing scanning positioning approach is therefore inefficient, prolongs the whole scanning process, and, owing to manual and experience differences, may yield inconsistent identification and calibration results, so that the precision of the scanning parameters cannot be guaranteed.
At present, the scanning plane can also be positioned according to anatomical position points of the part to be positioned. Specifically, each anatomical position is detected on a 3D scout image of the part to be positioned, and the scanning plane is determined from fixed anatomical landmark points. For example, the positions of the mitral valve point, the apex, the outflow point from the left ventricle to the aorta, the right-ventricle maximum-diameter point, and the like of the heart region are determined from the 3D scout image, and a plane formed by these specific positions is taken as the scanning plane. For instance, the short-axis plane is perpendicular to the line connecting the mitral valve point and the apex and passes through the center of the left ventricle, while the four-chamber plane passes through the line connecting the mitral valve point and the apex and through the right-ventricle maximum-diameter point. However, these points are approximate locations summarized from physicians' experience; during an actual scan, the plane determined from the anatomical position points may not be the optimal plane, and the range of some pre-scanned images may be insufficient so that some planned position points are missing, making the positioning of the scanning plane inaccurate.
Disclosure of Invention
The embodiment of the invention provides a scanning positioning method, a scanning positioning device, computer equipment and a computer readable storage medium, which are used for realizing the rapid and accurate positioning of a scanning plane.
In a first aspect, an embodiment of the present invention provides a scanning and positioning method, including:
acquiring three-dimensional image data of a part to be positioned;
inputting the three-dimensional image data into a position point feature model trained in advance, and obtaining each position point feature data output by the position point feature model;
and extracting a plane fitting point from each position point according to the position point characteristic data, and determining a plane equation to be positioned according to the plane fitting point.
In a second aspect, an embodiment of the present invention further provides a scanning and positioning apparatus, including:
the data acquisition module is used for acquiring three-dimensional image data of a part to be positioned;
the characteristic data module is used for inputting the three-dimensional image data into a position point characteristic model trained in advance and obtaining each position point characteristic data output by the position point characteristic model;
and the plane determining module is used for determining a plane fitting point according to the characteristic data of each position point and determining a plane equation to be positioned according to the plane fitting point.
In a third aspect, an embodiment of the present invention further provides a computer device, where the computer device includes:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the scanning positioning method provided by any embodiment of the invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the scan positioning method provided in any embodiment of the present invention.
The embodiment of the invention acquires three-dimensional image data of a part to be positioned, inputs the three-dimensional image data into a pre-trained position point feature model to obtain the position point feature data output by the position point feature model, determines plane fitting points according to the position point feature data, and determines a plane equation of the plane to be positioned according to the plane fitting points. The scanning positioning method provided by the embodiment of the invention can position the scanning plane of the part to be positioned using only its three-dimensional image data and is applicable when different scanning parts are positioned by different scanning modalities. The positioning of the scanning plane is faster and more accurate, which reduces the time required by the scanning process, improves the accuracy of the scanned image, overcomes the inaccuracy of positioning based on anatomical landmark points, and still allows the scanning plane to be positioned well when the pre-scan range is incomplete.
Drawings
Fig. 1 is a flowchart of a scan positioning method according to an embodiment of the present invention;
fig. 2a is a flowchart of a scanning positioning method according to a second embodiment of the present invention;
FIG. 2b is a schematic diagram of scanning and positioning using a distance calculation model in the scanning and positioning method according to the embodiment of the present invention;
fig. 3a is a flowchart of a scan positioning method according to a third embodiment of the present invention;
FIG. 3b is a diagram illustrating a process of training a distance field computation model in a scanning localization method according to a third embodiment of the invention;
FIG. 3c is a schematic diagram illustrating a distance computation model trained in the scanning and positioning method according to an embodiment of the present invention;
fig. 4a is a flowchart of a scanning positioning method according to a fourth embodiment of the present invention;
fig. 4b is a schematic diagram of scanning and positioning using a surface segmentation model in the scanning and positioning method according to the embodiment of the present invention;
fig. 5a is a flowchart of a scan positioning method according to a fifth embodiment of the present invention;
FIG. 5b is a schematic diagram illustrating a face segmentation model trained in the scan positioning method according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a scanning and positioning apparatus according to a sixth embodiment of the present invention;
fig. 7 is a schematic structural diagram of a computer device according to a seventh embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a scan positioning method according to an embodiment of the present invention, which is applicable to a situation when positioning and scanning each medical imaging region by various scanning modalities, especially to a situation when scanning and positioning a heart using a cardiac magnetic resonance imaging system. The method may be performed by a scanning and positioning apparatus, which may be implemented in software and/or hardware, for example, the scanning and positioning apparatus may be configured in a computer device. As shown in fig. 1, the method specifically includes:
and S110, acquiring three-dimensional image data of the part to be positioned.
In this embodiment, the scan plane is positioned by three-dimensional image data of the portion to be positioned. Optionally, the three-dimensional image data of the portion to be positioned is 3D scout data formed by prescanning the portion to be positioned. Illustratively, when the site to be located is a heart region, the three-dimensional image data is 3D scout data of the heart region.
Optionally, the obtaining method of the three-dimensional image data of the to-be-positioned portion includes: acquiring multilayer image data in the same direction in a scanning area, carrying out image reconstruction on the multilayer image data to form three-dimensional image data of the scanning area, and extracting the three-dimensional image data of a part to be positioned from the three-dimensional image data of the scanning area by using a preset image extraction algorithm. Optionally, an automatic threshold algorithm may be adopted to segment the three-dimensional image data of the scanning area, the three-dimensional image data of the scanning area is segmented into foreground data and background data, and then a morphological method is adopted to extract the three-dimensional image data of the part to be positioned.
Illustratively, multilayer scanning can be performed along the z-axis direction to obtain multilayer image data in the z-axis direction, image reconstruction is performed on the multilayer image data in the z-axis direction by using an image reconstruction algorithm to obtain three-dimensional image data of a scanning area, and then three-dimensional image data of a part to be positioned is extracted from the three-dimensional image data of the scanning area. When the part to be positioned is a heart area, the automatic threshold segmentation algorithm can be adopted to segment the foreground and the background of the three-dimensional image data of the scanning area, and then the three-dimensional image data of the heart area is extracted by adopting a morphological method.
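As an illustration only, and not as part of the claimed embodiments, a minimal sketch of the foreground extraction described above might look as follows; the use of NumPy/SciPy/scikit-image, the Otsu threshold, and the structuring-element sizes are assumptions of this sketch rather than requirements of the method.

```python
# Illustrative sketch: automatic threshold segmentation into foreground and
# background, followed by simple morphology to isolate the part to be positioned.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def extract_part_mask(scan_volume: np.ndarray) -> np.ndarray:
    """Return a binary mask of the part to be positioned from a 3D scout volume."""
    # 1. Automatic threshold segmentation into foreground / background.
    threshold = threshold_otsu(scan_volume)
    foreground = scan_volume > threshold

    # 2. Morphological clean-up: close small holes, remove small islands.
    foreground = ndimage.binary_closing(foreground, structure=np.ones((3, 3, 3)))
    foreground = ndimage.binary_opening(foreground, structure=np.ones((3, 3, 3)))

    # 3. Keep the largest connected component as the part to be positioned
    #    (e.g. the heart region in a cardiac scout).
    labels, num = ndimage.label(foreground)
    if num == 0:
        return foreground
    sizes = ndimage.sum(foreground, labels, index=range(1, num + 1))
    largest = 1 + int(np.argmax(sizes))
    return labels == largest
```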
And S120, inputting the three-dimensional image data into a position point feature model trained in advance, and obtaining each position point feature data output by the position point feature model.
In this embodiment, after obtaining the three-dimensional image data of the portion to be positioned, the three-dimensional image data of the portion to be positioned is input into a position point feature model trained in advance, and feature data of each position point in the three-dimensional image data of the portion to be positioned is obtained.
Optionally, the feature data of each position point includes coordinates of each position point and a distance parameter between each position point and the plane to be positioned. The distance parameter between each position point and the plane to be positioned may be a distance value between each position point and the plane to be positioned, or a distance related value between each position point and the plane to be positioned.
Optionally, before inputting the three-dimensional image data into the position point feature model trained in advance, the method further includes:
and normalizing the three-dimensional image data of the part to be positioned by a preset image data normalization algorithm to obtain image normalization data.
In this embodiment, before the three-dimensional image data of the part to be positioned is input into the position point feature model, normalization processing is performed on the three-dimensional image data. Normalizing the three-dimensional image data means scaling it according to a set rule so that it falls within a small specific interval, which simplifies the subsequent data processing. The normalization algorithm is not limited here. For example, the normalization algorithm may be a min-max normalization algorithm, a zero-mean normalization algorithm, or a decimal-scaling normalization algorithm. The min-max normalization algorithm normalizes data using the maximum and minimum values of the data, the zero-mean normalization algorithm normalizes data using the mean and standard deviation of the data, and the decimal-scaling normalization algorithm normalizes data by moving the position of the decimal point.
Optionally, the image data normalization algorithm is a min-max normalization algorithm, that is, the three-dimensional image data of the part to be positioned can be normalized by the min-max normalization algorithm. If the maximum pixel value over all position points of the part to be positioned is I_max, the minimum is I_min, and the pixel value of a certain position point is I_c, then the normalized pixel value I of that position point is calculated as:

    I = (I_c - I_min) / (I_max - I_min)

Optionally, the image data normalization algorithm is a zero-mean normalization algorithm, that is, the three-dimensional image data of the part to be positioned can be normalized by the zero-mean normalization algorithm. If the mean of the pixel values over all position points of the part to be positioned is μ, the standard deviation is σ, and the pixel value of a certain position point is I_c, then the normalized pixel value I of that position point is calculated as:

    I = (I_c - μ) / σ
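As an illustration of the two normalization options above, a minimal sketch (assuming the scout volume is available as a NumPy array) might look like the following; any clipping of outliers would be an additional design choice not fixed by the text.

```python
# Minimal normalization sketch for the two options described above.
import numpy as np

def min_max_normalize(volume: np.ndarray) -> np.ndarray:
    # I = (I_c - I_min) / (I_max - I_min), mapping pixel values into [0, 1].
    v_min, v_max = float(volume.min()), float(volume.max())
    return (volume - v_min) / (v_max - v_min)

def zero_mean_normalize(volume: np.ndarray) -> np.ndarray:
    # I = (I_c - mean) / std, giving zero mean and unit variance.
    return (volume - volume.mean()) / volume.std()
```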
S130, determining a plane fitting point according to the characteristic data of each position point, and determining a plane equation to be positioned according to the plane fitting point.
In this embodiment, some of the position points may be screened out of the position point feature data as plane fitting points, and the plane equation of the plane to be positioned is determined from the screened plane fitting points; alternatively, the position points may not be screened, and all of the position points, or a randomly selected subset of them, are taken directly as plane fitting points, from which the plane equation of the plane to be positioned is determined.
Optionally, determining a plane equation to be positioned according to the plane fitting point includes: and screening out plane fitting points from the position points according to the characteristic parameters of the position points, and fitting the coordinate values of the plane fitting points by adopting a preset fitting algorithm to obtain a plane equation to be positioned.
Optionally, each position point is screened according to a preset distance parameter range and a distance parameter between each position point of the three-dimensional image data of the position to be positioned and a plane to be positioned, the position point corresponding to the distance parameter meeting the preset distance parameter range condition is used as a plane fitting point, and then a plane equation to be positioned is fitted according to coordinate values of each plane fitting point. For example, a least square fitting algorithm may be used to fit the coordinate values of the fitting points on each plane, and a plane equation to be positioned is calculated.
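As an illustrative sketch of this screening-and-fitting step (not the claimed implementation): the distance-range threshold of (0, 0.1) below follows the heart-region example given later for normalized distances, and the SVD-based total-least-squares fit is one concrete choice of "preset fitting algorithm".

```python
# Screen position points by their distance parameter, then fit a plane
# a*x + b*y + c*z + d = 0 to the selected points by least squares.
import numpy as np

def select_fitting_points(coords: np.ndarray, dist: np.ndarray, dist_range=(0.0, 0.1)):
    """coords: (N, 3) point coordinates; dist: (N,) distance parameters."""
    mask = (dist > dist_range[0]) & (dist < dist_range[1])
    return coords[mask]

def fit_plane_least_squares(points: np.ndarray):
    """Return (a, b, c, d) of the best-fit plane through an (N, 3) point set."""
    centroid = points.mean(axis=0)
    # The least-squares normal is the right singular vector associated with the
    # smallest singular value of the centered point cloud.
    _, _, vt = np.linalg.svd(points - centroid)
    a, b, c = vt[-1]
    d = -np.dot(vt[-1], centroid)
    return a, b, c, d
```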
Optionally, determining a plane equation to be positioned according to the plane fitting point includes: and determining a plane equation to be positioned according to the coordinate values of the plane fitting points through an optimization algorithm. Optionally, when the plane equation to be positioned is determined by the optimization algorithm, the plane fitting points may be all position points in the three-dimensional image data of the position to be positioned, or may be part of position points in the three-dimensional image data.
For example, an optimization algorithm may be used to calculate the parameters of the plane equation to be located. In the present embodiment, the optimization algorithm is not limited. For example, the optimization algorithm may be an algorithm such as a gradient descent method, a newton method, a conjugate direction method, or a conjugate gradient method.
Illustratively, the parameters of the plane equation to be positioned can be determined by solving

    min over (a, b, c, d) of  Σ_{i=1..n} | a·x_i + b·y_i + c·z_i + d - D_i |,   subject to a² + b² + c² = 1,

where n is the number of plane fitting points in the part to be positioned, x_i, y_i and z_i are the x-, y- and z-axis coordinate values of the i-th plane fitting point, and D_i is the distance value between the i-th plane fitting point of the part to be positioned and the plane to be positioned obtained from the distance field calculation model. That is, with the plane equation to be positioned written as ax + by + cz + d = 0, the values of a, b, c and d that minimize the sum, over all plane fitting points, of the differences between each point's distance to the plane ax + by + cz + d = 0 and the corresponding distance value obtained from the distance field calculation model, while satisfying a² + b² + c² = 1, are taken as the parameters of the plane to be positioned.
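A minimal sketch of this optimization-based variant is given below, assuming SciPy is available. The derivative-free Nelder-Mead solver is a stand-in for the gradient-descent, Newton or conjugate-gradient methods named in the text, and the unit-norm constraint is enforced by normalizing inside the cost and rescaling the result.

```python
# Fit plane parameters by minimizing the mismatch between point-to-plane
# distances and the distances D_i predicted by the distance field model.
import numpy as np
from scipy.optimize import minimize

def fit_plane_to_distance_field(points: np.ndarray, d_pred: np.ndarray):
    """points: (N, 3) fitting-point coordinates; d_pred: (N,) model distances."""

    def cost(params):
        a, b, c, d = params
        normal = np.array([a, b, c])
        norm = np.linalg.norm(normal) + 1e-12
        # Distance of each point to the plane ax + by + cz + d = 0.
        dist = np.abs(points @ normal + d) / norm
        return np.abs(dist - d_pred).sum()

    # Crude initial guess; a least-squares fit could also be used as a start.
    x0 = np.array([0.0, 0.0, 1.0, -points[:, 2].mean()])
    res = minimize(cost, x0, method="Nelder-Mead")
    a, b, c, d = res.x
    scale = np.linalg.norm([a, b, c])  # enforce a^2 + b^2 + c^2 = 1
    return a / scale, b / scale, c / scale, d / scale
```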
Optionally, determining a plane equation to be positioned according to the plane fitting point includes: and determining a plane equation to be positioned according to the coordinate values of the plane fitting points in a voting mode. Optionally, when the plane equation to be positioned is determined in a voting manner, the plane fitting point may be all position points in the three-dimensional image data of the position to be positioned, or may be a part of position points in the three-dimensional image data. All or part of the plane fitting points can be selected for voting, and the parameter combination with the maximum probability is determined as the parameter of the plane equation to be positioned.
The embodiment of the invention acquires three-dimensional image data of a part to be positioned, inputs the three-dimensional image data into a pre-trained position point feature model to obtain the position point feature data output by the position point feature model, determines plane fitting points according to the position point feature data, and determines a plane equation of the plane to be positioned according to the plane fitting points. The scanning positioning method provided by the embodiment of the invention can position the scanning plane using only the three-dimensional image data of the part to be positioned, is applicable when different scanning parts are positioned by different scanning modalities, and makes the positioning of the scanning plane faster and more accurate. This reduces the time required by the scanning process, improves the accuracy of the scanned image, overcomes the inaccuracy of positioning based on anatomical landmark points, and still allows the scanning plane to be positioned well when the pre-scan range is incomplete.
Example two
Fig. 2a is a flowchart of a scan positioning method according to a second embodiment of the present invention, which is further optimized based on the above embodiments. As shown in fig. 2a, the method comprises:
s210, acquiring three-dimensional image data of the part to be positioned.
S220, inputting the three-dimensional image data into a distance field calculation model trained in advance, and obtaining a distance matrix output by the distance field calculation model.
In this embodiment, the pre-trained position point feature model is embodied as a distance field calculation model, and the position point feature data are embodied as the distance values between the position points and the plane to be positioned. The distance value between each position point in the three-dimensional image data of the part to be positioned and the plane to be positioned is calculated by the distance field calculation model, and these distance values form a distance matrix.
Note that, if normalization processing is performed on the three-dimensional image data of the location to be positioned before the data is input to the distance field calculation model, the distance value included in the distance matrix output by the distance field calculation model may be the distance value after the normalization processing, or may be the actual distance value between each position point and the plane to be positioned. Preferably, the distance value after the normalization processing is used as a distance value in a distance matrix output by the distance field calculation model, so that the positioning of the plane to be positioned is more accurate.
And S230, screening each position point according to the preset distance range and each distance value, taking the position points meeting the fitting condition as plane fitting points, and determining a plane equation to be positioned according to the plane fitting points.
Optionally, the position points whose distance values lie within the preset distance range are used as plane fitting points, a preset fitting algorithm is applied to these plane fitting points, and the plane equation of the plane to be positioned is obtained by calculation. Optionally, the preset distance range may be determined according to the specific situation of the part to be positioned. For example, when the part to be positioned is the heart, the preset distance range may be (0, 0.1). Optionally, the preset fitting algorithm may be a least-squares fitting algorithm.
In another embodiment of the present invention, the position points may not be screened; instead, all of the position points, or a randomly selected subset of them, are taken directly as plane fitting points, and the plane equation of the plane to be positioned is determined from these plane fitting points by an optimization algorithm or by voting.
Optionally, for more detailed contents of determining the plane equation to be positioned according to each plane fitting point through a preset fitting algorithm, an optimization algorithm, or a voting manner, reference may be made to the above embodiments, which are not described herein again.
Fig. 2b is a schematic diagram of scanning and positioning using a distance calculation model in the scanning and positioning method according to the embodiment of the present invention. The process of locating the scan plane through a pre-trained distance field computation model is shown schematically. As shown in fig. 2b, the three-dimensional image data of the to-be-positioned part extracted from the pre-scan map is input into a distance field calculation model trained in advance, a distance field output by the distance field calculation model is obtained, and the plane parameters of the to-be-positioned plane are determined according to the distance field.
It should be noted that cardiac magnetic resonance imaging of 57 subjects was tested with the scanning positioning method provided in the embodiment of the present invention, and the test results are shown in Table 1. Table 1 lists, for each scanning plane, the mean normal-vector error and the mean distance-field error of the scanning positioning plane determined by the method. In addition, cardiac magnetic resonance imaging was performed on each cardiac scanning plane of different subjects with the scanning positioning method provided by the embodiment of the present invention, and the imaging results were evaluated by a physician; only one case required fine adjustment by the physician, and the others were acceptable for clinical application.
TABLE 1

                                  Short-axis plane   Two-chamber plane   Three-chamber plane   Four-chamber plane
Normal vector mean error (°)            5.7                 5.4                  7.2                  5.4
Distance field mean error (mm)          5.4                 3.7                  4.7                  6.1
The normal-vector angle error is calculated as:

    θ_error = arccos( n_label · n_auto )

where n_label is the unit normal vector of the manually annotated plane and n_auto is the unit normal vector of the scanning plane determined by the scanning positioning method provided by the embodiment of the present invention. In Table 1, the mean normal-vector error of each scanning plane is the average of the normal-vector angle errors of the scanning positioning plane over all of the test data.
The distance-field error is calculated as:

    error = | D_0 - D_1 |

where D_0 is the distance between each position point of the heart and the manually annotated plane, and D_1 is the actual distance between each position point of the heart and the scanning plane determined by the scanning positioning method provided by the embodiments of the present invention. Optionally, the actual distance between each position point and the scanning plane can be determined from the distance matrix output by the distance field calculation model.
Optionally, if the distance values in the distance matrix output by the distance field calculation model are not normalized, they are used directly as the actual distances between the corresponding position points and the scanning plane. If the distance values in the distance matrix output by the distance field calculation model are normalized, the actual distances between the positions of the heart and the scanning plane can be obtained by reversing the normalization. Specifically, the product of each distance value output by the distance field calculation model and the preset threshold used when normalizing the distance matrix is taken as the true distance between the corresponding position point and the scanning plane. The mean distance-field error in Table 1 is the average, over the position points whose distance values lie within the preset threshold, of the errors in these distance values.
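The two evaluation metrics above can be computed, for example, as in the following sketch; the variable names and the exact masking rule are ours, chosen to match the description of Table 1.

```python
# Sketch of the two evaluation metrics: angle between plane normals, and mean
# absolute point-to-plane distance error over points within the threshold.
import numpy as np

def normal_angle_error_deg(n_label: np.ndarray, n_auto: np.ndarray) -> float:
    """Angle in degrees between the annotated and predicted plane normals."""
    cos_angle = np.dot(n_label, n_auto) / (np.linalg.norm(n_label) * np.linalg.norm(n_auto))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

def distance_field_error_mm(d_label: np.ndarray, d_auto: np.ndarray, threshold_mm: float) -> float:
    """Mean |D0 - D1| over position points whose annotated distance is within the threshold."""
    mask = d_label < threshold_mm
    return float(np.abs(d_label[mask] - d_auto[mask]).mean())
```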
According to the technical scheme of the embodiment of the invention, on the basis of the embodiment, the three-dimensional image data is input into a position point feature model trained in advance, the feature data of each position point output by the position point feature model is obtained, a plane fitting point is determined according to the feature data of each position point for concretization, the distance value between each position point of the position to be positioned and the plane to be positioned is calculated through a distance field calculation model trained in advance, and the plane fitting point is determined according to each distance value, so that the determination of the plane fitting point is more accurate, and the positioning of the plane to be positioned is more accurate.
EXAMPLE III
Fig. 3a is a flowchart of a scan positioning method according to a third embodiment of the present invention, which is further optimized based on the above embodiments. As shown in fig. 3a, the method comprises:
s310, obtaining historical three-dimensional image data and plane parameters of a to-be-positioned plane corresponding to the historical three-dimensional image data.
In this embodiment, a pre-established distance field computation model is trained based on historical three-dimensional image data and plane parameters of a plane to be located corresponding to the historical three-dimensional image data. And the historical three-dimensional image data and the plane parameters of the to-be-positioned plane corresponding to the historical three-dimensional image data are both data used for training. Optionally, the historical three-dimensional image data may be three-dimensional image data extracted from historical scan data of the to-be-positioned portion, and the plane parameter of the to-be-positioned plane corresponding to the historical three-dimensional image data may be a normal vector of the to-be-positioned plane or a plane equation of the to-be-positioned plane.
Optionally, the obtaining method of the plane parameter of the to-be-positioned plane corresponding to the historical three-dimensional image data includes:
and positioning each plane to be positioned by using a positioning line from historical three-dimensional image data through image processing software to obtain plane parameters of each plane to be positioned. Optionally, the plane to be positioned may be manually positioned by using a positioning line through image processing software, so as to obtain the plane parameter of the corresponding plane to be positioned.
Optionally, the obtaining method of the plane parameter of the to-be-positioned plane corresponding to the historical three-dimensional image data includes:
and calculating the plane parameters of the plane to be positioned according to the image information of the image of each sub-part of the part to be positioned in the scanning image information corresponding to the historical three-dimensional image data.
Optionally, the scanned image information is high-resolution scanned image information obtained when the position to be positioned is formally scanned. Optionally, the plane parameter of the to-be-positioned plane corresponding to the historical three-dimensional image data is acquired from real high-resolution scanned image information during scanning. Generally, the formal scanning data when scanning a part to be positioned is more abundant than the information content of the general pre-scanning data, and the formal scanning data includes labeling information (for example, positioning processing of various sections, cavities and the like of the part to be positioned) performed by a doctor on data formally scanned in a scanning area, and the formal scanning data and the labeling information of the doctor are used as a 'gold standard' in a training model process to train a pre-established distance field calculation model.
Optionally, the pre-scan image is an image in DICOM format, and each DICOM image includes image information of the current slice, for example the coordinate position of the upper-left corner of the current slice and the x-axis unit vector vx and y-axis unit vector vy of the current slice. A unit vector perpendicular to both vx and vy is taken as the unit normal vector of the plane to be positioned, and the resulting unit normal vector (a, b, c) gives part of the plane parameters of the plane to be positioned. In addition, combining the coordinates (x0, y0, z0) of the upper-left corner of the current slice, the parameter d = -a·x0 - b·y0 - c·z0 is determined, finally giving the plane parameters (a, b, c, d) of the plane to be positioned, whose plane equation is ax + by + cz + d = 0.
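For illustration, the plane parameters can be derived from standard DICOM slice geometry roughly as follows; this is a sketch using pydicom and the standard ImageOrientationPatient/ImagePositionPatient attributes, not the patent's exact procedure.

```python
# Derive plane parameters (a, b, c, d) of a slice from its DICOM geometry.
import numpy as np
import pydicom

def plane_params_from_dicom(path: str):
    ds = pydicom.dcmread(path)
    vx = np.array(ds.ImageOrientationPatient[:3], dtype=float)  # row direction
    vy = np.array(ds.ImageOrientationPatient[3:], dtype=float)  # column direction
    origin = np.array(ds.ImagePositionPatient, dtype=float)     # upper-left corner (x0, y0, z0)

    normal = np.cross(vx, vy)
    normal /= np.linalg.norm(normal)   # unit normal vector (a, b, c)
    a, b, c = normal
    d = -np.dot(normal, origin)        # d = -(a*x0 + b*y0 + c*z0)
    return a, b, c, d
```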
Preferably, the plane to be positioned is positioned manually with a positioning line in image processing software, so as to obtain the plane parameters of the corresponding plane to be positioned. Using image processing software makes acquiring the plane parameters of the plane to be positioned more convenient and accurate.
And S320, calculating a historical distance matrix corresponding to the historical three-dimensional image data according to the historical three-dimensional image data and the plane parameters.
Optionally, the historical distance value between each position point in the historical three-dimensional image data and the plane to be positioned is calculated by a preset distance calculation algorithm, and the historical distance values of all position points form the historical distance matrix corresponding to the historical three-dimensional image data. Optionally, the distance calculation algorithm may be the Euclidean distance algorithm. Illustratively, if the equation of the plane to be positioned is ax + by + cz + d = 0, then the distance between any position point (x_i, y_i, z_i) in the historical three-dimensional image data and the plane to be positioned is:

    D_i = | a·x_i + b·y_i + c·z_i + d | / sqrt(a² + b² + c²)
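A sketch of building such a historical distance matrix is shown below: the point-to-plane distance above is evaluated at every voxel of the historical volume. The voxel-to-physical-coordinate mapping (spacing and origin, both in (x, y, z) order) is an assumption of this sketch.

```python
# Compute the distance from every voxel of a (z, y, x)-shaped volume to the
# plane a*x + b*y + c*z + d = 0, given voxel spacing and volume origin.
import numpy as np

def distance_matrix(shape, spacing, origin, plane):
    """shape: (nz, ny, nx); spacing/origin: (x, y, z); plane: (a, b, c, d)."""
    a, b, c, d = plane
    zs, ys, xs = np.meshgrid(
        origin[2] + spacing[2] * np.arange(shape[0]),  # z coordinates, axis 0
        origin[1] + spacing[1] * np.arange(shape[1]),  # y coordinates, axis 1
        origin[0] + spacing[0] * np.arange(shape[2]),  # x coordinates, axis 2
        indexing="ij",
    )
    return np.abs(a * xs + b * ys + c * zs + d) / np.sqrt(a**2 + b**2 + c**2)
```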
S330, generating a training sample set based on the historical three-dimensional image data and the historical distance matrix, and training a pre-established distance field calculation model by using the training sample set to obtain the trained distance field calculation model.
In this embodiment, a pre-established distance field computation model is trained using historical three-dimensional image data and a historical distance matrix corresponding to the historical three-dimensional image data as sample pairs. Optionally, the pre-established distance field computation model is a convolutional neural network model.
Figure 3b is a diagram illustrating the process of training the distance field calculation model in the scanning positioning method according to the third embodiment of the invention. An exemplary convolutional neural network adopted by an embodiment of the present invention is shown in Fig. 3b. A short solid straight arrow indicates that the channels after the arrow are obtained from the channels before the arrow through a convolution layer, a batch normalization layer and a ReLU activation layer; a dashed curved arrow together with a plus sign represents superposition (element-wise addition) followed by a ReLU activation layer; and a long solid curved arrow represents a concat layer, i.e., the preceding channels are arranged side by side with the following channels.
For example, the operation for obtaining 16-channel A from 1-channel A is: 1-channel A is passed through a convolution layer, a batch normalization layer and a ReLU activation layer to obtain 16-channel A. The operation for obtaining 32-channel A from 16-channel A and 16-channel B is: 16-channel A and 16-channel B are superposed, the result is passed through a ReLU activation layer, and the output of the ReLU activation layer is then passed through a convolution layer, a batch normalization layer and a ReLU activation layer to obtain 32-channel A. The operation for obtaining 32-channel C from 16-channel B and 64-channel C is: 64-channel C is passed through a convolution layer, a batch normalization layer and a ReLU activation layer to obtain 16-channel D, and 16-channel B and 16-channel D are arranged side by side to obtain 32-channel C. The operation for obtaining 64-channel C from 64-channel A, 64-channel B and 32-channel B is: 64-channel A and 64-channel B are superposed, the result is passed through a ReLU activation layer, the output of the ReLU activation layer is passed through a convolution layer, a batch normalization layer and a ReLU activation layer to obtain 32-channel E, and 32-channel B and 32-channel E are arranged side by side to obtain 64-channel C. The operation for obtaining 1-channel B from 32-channel C and 32-channel D is: 32-channel C and 32-channel D are superposed, the result is passed through a ReLU activation layer, and the output of the ReLU activation layer is passed through a convolution layer, a batch normalization layer and a ReLU activation layer to obtain 1-channel B.
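A schematic PyTorch analogue of these building blocks (convolution + batch normalization + ReLU, superposition followed by ReLU, and channel concatenation) is sketched below. Kernel sizes and channel counts are illustrative assumptions, not the patented network of Fig. 3b.

```python
# Illustrative building blocks resembling the arrows described above.
import torch
import torch.nn as nn

class ConvBNReLU(nn.Module):
    """Short solid straight arrow: conv -> batch norm -> ReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class AddThenConv(nn.Module):
    """Dashed curved arrow + plus sign: superpose, ReLU, then conv-BN-ReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.relu = nn.ReLU(inplace=True)
        self.conv = ConvBNReLU(in_ch, out_ch)

    def forward(self, a, b):
        return self.conv(self.relu(a + b))

# The concat layer ("arranged side by side") is simply concatenation along the
# channel axis, e.g.: merged = torch.cat([feat_b, feat_d], dim=1)
```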
Optionally, the training method of the distance field calculation model is not limited here. Illustratively, the training method may be a back-propagation algorithm, stochastic gradient descent, or a stochastic optimization method. In this embodiment, a stochastic optimization method (the Adam method) can be used as the training method for the distance field calculation model.
Optionally, the 1-norm between the output I_out of the distance field calculation model and the gold standard I_label is used as the cost function when training the distance field calculation model. Here the gold standard I_label is the historical distance matrix corresponding to the historical three-dimensional image data, i.e., the standard distance matrix that should be output when the historical three-dimensional image data is input into the distance field calculation model. Illustratively, the cost function is loss = | I_out - I_label |. Optionally, the cost function in the training of the distance field calculation model may also be the 2-norm, a weighted 1-norm, or a weighted 2-norm between the model output I_out and the gold standard I_label, which is not limited here.
Optionally, in order to adapt the trained distance field calculation model to multiple types of three-dimensional image data, the training data may be augmented: the historical three-dimensional image data and the corresponding historical distance matrices are transformed, and the transformed data are also used as training data for the distance field calculation model. For example, the data may be translated (e.g., randomly translated in the x, y or z direction within a range of ±50 mm), rotated (e.g., rotated about a random rotation axis by a random angle within ±20°), or scaled (e.g., randomly scaled by a factor of 0.7-1.3), and the transformed data are used as training samples to train the distance field calculation model. Using the transformed data as additional training data augments the limited training data and enlarges the training sample set of the distance field calculation model, so that the trained model can still output an accurate distance matrix when the input three-dimensional image data is inaccurate or offset.
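A sketch of such augmentation (random translation, rotation and scaling applied jointly to the image volume and its distance matrix) is shown below. The use of scipy.ndimage, linear interpolation, and rotation about a single coordinate axis are simplifying assumptions of this sketch.

```python
# Jointly augment an image volume and its distance matrix.
import numpy as np
from scipy.ndimage import rotate, shift, zoom

def augment_pair(volume, dist, spacing_mm=3.0, rng=None):
    rng = rng or np.random.default_rng()

    # Random translation within +/-50 mm, expressed in voxels.
    t = rng.uniform(-50.0, 50.0, size=3) / spacing_mm
    volume, dist = shift(volume, t, order=1), shift(dist, t, order=1)

    # Random rotation within +/-20 degrees about a randomly chosen axis pair
    # (a simplification of "a random rotation axis").
    angle = rng.uniform(-20.0, 20.0)
    axes = tuple(int(a) for a in rng.choice(3, size=2, replace=False))
    volume = rotate(volume, angle, axes=axes, reshape=False, order=1)
    dist = rotate(dist, angle, axes=axes, reshape=False, order=1)

    # Random isotropic scaling by 0.7-1.3. Note that physical distances scale
    # with the volume, so the distance values are rescaled by the same factor.
    s = rng.uniform(0.7, 1.3)
    volume, dist = zoom(volume, s, order=1), zoom(dist, s, order=1) * s
    return volume, dist
```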
Optionally, before generating the training sample set by using the historical three-dimensional image data and the historical distance matrix corresponding to the historical three-dimensional image data, the method further includes:
and according to a preset block fetching rule, the historical three-dimensional image data and the historical distance matrix corresponding to the historical three-dimensional image data are subjected to block fetching, and the historical three-dimensional data after block fetching and the historical distance matrix corresponding to the historical three-dimensional image data are used for generating a training sample set.
Optionally, image blocks of the same size are extracted using a sliding window or by random selection. The physical extent of an image block may range from 50 mm × 50 mm × 50 mm to 120 mm × 120 mm × 120 mm, and its spatial resolution may range from 2 mm × 2 mm × 2 mm to 5 mm × 5 mm × 5 mm. Illustratively, the image blocks are 100 mm × 100 mm × 100 mm in size with a spatial resolution of 3 mm × 3 mm × 3 mm. Generating the training sample set from the block-extracted historical three-dimensional data and the corresponding historical distance matrices reduces the size of each training sample pair, so that training on the training sample set is faster.
Optionally, before generating the training sample by using the historical three-dimensional image data and the historical distance matrix, the method further includes:
and respectively carrying out normalization processing on the historical three-dimensional image data and the historical distance matrix data to form historical image normalization data and a historical distance normalization matrix.
Optionally, for more detailed content of normalizing the historical three-dimensional image data, reference may be made to the above-mentioned embodiment, and details are not described here again.
Optionally, the historical distance matrix may be normalized by using the same normalization algorithm as the historical three-dimensional image data, or may be normalized by using a different normalization algorithm from the historical three-dimensional image data. In this embodiment, the historical three-dimensional image data and the historical distance matrix have different data distribution rules, and a normalization algorithm different from the historical three-dimensional image data is used to perform normalization processing on the historical distance matrix.
Illustratively, a distance threshold T1 may be preset, and the historical distance matrix is truncated and normalized according to it. Specifically, if any distance value in the historical distance matrix is D_i, then every D_i > T1 is set to T1, and all distance values in the historical distance matrix are then divided by T1 to obtain the historical distance normalization matrix. The historical distance normalization values within the historical distance normalization matrix therefore lie between 0 and 1. Optionally, the distance threshold T1 takes a value in the range (30 mm, 200 mm). The normalized historical distance normalization matrix has the same size as the historical three-dimensional image data, and each value in it is the normalized distance from the corresponding position point to the plane to be positioned.
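This truncation-and-normalization step amounts to the following one-liner; the concrete value of T1 used here is an assumption, the text only requires T1 in (30 mm, 200 mm).

```python
# Clamp distances at T1, then divide by T1 so normalized values lie in [0, 1].
import numpy as np

def normalize_distance_matrix(dist_mm: np.ndarray, t1_mm: float = 100.0) -> np.ndarray:
    return np.minimum(dist_mm, t1_mm) / t1_mm
```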
Fig. 3c is a schematic diagram of training a distance calculation model in the scan positioning method according to the embodiment of the present invention. The process of training the distance calculation model is schematically shown in the figure. As shown in fig. 3c, a distance field corresponding to the three-dimensional image data is calculated according to the plane parameters of the plane to be located, and the distance field and the three-dimensional image data of the part to be located extracted from the pre-scan image are used as training samples to train the distance calculation model, so as to obtain a trained distance calculation model.
And S340, acquiring three-dimensional image data of the part to be positioned.
And S350, inputting the three-dimensional image data into a distance field calculation model trained in advance, and obtaining a distance matrix output by the distance field calculation model.
S360, screening each position point according to the preset distance range and each distance value, taking the position points meeting the fitting conditions as distance plane fitting points, and determining a plane equation to be positioned according to the distance plane fitting points.
It should be noted that the distance field computation model training method provided by embodiments of the invention can be performed separately. That is, the training of the distance field calculation model can be completed by using the operation steps in S310 to S330 provided in the embodiment of the present invention alone, and the operation of determining the plane equation to be positioned by the distance calculation model based on the three-dimensional image data of the portion to be positioned in the subsequent steps S340 to S360 is not performed.
According to the technical scheme of the embodiment of the invention, the operation of training the distance field calculation model is added on the basis of the embodiment, and historical three-dimensional image data and plane parameters of a plane to be positioned corresponding to the historical three-dimensional image data are obtained; calculating a historical distance matrix corresponding to the historical three-dimensional image data according to the historical three-dimensional image data and the plane parameters; and generating a training sample set based on the historical three-dimensional image data and the historical distance matrix, and training a pre-established distance field calculation model by using the training sample set to obtain the trained distance field calculation model, so that the distance calculation model obtained by training is more accurate.
Example four
Fig. 4a is a flowchart of a scan positioning method according to a fourth embodiment of the present invention, which is further optimized based on the foregoing embodiments. As shown in fig. 4a, the method comprises:
s410, three-dimensional image data of the part to be positioned is obtained.
And S420, inputting the three-dimensional image data into a pre-trained surface segmentation model to obtain a distance segmentation matrix output by the surface segmentation model.
In this embodiment, the pre-trained position point feature model is embodied as a surface segmentation model, and the position point feature data are embodied as the distance segmentation values between the position points and the plane to be positioned. The distance segmentation value between each position point in the three-dimensional image data of the part to be positioned and the plane to be positioned is calculated by the surface segmentation model, and these distance segmentation values form a distance segmentation matrix.
Optionally, the surface segmentation model is a classification model, the output of the classification model includes a foreground output channel and a background output channel, data output by the foreground output channel is obtained, and the data is used as a distance segmentation matrix corresponding to the three-dimensional image data of the part to be positioned.
S430, screening the position points according to a preset first segmentation threshold and each distance segmentation value, and taking the position points meeting the segmentation conditions as plane fitting points.
Optionally, after a distance segmentation matrix output by the surface segmentation model is obtained, each position point in the three-dimensional image data of the position to be positioned is screened according to a preset first segmentation threshold and a distance segmentation value between each position point in the distance segmentation matrix and the plane to be positioned, and a position point corresponding to the distance segmentation value larger than the preset first segmentation threshold is used as a plane fitting point. Optionally, the preset first segmentation threshold may be adjusted according to a specific part to be located. Illustratively, when the site to be located is a heart region, the preset first segmentation threshold may be 0.5.
And S440, fitting the fitting points of each plane through a preset fitting algorithm to obtain a plane equation to be positioned.
In this embodiment, a fitting algorithm is used to fit each plane fitting point to obtain a plane equation to be positioned. Optionally, the manner of fitting the plane fitting points to form the plane equation to be positioned according to the plane fitting points is similar to the manner of determining the plane equation to be positioned according to the plane fitting points in the foregoing embodiment, and further details thereof may be referred to the foregoing embodiment, and are not described herein again.
Fig. 4b is a schematic diagram of scanning and positioning using a surface segmentation model in the scanning and positioning method according to the embodiment of the present invention. The process of scan plane positioning by a pre-trained surface segmentation model is schematically shown in the figure. As shown in fig. 4b, the three-dimensional image data of the to-be-positioned part extracted from the pre-scan drawing is input into a pre-trained surface segmentation model, a distance segmentation matrix output by the surface segmentation model is obtained, and the plane parameters of the to-be-positioned plane are determined according to the distance segmentation matrix.
In the technical scheme of this embodiment, on the basis of the above embodiments, the steps of inputting the three-dimensional image data into a pre-trained position point feature model, obtaining the position point feature data output by the model, and determining plane fitting points according to the position point feature data are made concrete: the distance segmentation value between each position point of the part to be positioned and the plane to be positioned is calculated by the pre-trained surface segmentation model, and the position points whose distance segmentation values satisfy the segmentation condition defined by the preset first segmentation threshold are taken as the segmentation-plane fitting points. This makes the determination of the segmentation-plane fitting points more accurate and, in turn, the positioning of the plane to be positioned more accurate. Moreover, because only part of the position points are screened out as segmentation-plane fitting points, the fitting of the plane to be positioned is faster, reducing the time required by the scanning process.
EXAMPLE five
Fig. 5a is a flowchart of a scan positioning method according to a fifth embodiment of the present invention, which is further optimized based on the foregoing embodiments. As shown in fig. 5a, the method comprises:
s510, obtaining historical three-dimensional image data and plane parameters of a to-be-positioned plane corresponding to the historical three-dimensional image data.
And S520, calculating a historical distance matrix corresponding to the historical three-dimensional image data according to the historical three-dimensional image data and the plane parameters.
In this embodiment, a manner of obtaining the historical three-dimensional image data and the plane parameter of the to-be-positioned plane corresponding to the historical three-dimensional image data, and a manner of calculating the historical distance matrix are similar to those in the above embodiments, and specific details may be referred to in the above embodiments, and are not described herein again.
S530, segmenting the historical distance matrix according to a preset second segmentation threshold value to obtain a historical distance segmentation matrix.
In this embodiment, after the historical distance matrix is segmented according to the preset second segmentation threshold, the surface segmentation model is trained by using the historical distance segmentation matrix obtained from the segmentation. Optionally, the historical distance segmentation matrix is composed of the segmentation distance values between each position point in the historical three-dimensional image data and the plane to be positioned.
Optionally, the segmenting the historical distance matrix according to a preset second segmentation threshold to obtain a historical distance segmentation matrix, including:
and adjusting each historical distance value in the historical distance matrix according to a preset second segmentation threshold, and taking a matrix formed by the adjusted historical distance values as a historical distance segmentation matrix.
For example, if the second segmentation threshold is T, each historical distance value in the historical distance matrix is adjusted according to T. Optionally, if any historical distance value in the historical distance matrix is D_i, then the historical distance values satisfying D_i < T are adjusted to 1, and the other historical distance values are adjusted to 0. Optionally, the value of the second segmentation threshold may be adjusted according to the part to be positioned. Illustratively, when the part to be positioned is a heart region, the second segmentation threshold may be 5 mm.
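A one-line sketch of this adjustment (NumPy assumed, names illustrative):

```python
import numpy as np

def binarize_distance_matrix(hist_distance_matrix, second_threshold=5.0):
    """Historical distance segmentation matrix: voxels with D_i < T become 1,
    all other voxels become 0 (T = 5 mm is the heart example above)."""
    return (hist_distance_matrix < second_threshold).astype(np.float32)
```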
And S540, generating a training sample set based on the historical three-dimensional image data and the historical distance segmentation matrix, and training the pre-established surface segmentation model by using the training sample set to obtain the trained surface segmentation model.
Optionally, the cost function used in training the surface segmentation model may be a cost function for classification, such as the Focal Loss or the Dice loss. In this embodiment, the training method of the surface segmentation model is similar to the training method of the distance field calculation model in the foregoing embodiments; for more details, reference may be made to the foregoing embodiments, which are not described herein again.
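For reference, the Dice-style cost named above can be written as one minus the soft Dice coefficient between the predicted map and the binary target. This is a hedged NumPy sketch independent of any particular deep-learning framework; in practice the loss would be expressed in the framework's tensor operations.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted distance segmentation map with values
    in [0, 1] and a binary target; 0 means perfect overlap, values near 1 mean none."""
    pred, target = pred.ravel(), target.ravel()
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```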
Fig. 5b is a schematic diagram of training the surface segmentation model in the scan positioning method according to the embodiment of the present invention. The figure schematically shows the process of training the surface segmentation model. As shown in fig. 5b, the distance segmentation matrix corresponding to the three-dimensional image data is calculated according to the plane parameters of the plane to be positioned, and the distance segmentation matrix and the three-dimensional image data of the part to be positioned extracted from the pre-scan image are used as training samples to train the surface segmentation model, so as to obtain the trained surface segmentation model.
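Purely as an illustration of how one training pair might be assembled, the following sketch combines the two hypothetical helpers sketched above (historical_distance_matrix and binarize_distance_matrix, assumed available here); the plane parameters and the random stand-in volume are synthetic, not data from the patent.

```python
import numpy as np

# illustrative plane z = 32 mm, written as 0*x + 0*y + 1*z - 32 = 0
plane_params = (0.0, 0.0, 1.0, -32.0)
# stand-in for a pre-scan volume of the part to be positioned
volume = np.random.rand(64, 64, 64).astype(np.float32)
# binary label: 1 where the voxel lies within 5 mm of the plane to be positioned
label = binarize_distance_matrix(
    historical_distance_matrix(volume.shape, plane_params, spacing=(1.0, 1.0, 1.0)),
    second_threshold=5.0,
)
# (volume, label) would form one sample of the training sample set
```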
And S550, acquiring three-dimensional image data of the part to be positioned.
And S560, inputting the three-dimensional image data into a pre-trained surface segmentation model to obtain a distance segmentation matrix output by the surface segmentation model.
And S570, screening the position points according to a preset first segmentation threshold and each distance segmentation value, and taking the position points meeting the segmentation conditions as plane fitting points.
And S580, fitting the fitting points of each plane through a preset fitting algorithm to obtain a plane equation to be positioned.
It should be noted that the training method of the surface segmentation model provided by the embodiment of the present invention may be implemented separately. That is, the operation steps S510 to S540 provided by the embodiment of the present invention may be performed alone to complete the training of the surface segmentation model, without performing the subsequent steps S550 to S580 of determining the plane equation to be positioned through the surface segmentation model based on the three-dimensional image data of the part to be positioned.
In the technical scheme of the embodiment of the invention, the operation of training the surface segmentation model is added on the basis of the foregoing embodiments: historical three-dimensional image data and the plane parameters of the plane to be positioned corresponding to the historical three-dimensional image data are obtained; a historical distance matrix corresponding to the historical three-dimensional image data is calculated according to the historical three-dimensional image data and the plane parameters; the historical distance matrix is segmented according to a preset second segmentation threshold to obtain a historical distance segmentation matrix; and a training sample set is generated based on the historical three-dimensional image data and the historical distance segmentation matrix, and the pre-established surface segmentation model is trained with the training sample set to obtain the trained surface segmentation model, so that the trained surface segmentation model is more accurate.
In another embodiment of the invention, a pre-established plane determination model can be trained directly from the historical three-dimensional image data and the plane parameters corresponding to the historical three-dimensional image data. When scanning positioning is needed, the three-dimensional image data of the part to be positioned and the coordinate matrices of the x, y and z axes of that three-dimensional image data are input directly into the trained plane determination model, and the plane parameters of the plane to be positioned output by the plane determination model are obtained. Optionally, the training mode and the data processing mode of the plane determination model may refer to the foregoing embodiments and are not described herein again.
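As a hedged sketch of the input described here, the x, y and z coordinate matrices can be generated with a mesh grid and stacked with the image volume into a multi-channel input for the plane determination model. The channel order and all names are assumptions, since the patent only states that the coordinate matrices accompany the three-dimensional image data.

```python
import numpy as np

def build_plane_model_input(volume):
    """Stack the image volume with its x, y and z coordinate matrices
    into a 4-channel array for the trained plane determination model."""
    z, y, x = np.meshgrid(
        np.arange(volume.shape[0]),
        np.arange(volume.shape[1]),
        np.arange(volume.shape[2]),
        indexing="ij",
    )
    # channel order (image, x, y, z) is an illustrative assumption
    return np.stack([volume, x, y, z], axis=0).astype(np.float32)
```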
EXAMPLE six
Fig. 6 is a schematic structural diagram of a scanning positioning apparatus according to a sixth embodiment of the present invention. The scanning positioning device can be implemented in software and/or hardware; for example, it can be configured in a computer device. As shown in fig. 6, the device includes: a data acquisition module 610, a feature data module 620, and a plane determination module 630, wherein:
the data acquisition module 610 is used for acquiring three-dimensional image data of a part to be positioned;
a feature data module 620, configured to input the three-dimensional image data into a position point feature model trained in advance, and obtain each position point feature data output by the position point feature model;
and a plane determining module 630, configured to determine a plane fitting point according to the feature data of each location point, and determine a plane equation to be located according to the plane fitting point.
In the embodiment of the invention, the data acquisition module acquires the three-dimensional image data of the part to be positioned; the feature data module inputs the three-dimensional image data into a pre-trained position point feature model to obtain the feature data of each position point output by the model; and the plane determination module determines plane fitting points according to the feature data of each position point and determines the plane equation to be positioned according to the plane fitting points. The scanning positioning method provided by the embodiment of the invention can position the scanning plane using only the three-dimensional image data of the part to be positioned, and is applicable to scanning and positioning different scanning parts in different scanning modes. The positioning of the scanning plane is therefore faster and more accurate, the time required by the scanning process is reduced, and the accuracy of the scanned image is improved; the inaccuracy of positioning according to anatomical landmark points is overcome, and the scanning plane can still be positioned well even when the pre-scan coverage is incomplete.
On the basis of the above scheme, the feature data module 620 is specifically configured to:
and inputting the three-dimensional image data into a distance field calculation model trained in advance to obtain a distance matrix output by the distance field calculation model, wherein the distance matrix is composed of distance values of each position point and the plane to be positioned.
On the basis of the above scheme, the plane determining module 630 is specifically configured to:
and screening the position points according to a preset distance range and each distance value, taking the position points meeting the fitting condition as plane fitting points, and determining a plane equation to be positioned according to the plane fitting points.
On the basis of the above scheme, the apparatus further comprises:
the historical data acquisition unit is used for acquiring historical three-dimensional image data and plane parameters of a plane to be positioned corresponding to the historical three-dimensional image data before acquiring the three-dimensional image data of the part to be positioned;
the distance matrix determining unit is used for calculating a historical distance matrix corresponding to the historical three-dimensional image data according to the historical three-dimensional image data and the plane parameters, and the distance matrix consists of distance values of each position point in the historical three-dimensional image data and the plane to be positioned;
and the distance field model training unit is used for generating a training sample set based on the historical three-dimensional image data and the historical distance matrix, and training a pre-established distance field calculation model by using the training sample set to obtain the trained distance field calculation model.
On the basis of the above scheme, the feature data module 620 is specifically configured to:
and inputting the three-dimensional image data into a pre-trained surface segmentation model to obtain a distance segmentation matrix output by the surface segmentation model, wherein the distance segmentation matrix consists of distance segmentation values of each position point and a plane to be positioned.
On the basis of the above scheme, the plane determining module 630 includes:
the fit point determining unit is used for screening each position point according to a preset first segmentation threshold and each distance segmentation value, and taking the position points meeting the segmentation conditions as plane fit points;
and the plane fitting unit is used for fitting the plane fitting points through a preset fitting algorithm to obtain a plane equation to be positioned.
On the basis of the above scheme, the apparatus further comprises:
the historical data acquisition unit is used for acquiring historical three-dimensional image data and plane parameters of a plane to be positioned corresponding to the historical three-dimensional image data before acquiring the three-dimensional image data of the part to be positioned;
a distance matrix determining unit, configured to calculate a historical distance matrix corresponding to the historical three-dimensional image data according to the historical three-dimensional image data and the plane parameter, where the historical distance matrix is formed by distance values between each position point in the historical three-dimensional image data and the plane to be located;
the segmentation matrix determination unit is used for segmenting the historical distance matrix according to a preset second segmentation threshold value to obtain a historical distance segmentation matrix, and the historical distance segmentation matrix is composed of segmentation distance values of each position point and the plane to be positioned in the historical three-dimensional image data;
and the surface segmentation model training unit is used for generating a training sample set based on the historical three-dimensional image data and the historical distance segmentation matrix, and training a pre-established surface segmentation model by using the training sample set to obtain the trained surface segmentation model.
The scanning positioning device provided by the embodiment of the invention can execute the scanning positioning method provided by any embodiment, and has the corresponding functional modules and beneficial effects of the execution method.
EXAMPLE seven
Fig. 7 is a schematic structural diagram of a computer device according to a seventh embodiment of the present invention. Fig. 7 illustrates a block diagram of an exemplary computer device 712 suitable for implementing embodiments of the present invention. The computer device 712 shown in fig. 7 is only an example and should not impose any limitation on the functionality or scope of use of the embodiments of the present invention.
As shown in fig. 7, computer device 712 is embodied in the form of a general purpose computing device. Components of computer device 712 may include, but are not limited to: one or more processors 716, a system memory 728, and a bus 718 that couples the various system components (including the system memory 728 and the processors 716).
Bus 718 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Computer device 712 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by computer device 712 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 728 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 730 and/or cache memory 732. Computer device 712 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage device 734 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 7, and commonly referred to as a "hard drive"). Although not shown in FIG. 7, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to the bus 718 by one or more data media interfaces. Memory 728 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
Program/utility 740 having a set (at least one) of program modules 742 may be stored, for instance, in memory 728, such program modules 742 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may include an implementation of a network environment. Program modules 742 generally perform the functions and/or methodologies of embodiments of the invention as described herein.
Computer device 712 may also communicate with one or more external devices 714 (e.g., keyboard, pointing device, display 724, etc.), with one or more devices that enable a user to interact with computer device 712, and/or with any devices (e.g., network card, modem, etc.) that enable computer device 712 to communicate with one or more other computing devices. Such communication may occur through input/output (I/O) interfaces 722. Also, computer device 712 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet) through network adapter 720. As shown, network adapter 720 communicates with the other modules of computer device 712 via bus 718. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with computer device 712, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processor 716 executes the programs stored in the system memory 728 so as to perform various functional applications and data processing, for example implementing the scan positioning method provided by the embodiments of the present invention, the method including:
acquiring three-dimensional image data of a part to be positioned;
inputting the three-dimensional image data into a position point feature model trained in advance, and obtaining each position point feature data output by the position point feature model;
and determining a plane fitting point according to the characteristic data of each position point, and determining a plane equation to be positioned according to the plane fitting point.
Of course, those skilled in the art can understand that the processor can also implement the technical solution of the scan positioning method provided in any embodiment of the present invention.
EXAMPLE eight
An eighth embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a scan positioning method provided in the embodiment of the present invention, where the method includes:
acquiring three-dimensional image data of a part to be positioned;
inputting the three-dimensional image data into a position point feature model trained in advance, and obtaining each position point feature data output by the position point feature model;
and determining a plane fitting point according to the characteristic data of each position point, and determining a plane equation to be positioned according to the plane fitting point.
Of course, the computer program stored on the computer-readable storage medium provided by the embodiments of the present invention is not limited to the method operations described above, and may also perform related operations in the scan positioning method provided by any embodiments of the present invention.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (8)

1. A scan positioning method, comprising:
acquiring three-dimensional image data of a part to be positioned;
inputting the three-dimensional image data into a position point feature model trained in advance, and obtaining each position point feature data output by the position point feature model, wherein the feature data of each position point comprise coordinates of each position point and a distance parameter between each position point and a plane to be positioned;
screening out part of the position points according to the feature data of each position point to serve as plane fitting points, and determining a plane equation to be positioned according to the plane fitting points;
the inputting the three-dimensional image data into a position point feature model trained in advance to obtain each position point feature data output by the position point feature model includes:
inputting the three-dimensional image data into a distance field calculation model trained in advance to obtain a distance matrix output by the distance field calculation model, wherein the distance matrix is used as the feature data of the position points and consists of the distance values of the position points and the plane to be positioned;
or inputting the three-dimensional image data into a pre-trained surface segmentation model, obtaining a distance segmentation matrix output by the surface segmentation model, and taking the distance segmentation matrix as the position point characteristic data, wherein the distance segmentation matrix is composed of distance segmentation values of each position point and a plane to be positioned.
2. The method according to claim 1, wherein the screening out a part of the position points according to the characteristic data of each position point as a plane fitting point, and determining a plane equation to be positioned according to the plane fitting point comprises:
and screening the position points according to a preset distance range and each distance value, taking the position points meeting the fitting condition as plane fitting points, and determining a plane equation to be positioned according to the plane fitting points.
3. The method of claim 1, further comprising, prior to acquiring the three-dimensional image data of the site to be located:
acquiring historical three-dimensional image data and plane parameters of a to-be-positioned plane corresponding to the historical three-dimensional image data;
calculating a historical distance matrix corresponding to the historical three-dimensional image data according to the historical three-dimensional image data and the plane parameters, wherein the historical distance matrix is composed of historical distance values of each position point in the historical three-dimensional image data and the plane to be positioned;
and generating a training sample set based on the historical three-dimensional image data and the historical distance matrix, and training a pre-established distance field calculation model by using the training sample set to obtain the trained distance field calculation model.
4. The method according to claim 3, wherein the screening out a part of the position points according to the characteristic data of each position point as a plane fitting point, and determining a plane equation to be positioned according to the plane fitting point comprises:
screening each position point according to a preset first segmentation threshold and each distance segmentation value, and taking the position points meeting the segmentation conditions as plane fitting points;
and fitting the fitting points of each plane through a preset fitting algorithm to obtain a plane equation to be positioned.
5. The method of claim 1, further comprising, prior to acquiring the three-dimensional image data of the site to be located:
acquiring historical three-dimensional image data and plane parameters of a to-be-positioned plane corresponding to the historical three-dimensional image data;
calculating a historical distance matrix corresponding to the historical three-dimensional image data according to the historical three-dimensional image data and the plane parameters, wherein the historical distance matrix is composed of distance values of each position point in the historical three-dimensional image data and the plane to be positioned;
dividing the historical distance matrix according to a preset second division threshold, and taking a matrix formed by the divided historical distance values as a historical distance division matrix, wherein the historical distance division matrix is formed by the division distance values of each position point in the historical three-dimensional image data and the plane to be positioned;
and generating a training sample set based on the historical three-dimensional image data and the historical distance segmentation matrix, and training a pre-established surface segmentation model by using the training sample set to obtain the trained surface segmentation model.
6. A scanning positioning device, comprising:
the data acquisition module is used for acquiring three-dimensional image data of a part to be positioned;
the characteristic data module is used for inputting the three-dimensional image data into a position point characteristic model trained in advance to obtain each position point characteristic data output by the position point characteristic model, and the characteristic data of each position point comprises coordinates of each position point and a distance parameter between each position point and a plane to be positioned;
the plane determining module is used for screening out part of position points as plane fitting points according to the characteristic data of each position point and determining a plane equation to be positioned according to the plane fitting points;
the inputting the three-dimensional image data into a position point feature model trained in advance to obtain each position point feature data output by the position point feature model includes:
inputting the three-dimensional image data into a distance field calculation model trained in advance to obtain a distance matrix output by the distance field calculation model, wherein the distance matrix is used as the feature data of the position points and consists of the distance values of the position points and the plane to be positioned;
or inputting the three-dimensional image data into a pre-trained surface segmentation model, obtaining a distance segmentation matrix output by the surface segmentation model, and taking the distance segmentation matrix as the position point characteristic data, wherein the distance segmentation matrix is composed of distance segmentation values of each position point and a plane to be positioned.
7. A computer device, the device comprising:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the scan positioning method of any one of claims 1-5.
8. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the scan localization method according to any one of claims 1-5.
CN201810835317.5A 2018-07-26 2018-07-26 Scanning positioning method and device, computer equipment and computer readable storage medium Active CN109087357B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810835317.5A CN109087357B (en) 2018-07-26 2018-07-26 Scanning positioning method and device, computer equipment and computer readable storage medium
CN202110943320.0A CN113610923A (en) 2018-07-26 2018-07-26 Scanning positioning method and device, computer equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810835317.5A CN109087357B (en) 2018-07-26 2018-07-26 Scanning positioning method and device, computer equipment and computer readable storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202110943320.0A Division CN113610923A (en) 2018-07-26 2018-07-26 Scanning positioning method and device, computer equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN109087357A CN109087357A (en) 2018-12-25
CN109087357B true CN109087357B (en) 2021-06-29

Family

ID=64830863

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110943320.0A Pending CN113610923A (en) 2018-07-26 2018-07-26 Scanning positioning method and device, computer equipment and computer readable storage medium
CN201810835317.5A Active CN109087357B (en) 2018-07-26 2018-07-26 Scanning positioning method and device, computer equipment and computer readable storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202110943320.0A Pending CN113610923A (en) 2018-07-26 2018-07-26 Scanning positioning method and device, computer equipment and computer readable storage medium

Country Status (1)

Country Link
CN (2) CN113610923A (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111443318B (en) * 2019-01-16 2022-08-02 上海联影智能医疗科技有限公司 Magnetic resonance image processing method, magnetic resonance image processing device, storage medium and magnetic resonance imaging system
CN110163857B (en) * 2019-05-24 2022-03-04 上海联影医疗科技股份有限公司 Image background area detection method and device, storage medium and X-ray system
CN110223352B (en) * 2019-06-14 2021-07-02 浙江明峰智能医疗科技有限公司 Medical image scanning automatic positioning method based on deep learning

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101836235A (en) * 2007-08-16 2010-09-15 皇家飞利浦电子股份有限公司 Imaging method for sampling a cross-section plane in a three-dimensional (3d) image data volume
JP2014127788A (en) * 2012-12-26 2014-07-07 Nippon Hoso Kyokai <Nhk> Device of correcting stereoscopic image, and program of the same
CN104166978A (en) * 2013-12-27 2014-11-26 上海联影医疗科技有限公司 Blood vessel extracting method

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100476876C (en) * 2007-04-05 2009-04-08 上海交通大学 Method for computer-assisted rebuilding heart mitral annulus
WO2009143491A2 (en) * 2008-05-22 2009-11-26 The Trustees Of Dartmouth College System and method for calibration for image-guided surgery
CN101299270B (en) * 2008-05-27 2010-06-02 东南大学 Multiple video cameras synchronous quick calibration method in three-dimensional scanning system
CN101315661B (en) * 2008-07-18 2010-07-07 东南大学 Fast three-dimensional face recognition method for reducing expression influence
CN101344373A (en) * 2008-08-14 2009-01-14 中国人民解放军总后勤部军需装备研究所 Standardization processing method based on three-dimensional head and face curved surface modeling
CN101650778A (en) * 2009-07-28 2010-02-17 复旦大学 Invariance identification method based on characteristic point and homography matching
RU2677055C2 (en) * 2013-11-05 2019-01-15 Конинклейке Филипс Н.В. Automated segmentation of tri-plane images for real time ultrasound imaging
CN104700099B (en) * 2015-03-31 2017-08-11 百度在线网络技术(北京)有限公司 The method and apparatus for recognizing traffic sign
CN106204514B (en) * 2015-04-30 2019-03-01 中国科学院深圳先进技术研究院 A kind of liver localization method and device based on three-dimensional CT image
CN108243623B (en) * 2016-09-28 2022-06-03 驭势科技(北京)有限公司 Automobile anti-collision early warning method and system based on binocular stereo vision

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101836235A (en) * 2007-08-16 2010-09-15 皇家飞利浦电子股份有限公司 Imaging method for sampling a cross-section plane in a three-dimensional (3d) image data volume
JP2014127788A (en) * 2012-12-26 2014-07-07 Nippon Hoso Kyokai <Nhk> Device of correcting stereoscopic image, and program of the same
CN104166978A (en) * 2013-12-27 2014-11-26 上海联影医疗科技有限公司 Blood vessel extracting method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Action recognition based on a three-dimensional human skeleton model; Zhang Bo; China Master's Theses Full-text Database, Information Science and Technology; 2015-01-15 (No. 1); pp. 1-61 *

Also Published As

Publication number Publication date
CN113610923A (en) 2021-11-05
CN109087357A (en) 2018-12-25

Similar Documents

Publication Publication Date Title
CN110956635B (en) Lung segment segmentation method, device, equipment and storage medium
CN111292314B (en) Coronary artery segmentation method, device, image processing system and storage medium
CN107909622B (en) Model generation method, medical imaging scanning planning method and medical imaging system
CN111369525B (en) Image analysis method, apparatus and storage medium
CN111080573B (en) Rib image detection method, computer device and storage medium
CN109087357B (en) Scanning positioning method and device, computer equipment and computer readable storage medium
CN111325714B (en) Method for processing region of interest, computer device and readable storage medium
US20210374452A1 (en) Method and device for image processing, and elecrtonic equipment
CN110728673A (en) Target part analysis method and device, computer equipment and storage medium
CN112950648B (en) Method and apparatus for determining a mid-sagittal plane in a magnetic resonance image
CN111932492A (en) Medical image processing method and device and computer readable storage medium
CN111968130A (en) Brain angiography image processing method, apparatus, medium, and electronic device
CN111223158B (en) Artifact correction method for heart coronary image and readable storage medium
CN103140875A (en) System and method for multi-modality segmentation of internal tissue with live feedback
CN108597589B (en) Model generation method, target detection method and medical imaging system
CN114299547A (en) Method and system for determining region of target object
US20230289969A1 (en) Method, system and device of image segmentation
US20230115927A1 (en) Systems and methods for plaque identification, plaque composition analysis, and plaque stability detection
US20230099906A1 (en) Image registration method, computer device, and storage medium
CN112530554B (en) Scanning positioning method and device, storage medium and electronic equipment
CN113255756A (en) Image fusion method and device, electronic equipment and storage medium
CN113409273A (en) Image analysis method, device, equipment and medium
CN112767314A (en) Medical image processing method, device, equipment and storage medium
WO2024094088A1 (en) Systems and methods for image analysis
US20230298174A1 (en) Image processing apparatus, image processing method, and non-transitory computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant