CN115965570A - Generation method of ultrasonic breast three-dimensional panoramic image and ultrasonic equipment - Google Patents


Info

Publication number
CN115965570A
CN115965570A (application CN202111192345.8A)
Authority
CN
China
Prior art keywords: dimensional image, dimensional, image, processed, points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111192345.8A
Other languages
Chinese (zh)
Inventor
邱静雯
丁勇
王琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Hisense Medical Equipment Co Ltd
Original Assignee
Qingdao Hisense Medical Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Hisense Medical Equipment Co Ltd filed Critical Qingdao Hisense Medical Equipment Co Ltd
Priority to CN202111192345.8A priority Critical patent/CN115965570A/en
Priority to PCT/CN2022/109194 priority patent/WO2023061000A1/en
Publication of CN115965570A publication Critical patent/CN115965570A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformation in the plane of the image
    • G06T3/40: Scaling the whole image or part thereof
    • G06T3/4038: Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00: Diagnosis using ultrasonic, sonic or infrasonic waves

Abstract

The application relates to the technical field of ultrasound image processing, and in particular to a method for generating a three-dimensional panoramic ultrasound image of the breast and to an ultrasound device. In the prior art, a physician acquires breast images with a handheld two-dimensional ultrasound device: the imaging field of view is small, multiple acquisitions are required, and each sampling plane is displayed separately, so several images must be consulted during clinical diagnosis, the field of view is not comprehensive, and lesions may be missed. In the disclosed method, the multiple two-dimensional images acquired by the physician in each scanning orientation are fused into a three-dimensional image, and the three-dimensional images of adjacent scanning orientations are then stitched and fused into a three-dimensional panoramic ultrasound image of the breast.

Description

Generation method of ultrasonic breast three-dimensional panoramic image and ultrasonic equipment
Technical Field
The application relates to the technical field of ultrasonic image processing, in particular to a method for generating an ultrasonic breast three-dimensional panoramic image and ultrasonic equipment.
Background
In clinical ultrasound practice, a physician traditionally acquires breast images with a handheld two-dimensional ultrasound device. The imaging field of view of this approach is small, multiple acquisitions are required, and each sampling plane is displayed separately. During clinical diagnosis, several images must therefore be consulted, which easily leads to an incomplete field of view and possible missed diagnoses.
Disclosure of Invention
The application discloses a method for generating a three-dimensional panoramic ultrasound image of the breast, and an ultrasound device, which address the problems that breast images acquired in the traditional way have a small imaging field of view, the field of view during clinical diagnosis is not comprehensive, and diagnoses may be missed.
In a first aspect, the present application provides a method for generating a three-dimensional panoramic ultrasound image of the breast. The projection plane of the breast area is divided into a plurality of sector areas centered on the nipple, each sector area corresponds to one scanning orientation, and the breast portion whose projection points fall into the same sector area is treated as one breast block. The method includes:
for each scanning orientation: performing ultrasound imaging from the bottom of the breast block of that scanning orientation toward the nipple to obtain a first three-dimensional image of the scanning orientation, the first three-dimensional image comprising a plurality of cross-sectional images; performing noise reduction on the first three-dimensional image to obtain a second three-dimensional image; and performing spatial position registration on the different cross-sectional images of the second three-dimensional image to obtain a third three-dimensional image of the scanning orientation;
constructing three-dimensional spatial transformation coefficients between the third three-dimensional images of adjacent scanning orientations; and
fusing the third three-dimensional images of adjacent scanning orientations based on the three-dimensional spatial transformation coefficients of the adjacent scanning orientations to obtain a three-dimensional panoramic image of the breast area.
Optionally, performing spatial position registration on the different cross-sectional images of the second three-dimensional image to obtain the third three-dimensional image of the scanning orientation specifically includes:
sequentially acquiring cross-sectional images to be processed in scanning-time order, and performing the following operations on each cross-sectional image to be processed:
determining a two-dimensional spatial transformation coefficient between the cross-sectional image to be processed and a reference two-dimensional image; and
multiplying the cross-sectional image to be processed by the two-dimensional spatial transformation coefficient to obtain the reference two-dimensional image for the next cross-sectional image to be processed, and using the multiplication result as a cross-sectional image of the third three-dimensional image, wherein the first cross-sectional image in scanning-time order serves as the reference two-dimensional image of the second cross-sectional image.
Optionally, determining the two-dimensional spatial transformation coefficient between the cross-sectional image to be processed and the reference two-dimensional image specifically includes:
extracting feature points of the cross-sectional image to be processed and of the reference two-dimensional image, respectively;
registering the feature points of the cross-sectional image to be processed and the reference two-dimensional image, and eliminating the first-class singular points in the cross-sectional image to be processed and in the reference two-dimensional image; and
constructing a rigid transformation coefficient between the cross-sectional image to be processed and the reference two-dimensional image based on the remaining feature points, and using it as the two-dimensional spatial transformation coefficient.
Optionally, the scanning orientations are ordered in a target direction, the target direction being clockwise or counterclockwise, and fusing the third three-dimensional images of adjacent scanning orientations based on the three-dimensional spatial transformation coefficients of the adjacent scanning orientations specifically includes:
fusing the third three-dimensional images sequentially in the order of the target direction until the fusion of the last third three-dimensional image is completed;
wherein, in each fusion, one third three-dimensional image is fused into the fused sub-image obtained from the previous fusion.
Optionally, constructing the three-dimensional spatial transformation coefficients between the third three-dimensional images of adjacent scanning orientations specifically includes:
ordering the scanning orientations in the target direction, and sequentially acquiring the third three-dimensional images to be processed;
determining a rigid transformation coefficient between the third three-dimensional image to be processed and a reference three-dimensional image as an initial three-dimensional spatial transformation coefficient, wherein the first third three-dimensional image obtained in the target direction is the reference three-dimensional image of the second one, and from the second third three-dimensional image onward the reference three-dimensional image of the third three-dimensional image to be processed is the fused sub-image obtained from the previous fusion;
performing the following operations in a loop until n iterations have been executed for the third three-dimensional image to be processed, where n is a positive integer:
mapping the third three-dimensional image to be processed onto the reference three-dimensional image using the initial three-dimensional spatial transformation coefficient to obtain a feature point set of the third three-dimensional image to be processed on the reference three-dimensional image;
determining, based on the feature point set, the coordinate points on the reference three-dimensional image of the feature points in the third three-dimensional image to be processed;
searching, in the reference three-dimensional image, for a matching point of each feature point within a specified range centered on its coordinate point, and determining the correlation between the feature point and the matching point;
if the correlation is greater than a correlation threshold, retaining the matching point in a first point set;
filtering out the second-class singular points in the first point set to obtain a second point set;
if the number of elements in the second point set is greater than a specified number of elements, updating the three-dimensional spatial transformation coefficient based on the second point set, and otherwise keeping the three-dimensional spatial transformation coefficient of the previous iteration unchanged;
wherein the three-dimensional spatial transformation coefficient obtained after the n iterations is the three-dimensional spatial transformation coefficient between the scanning orientations of the third three-dimensional image to be processed and the previous third three-dimensional image.
Optionally, filtering out the second-class singular points in the first point set to obtain the second point set specifically includes:
taking the first coordinates in the first point set as a reference, computing a histogram of the correlations corresponding to the matching points;
searching the histogram for a target range, starting from the maximum correlation, such that the ratio of the number of matching points within the target range to the number of matching points in the first point set lies within a specified ratio range; and
treating the matching points outside the target range as second-class singular points and filtering them out of the first point set to obtain the second point set.
Optionally, sequentially fusing the third three-dimensional images in the order of the target direction specifically includes:
sequentially acquiring the third three-dimensional image to be fused in the order of the target direction, and performing the following operations:
obtaining a target rigid transformation coefficient by multiplying the three-dimensional spatial transformation coefficient between the third three-dimensional image to be fused and its reference three-dimensional image by the target rigid transformation coefficient of the previous fusion, wherein in the first fusion the target rigid transformation coefficient is the three-dimensional spatial transformation coefficient between the third three-dimensional image of the first scanning orientation and that of the second scanning orientation; the reference three-dimensional image is the fused image obtained from the previous fusion, and in the first fusion the reference three-dimensional image is the third three-dimensional image of the first scanning orientation;
for each pixel point of the third three-dimensional image to be fused, determining the position coordinate to which the pixel point is mapped in the reference three-dimensional image using the target rigid transformation coefficient;
if the position coordinate is an integer, taking the pixel value of the pixel point as the pixel value at that position coordinate in the reference three-dimensional image; and
if the position coordinate is not an integer, determining the pixel value at that position coordinate by interpolation.
Optionally, the determining the pixel value of the position coordinate by using an interpolation method specifically includes:
determining pixel values for the location coordinates based on the following formula:
I_n = w · I_s + (1 - w) · I_m

w = d / PointDistance

where d is the distance from the position coordinate to which the pixel point is mapped in the reference three-dimensional image to the center point of the reference three-dimensional image; PointDistance is the distance between the position coordinate of the center point of the three-dimensional image to be fused and the center point of the reference three-dimensional image; I_n is the pixel value of the pixel point mapped into the reference three-dimensional image; I_s is the pixel value at the position coordinate in the reference three-dimensional image; I_m is the pixel value of the pixel point; and w is the weight of the pixel value of the pixel point.
Optionally, the determining a rigid transformation coefficient between the third three-dimensional image to be processed and the reference three-dimensional image as an initial three-dimensional space transformation coefficient specifically includes:
determining an overlapping area between the third three-dimensional image to be processed and the reference three-dimensional image;
extracting feature points of the overlapping area in the third three-dimensional image to be processed to serve as a first group of feature points;
extracting the characteristic points of the overlapping area in the reference three-dimensional image as a second group of characteristic points;
the following operations are executed in a loop until iteration is executed a times, wherein a is a positive integer:
randomly selecting m matching points from the first group of characteristic points and the second group of characteristic points;
determining a transformation matrix based on the m pairing points;
for each pairing point other than the m pairing points, determining a projection error between the pairing points based on the transformation matrix;
if the projection error is smaller than the error threshold value, adding the matched point into an inner point set; wherein the inner point set further comprises the m matching points;
if the number of elements in the inner point set is higher than that of the elements in the optimal inner point set obtained by the last iteration, updating the optimal inner point set into the inner point set of the current iteration;
after iteration is performed a times, the initial three-dimensional space transformation coefficient is calculated based on the optimal inner point set.
In a second aspect, the present application provides an ultrasound device comprising a display, a memory, and a processor, wherein:
the display is configured to display ultrasound images;
the memory is configured to store executable instructions; and
the processor is configured to perform any of the methods of the first aspect based on the executable instructions.
In a third aspect, the present application provides a computer storage medium storing computer-executable instructions for causing a computer to execute the method for generating a three-dimensional panoramic ultrasound image of the breast according to any one of the implementations of the first aspect.
In a fourth aspect, the present application further provides a computer-readable storage medium comprising a computer program which, when executed by a processor, performs the method for generating a three-dimensional panoramic ultrasound image of the breast provided in the first aspect.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
according to the method, traditional ultrasonic equipment can be adopted to collect breast ultrasonic images of a plurality of scanning positions, and then the breast ultrasonic images are spliced and fused to obtain a three-dimensional panorama of the breast. During splicing, spatial position registration of different ultrasonic images in the same scanning direction is considered, so that when three-dimensional images in different scanning directions are fused, distortion can be avoided, and the image quality of a panoramic image is improved. Therefore, the small-view limitation of the traditional mammary gland ultrasonic imaging image is solved, the whole panorama of the mammary gland is provided for the mammary gland ultrasonic examination, the user can select any tangent plane from the three-dimensional data to display, and the problem of missed examination of the mammary gland examination is effectively solved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the embodiments of the present application will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a block diagram of an ultrasound apparatus provided in an embodiment of the present application;
fig. 2a is a schematic flowchart of a method for acquiring an ultrasound breast three-dimensional panoramic image according to an embodiment of the present application;
fig. 2b is a schematic view of sector area division provided in the embodiment of the present application;
fig. 3a is a flowchart of spatial position registration of different cross-sectional images according to an embodiment of the present application;
FIG. 3b is a schematic diagram of a fused cross-sectional image provided by an embodiment of the present application;
fig. 4a is a flowchart of feature point extraction on a two-dimensional plane according to an embodiment of the present disclosure;
fig. 4b is a schematic diagram of dividing a target image into a plurality of first areas according to an embodiment of the present application;
FIG. 5 is a flowchart of constructing three-dimensional spatial transform coefficients for adjacent scan orientations according to an embodiment of the present application;
fig. 6 is a flowchart for calculating an initial three-dimensional spatial transform coefficient according to an embodiment of the present application;
FIG. 7 is a flow chart of filtering singularities of the second type according to an embodiment of the present disclosure;
fig. 8 is a flowchart of acquiring a third three-dimensional image to be fused according to the embodiment of the present application;
FIG. 9 is a schematic diagram of a three-dimensional image fusion method for adjacent scanning orientations according to an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of an overlap region provided by an embodiment of the present application;
fig. 11 is a schematic diagram of acquiring an ultrasound breast three-dimensional panoramic image provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
In the description of the embodiments of the present application, "/" denotes "or"; for example, A/B may denote A or B. "And/or" merely describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may denote: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more.
The terms "first" and "second" are used for descriptive purposes only and shall not be construed as indicating or implying relative importance or implicitly indicating the number of technical features referred to. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature.
Hereinafter, some terms in the embodiments of the present application are explained to facilitate understanding by those skilled in the art.
SURF (Speeded-Up Robust Features) algorithm: a robust local feature detection and description algorithm used in computer vision for object recognition, image stitching, image registration, and three-dimensional reconstruction.
ICP (Iterative Closest Point) registration algorithm: a data registration method that uses closest-point search to align point sets or free-form surfaces.
Gaussian filtering: a linear smoothing filter suitable for removing Gaussian noise, widely used for noise reduction in image processing. In general terms, Gaussian filtering performs a weighted average over the whole image: the value of each pixel is obtained as a weighted average of its own value and the values of other pixels in its neighborhood.
RANSAC (RANdom SAmple Consensus) algorithm: an algorithm that estimates the parameters of a mathematical model from a sample data set containing outliers, so as to obtain valid sample data.
Haar-like wavelets: through scaling and translation operations, a signal (function) is progressively refined at multiple scales, ultimately achieving fine time resolution at high frequencies and fine frequency resolution at low frequencies; the analysis adapts automatically to the requirements of time-frequency signal analysis, allowing attention to be focused on arbitrary details of the signal.
First-class singular points: assume two images, a first image (such as the cross-sectional image to be processed) and a second image (such as the reference two-dimensional image). Feature points of the cross-sectional image to be processed that have no corresponding matching feature points on the reference two-dimensional image are first-class singular points.
Second-class singular points: assume two images, a first image (such as the three-dimensional image to be processed) and a second image (such as the reference three-dimensional image). A second-class singular point of the first image is a feature point of the first image that has no corresponding matching point in the second image; similarly, a second-class singular point of the second image is a feature point of the second image that has no corresponding matching point in the first image.
In clinical ultrasound applications, the traditional approach is for the physician to acquire breast images with a handheld two-dimensional ultrasound device. The imaging field of view of this approach is small, multiple acquisitions are needed, and each sampling plane is displayed separately; several images must be consulted during clinical diagnosis, which easily causes an incomplete field of view and possible missed diagnoses.
In view of the above drawbacks, the embodiments of the present application propose a method for generating a three-dimensional panoramic ultrasound image of the breast and an ultrasound device. The inventive concept of the method can be summarized as follows: the projection plane of the breast is divided into a plurality of sector areas, each corresponding to one scanning orientation. During the ultrasound examination, for each scanning orientation, the physician scans the breast with the probe from the bottom toward the nipple to obtain a three-dimensional image of that scanning orientation. The three-dimensional image of one scanning orientation comprises a plurality of cross-sectional images, each of which can be regarded as a two-dimensional image. Because different cross-sectional images may partially overlap or be spatially stretched when forming the second three-dimensional image, the cross-sectional images of the same scanning orientation must undergo not only noise reduction but also spatial position registration, so that their actual spatial positions are determined and distortion is reduced before the three-dimensional images of different scanning orientations are fused into the three-dimensional panoramic image. The three-dimensional images of adjacent scanning orientations are then fused to obtain the three-dimensional panoramic image. With the method provided in this application, the three-dimensional panoramic image overcomes the small-field-of-view limitation of conventional breast ultrasound imaging and provides a complete three-dimensional ultrasound view of the breast for breast ultrasound examination; the physician can visually inspect the entire breast tissue structure of the examinee through the panoramic image, enabling more accurate and efficient screening for breast disease.
Having introduced the design concept of the embodiments of the present application, some application scenarios to which the technical solution of the embodiments can be applied are briefly described below. It should be noted that these scenarios are only used to illustrate the embodiments of the present application and are not limiting. In specific implementations, the technical solution provided by the embodiments of the present application can be applied flexibly according to actual needs.
Referring to fig. 1, a block diagram of an ultrasound apparatus provided in an embodiment of the present application is shown.
It should be understood that the ultrasound device 100 shown in fig. 1 is merely an example, and that the ultrasound device 100 may have more or fewer components than shown in fig. 1, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
A block diagram of a hardware configuration of an ultrasound apparatus 100 according to an exemplary embodiment is exemplarily shown in fig. 1.
As shown in fig. 1, the ultrasound apparatus 100 may include, for example: a processor 110, a memory 120, a display unit 130 and an ultrasound image acquisition device 140; wherein:
an ultrasound image acquisition means 140 for emitting an ultrasound beam;
a display unit 130 for displaying an ultrasound image;
the memory 120 is configured to store data required for ultrasound imaging, which may include software programs, application interface data, and the like;
the processor 110 is connected to the ultrasound image obtaining apparatus 140 and the display unit 130, and configured to execute the method for generating the ultrasound breast three-dimensional panorama provided by the embodiment of the present application.
To further explain the technical solutions provided by the embodiments of the present application, the following detailed description is made with reference to the accompanying drawings and the specific embodiments. Although the embodiments of the present application provide method operation steps as shown in the following embodiments or figures, more or fewer operation steps may be included in the methods based on conventional or non-inventive efforts. In steps where no necessary causal relationship exists logically, the order of execution of these steps is not limited to the order of execution provided by the embodiments of the present application.
Fig. 2a is a schematic flow chart of a method for generating an ultrasound breast three-dimensional panorama provided in an embodiment of the present application, including the following steps:
in step 201, a projection plane of a breast area is divided into a plurality of sector areas with a nipple as a center, each sector area corresponds to a scanning direction, and a breast part with a projection point falling in the same sector area is used as a breast block.
In the circular area shown in fig. 2b, the center point is the projection point of the nipple, and the circular area can be divided into, for example, 4, 8, or 12 sector areas; the present application does not limit the number. The specific division can be determined by the imaging range covered by each scan of the probe: for example, if four scans can cover the entire breast, the area is divided into 4 sectors; if 5 scans are needed, into 5 sectors; and if 8 scans are needed, into 8 sectors. The dashed lines shown in fig. 2b are the central axes of the sectors, each representing the central position of one scanning orientation.
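For illustration only (the patent itself contains no code), the sector assignment described above can be sketched in Python; the function name, the planar coordinate convention, and the choice of 8 sectors are assumptions made for this example:

```python
import numpy as np

def assign_sector(points_xy, nipple_xy, num_sectors=8):
    """Assign each projected breast point to a sector centered on the nipple.

    points_xy: (N, 2) array of projection-plane coordinates.
    nipple_xy: (2,) coordinates of the projected nipple (sector center).
    num_sectors: number of equal angular sectors (e.g. 4, 5 or 8).
    Returns an (N,) array of sector indices in [0, num_sectors).
    """
    d = np.asarray(points_xy) - np.asarray(nipple_xy)
    angles = np.mod(np.arctan2(d[:, 1], d[:, 0]), 2 * np.pi)  # angle in [0, 2*pi)
    width = 2 * np.pi / num_sectors                            # angular width of one sector
    return (angles // width).astype(int)

# Example: points on opposite sides of the nipple fall into different sectors.
sectors = assign_sector([[10.0, 0.0], [-10.0, 0.0]], nipple_xy=[0.0, 0.0], num_sectors=8)
print(sectors)  # -> [0 4]
```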
In step 202, for each scanning position, performing ultrasound imaging from the bottom of the breast block of each scanning position to the direction of the nipple to obtain a first three-dimensional image of each scanning position, wherein the first three-dimensional image includes a plurality of cross-sectional images of each scanning position.
In step 203, the first three-dimensional image of each scanning direction is subjected to noise reduction processing, so as to obtain a second three-dimensional image.
In the embodiment of the present application, the images acquired by the ultrasound probe contain noise speckle or distortion. Therefore, before spatial position registration is performed on the cross-sectional images of the first three-dimensional image, noise reduction must be applied to them. The specific processing adopts the Gaussian filtering of formula (1); other filtering methods are also applicable to the embodiments of the present application.
G(x, y) = (1 / (2πσ²)) · exp(-(x² + y²) / (2σ²)) ......(1)

where x and y are the coordinates of a pixel point in the image, G(x, y) is the Gaussian weight corresponding to the coordinates (x, y), and σ is the standard deviation of the Gaussian kernel.
When the cross-sectional images are Gaussian filtered, a discretely distributed Gaussian kernel template such as the one in Table 1 can be used, with a normalization weight of 1/273. Following the Gaussian kernel template of Table 1, the central element of the convolution kernel is moved over each cross-sectional image of the first three-dimensional image, and the kernel template is weighted and summed with the corresponding pixel points of the image.

Table 1 (the standard 5 × 5 discrete Gaussian kernel whose entries sum to 273):

1   4   7   4   1
4  16  26  16   4
7  26  41  26   7
4  16  26  16   4
1   4   7   4   1

Gaussian denoising of the cross-sectional images of the first three-dimensional image yields the second three-dimensional image.
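As a hedged illustration of this denoising step, the following Python sketch convolves every cross-sectional slice of the first three-dimensional image with the 1/273 kernel of Table 1; the function name and the use of scipy.ndimage are assumptions for the example, not the patent's implementation:

```python
import numpy as np
from scipy.ndimage import convolve

# 5x5 discrete Gaussian kernel from Table 1, normalized by 1/273.
GAUSS_KERNEL = np.array([
    [1,  4,  7,  4, 1],
    [4, 16, 26, 16, 4],
    [7, 26, 41, 26, 7],
    [4, 16, 26, 16, 4],
    [1,  4,  7,  4, 1],
], dtype=np.float64) / 273.0

def denoise_volume(first_volume):
    """Apply the Gaussian kernel to every cross-sectional slice of a 3D volume.

    first_volume: (num_slices, H, W) array, one slice per cross-sectional image.
    Returns the denoised (second) volume of the same shape.
    """
    out = np.empty_like(first_volume, dtype=np.float64)
    for k, slice_img in enumerate(first_volume):
        out[k] = convolve(slice_img.astype(np.float64), GAUSS_KERNEL, mode="nearest")
    return out
```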
In step 204, the spatial position registration is performed on the different cross-sectional images of the second three-dimensional image, so as to obtain a third three-dimensional image of each scanning position.
Since the image spaces of different cross-sectional images are different, the spatial position registration of different cross-sectional images requires the operation shown in fig. 3a, which specifically includes the following steps.
In step 301, the cross-sectional images to be processed are sequentially acquired according to the scanning sequence, and step 302 to step 305 are performed on the cross-sectional images to be processed.
In step 302, feature points of the cross-sectional image to be processed and the reference two-dimensional image are extracted, respectively.
In one embodiment, for feature point extraction on the two-dimensional plane, the application treats the cross-sectional image to be processed and the reference two-dimensional image as target images and applies the Speeded-Up Robust Features (SURF) algorithm shown in fig. 4a to the target images to extract feature points, including the following steps:
in step 401, a target image is divided into a plurality of first regions according to a preset size.
In some embodiments, the preset size of the first region may be 40 pix (pixels) × 40 pix; this parameter can be set by the physician in specific implementations. The first region is slid in steps to obtain a plurality of first regions. Fig. 4b shows the positional relationship between first region 1 and first region 2 obtained after sliding to the left by one step, where the step length is smaller than the side length of the first region and the part enclosed by the dotted line in the figure is the overlap produced by the shift of the first region. In the same way that first region 2 is obtained from first region 1 by sliding one step, sliding continues through first region n-1 to first region n, so that the n first regions cover all pixel points of the target image.
In step 402, each first region is divided into a plurality of region blocks, and a feature vector of each region block is extracted, respectively.
In step 403, feature vectors of a plurality of region blocks of the same first region are added to obtain feature points of the same first region.
In the present application, if the feature vector of a whole first region were extracted directly, the amount of computation would be large and the error would be large; the present application therefore divides each first region into a plurality of region blocks, so that the region processed by the processor at a time is smaller, the amount of computation is reduced, and processing is faster. After the feature points of the cross-sectional image to be processed and of the reference two-dimensional image have been obtained, step 303 is executed.
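The region-and-block extraction of steps 401-403 can be illustrated with the simplified Python sketch below; the gradient-sum descriptor stands in for the full SURF/Haar-wavelet responses and, like the function and parameter names, is an assumption for illustration rather than the patent's implementation:

```python
import numpy as np

def region_features(image, region=40, step=20, block=10):
    """Simplified sketch of steps 401-403: slide overlapping first regions over the
    image, split each region into blocks, and sum per-block gradient descriptors.

    image: 2D array (one cross-sectional image).
    region: side length of a first region in pixels (40 pix assumed here).
    step: sliding step, smaller than the region side so neighbours overlap.
    block: side length of a region block.
    Returns a list of (x, y, descriptor) tuples, one per first region.
    """
    gy, gx = np.gradient(image.astype(np.float64))
    feats = []
    for y in range(0, image.shape[0] - region + 1, step):
        for x in range(0, image.shape[1] - region + 1, step):
            desc = np.zeros(4)
            for by in range(y, y + region, block):
                for bx in range(x, x + region, block):
                    bgx = gx[by:by + block, bx:bx + block]
                    bgy = gy[by:by + block, bx:bx + block]
                    # Haar-like block descriptor: signed and absolute gradient sums.
                    desc += [bgx.sum(), bgy.sum(), np.abs(bgx).sum(), np.abs(bgy).sum()]
            feats.append((x, y, desc))
    return feats
```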
In step 303, a registration operation is performed on the feature points of the cross-sectional image to be processed and the reference two-dimensional image, and the first-class singular points are removed from the cross-sectional image to be processed and from the reference two-dimensional image.
Here, feature points of the cross-sectional image to be processed that have no matching feature points on the reference two-dimensional image are called first-class singular points.
In step 304, a rigid transformation coefficient between the cross-sectional image to be processed and the reference two-dimensional image is constructed based on the remaining feature points, and the rigid transformation coefficient is used as a two-dimensional space transformation coefficient of the cross-sectional image to be processed and the reference two-dimensional image.
In the embodiment of the application, when the rigid transformation coefficient between the cross-sectional image to be processed and the reference two-dimensional image is established, b pairs of feature points are randomly selected each time from the remaining matched feature-point pairs to calculate a matching model between the feature-point pairs; the random selection is repeated R times in total to obtain a more accurate matching model. The matching model is the rigid transformation coefficient between the cross-sectional image to be processed and the reference two-dimensional image.
Feature-point pairs with large errors are filtered out of all the feature-point pairs, and step 305 is then performed on the cross-sectional image to be processed and the reference two-dimensional image.
In step 305, the cross-sectional image to be processed is multiplied by the two-dimensional spatial transform coefficient to obtain a reference two-dimensional image of the next cross-sectional image to be processed, and the multiplication result is used as the cross-sectional image in the third three-dimensional image, wherein the first cross-sectional image arranged according to the scanning time sequence is the reference two-dimensional image of the second cross-sectional image.
As shown in fig. 3b, taking three cross-sectional images of the same scanning orientation as an example: assume the second cross-sectional image is the cross-sectional image to be processed and the first cross-sectional image is the first reference two-dimensional image T1; multiplying the second cross-sectional image by the first two-dimensional spatial transformation coefficient between the second and first cross-sectional images yields the second reference two-dimensional image T2.
The third cross-sectional image is then taken as the cross-sectional image to be processed, and multiplying it by the second two-dimensional spatial transformation coefficient between the third cross-sectional image and the second reference two-dimensional image yields the third reference two-dimensional image T3. Proceeding in this way gives the image sequence of reference two-dimensional images (T1, T2, T3), which forms the third three-dimensional image of the scanning orientation, as sketched in the code below.
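A minimal Python sketch of this chaining follows; it assumes an external estimate_transform callable (which could, for example, be built on the matched feature points via cv2.estimateAffinePartial2D) and OpenCV's warpAffine, which are illustrative choices rather than the patent's implementation:

```python
import numpy as np
import cv2

def align_slices(slices, estimate_transform):
    """Chain the per-slice 2D registrations of steps 301-305.

    slices: list of 2D cross-sectional images in scanning-time order.
    estimate_transform: callable(moving, reference) -> 2x3 rigid transform matrix.
    Returns the aligned slice sequence (T1, T2, ...) forming the third 3D image.
    """
    aligned = [slices[0]]                      # T1: the first slice is its own reference
    reference = slices[0]
    for moving in slices[1:]:
        M = estimate_transform(moving, reference)   # 2D spatial transformation coefficient
        h, w = reference.shape
        warped = cv2.warpAffine(moving, M, (w, h))  # "multiply" the slice by the transform
        aligned.append(warped)
        reference = warped                          # Tk becomes the reference for slice k+1
    return np.stack(aligned)
```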
In the present application, the cross-sectional images scanned by the ultrasound device are two-dimensional images, and the number of cross-sectional images is not necessarily the same for each scanning orientation; directly fusing the images of each scanning orientation would therefore introduce errors. Through the steps shown in fig. 3a, the plurality of cross-sectional images of each scanning orientation are fused into a three-dimensional image of that orientation, which avoids errors when the third three-dimensional images of adjacent scanning orientations are subsequently fused.
In step 205, three-dimensional spatial transform coefficients between the third three-dimensional images of adjacent scanning orientations are constructed.
In step 206, the third three-dimensional images of the adjacent scanning orientations are fused based on the three-dimensional spatial transformation coefficients of the adjacent scanning orientations, so as to obtain a three-dimensional panorama of the breast area.
The above describes the steps for obtaining the three-dimensional panoramic image of the breast area. In specific implementations, the plurality of cross-sectional images of each scanning orientation are first fused into the third three-dimensional image of that orientation, which guarantees that all data of the scanning orientation are contained in the third three-dimensional image; then, when the third three-dimensional images of adjacent scanning orientations are fused based on the spatial transformation parameters of the adjacent orientations, the error of the fusion process is reduced and the processor's computation for feature-point matching is smaller. Moreover, because the third three-dimensional images of adjacent scanning orientations have overlapping regions, the fused images show no discontinuities or visible seams in the overlap, ensuring that physicians can observe the breast completely from all angles.
In an embodiment of the present application, when the third three-dimensional images of adjacent scanning orientations are fused, the fusion is iterated over the scanning orientations in the order of the target direction, based on the three-dimensional spatial transformation coefficients of the adjacent scanning orientations, until the fusion of the last third three-dimensional image is completed; in each fusion, one third three-dimensional image is fused into the fused sub-image obtained from the previous fusion. The present application uses an ICP (Iterative Closest Point) registration algorithm to construct the three-dimensional spatial transformation coefficients of adjacent scanning orientations; the specific procedure is shown in fig. 5 and comprises the following steps:
in step 501, the scanning directions are ordered according to the target direction, and the third three-dimensional images to be processed are sequentially obtained.
In step 502, a rigid transformation coefficient between the third three-dimensional image to be processed and the reference three-dimensional image is determined as the initial three-dimensional spatial transformation coefficient; the first third three-dimensional image obtained in the target direction is the reference three-dimensional image of the second one, and from the second third three-dimensional image onward, the reference three-dimensional image of the third three-dimensional image to be processed is the fused sub-image obtained from the previous fusion.
In this application, the third three-dimensional image to be processed and the reference three-dimensional image are calculated in a three-dimensional space, and a method for calculating an initial three-dimensional space transformation coefficient is shown in fig. 6, and specifically includes the following steps:
in step 601, an overlap region between the third three-dimensional image to be processed and the reference three-dimensional image is determined.
In step 602, feature points in an overlapping area in the third three-dimensional image to be processed are extracted as a first group of feature points.
In step 603, feature points in the overlapping region of the reference three-dimensional image are extracted as a second set of feature points.
In this application, the order of step 602 and step 603 is not limited to the order described here; step 603 may be executed before step 602. After the first and second groups of feature points have been extracted, steps 606 to 610 are iterated; in step 604, i denotes the iteration count, and the upper limit of the number of iterations is set to a, where a is a positive integer. In step 605, it is determined whether the iteration count has reached this upper limit; if so, step 612 is executed, otherwise step 606 is executed.
In step 606, m pairing points are randomly selected from the first set of feature points and the second set of feature points.
In step 607, a transformation matrix is determined based on the m pairing points.
In step 608, for each pairing point other than the m pairing points, a projection error between the pairing points is determined based on the transformation matrix.
In step 609, after the projection error between the matched point pairs has been determined, a pair is added to the interior point set if its error is smaller than the error threshold; the interior point set also includes the m selected pairs.
In step 610, after the interior point set has been obtained, if the number of its elements is greater than that of the optimal interior point set obtained in the previous iteration, the optimal interior point set is updated to the interior point set of the current iteration.
After steps 606 to 610 have been performed, the value of i is incremented by 1 in step 611 to record one completed iteration, and the process then returns to step 604 to determine whether i has reached a.
After steps 606-610 are performed a times iteratively, the optimal interior point set is output in step 612, and the initial three-dimensional spatial transform coefficients are calculated based on the optimal interior point set.
The overlap region between the third three-dimensional image to be processed and the reference three-dimensional image is shown in fig. 10. When the initial three-dimensional spatial transformation coefficient is calculated, the feature points of the third three-dimensional image and of the reference three-dimensional image do not necessarily match exactly, and the best match on the reference three-dimensional image for a feature point of the third three-dimensional image is not known in advance; the initial three-dimensional spatial transformation coefficient is therefore calculated before the third three-dimensional image and the reference three-dimensional image are fused. The RANdom SAmple Consensus (RANSAC) algorithm is used for this calculation; because the result of randomly drawing m matched point pairs once is subject to chance, the drawing is repeated a times in this application for rigor, which ensures the reliability of the output result. A minimal sketch of such a RANSAC estimate is given below.
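The sketch below illustrates a RANSAC estimate of the initial rigid transform between matched 3D feature points; the Kabsch-style least-squares fit, the parameter values, and the function names are assumptions for illustration, not the patent's implementation:

```python
import numpy as np

def fit_rigid(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q (Kabsch)."""
    cp, cq = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def ransac_rigid(P, Q, iters=100, m=4, err_thresh=2.0):
    """RANSAC sketch for the initial 3D transform (steps 604-612).

    P, Q: (N, 3) matched feature points from the overlap region of the
    to-be-processed and reference volumes.
    """
    rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(P), dtype=bool)
    for _ in range(iters):                                  # iterate a times
        idx = rng.choice(len(P), m, replace=False)          # m random matched pairs
        R, t = fit_rigid(P[idx], Q[idx])
        err = np.linalg.norm((P @ R.T + t) - Q, axis=1)     # projection error per pair
        inliers = err < err_thresh                          # interior point set
        if inliers.sum() > best_inliers.sum():              # keep the best interior set
            best_inliers = inliers
    return fit_rigid(P[best_inliers], Q[best_inliers])      # refit on the optimal set
```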
After the initial three-dimensional spatial transformation coefficient has been calculated, the number of iterations is denoted j in step 503, and steps 505 to 513 are executed in a loop for the third three-dimensional image to be processed until n iterations have been performed, where n is a positive integer.
In step 504, it is determined whether j is less than n, if so, step 505 is performed, otherwise, step 514 is performed.
In step 505, the initial three-dimensional space transformation coefficient is used to map the third three-dimensional image to be processed to the reference three-dimensional image according to the formula (2), so as to obtain a feature point set of the third three-dimensional image to be processed on the reference three-dimensional image.
(Formula (2) maps the feature points of the third three-dimensional image to be processed into the coordinate system of the reference three-dimensional image.)

In formula (2), g_x and g_y are the gradient images of the third three-dimensional image to be processed in the horizontal and vertical directions; x_m and y_m are the x-axis and y-axis coordinates of the points on the reference three-dimensional image to which the feature points of the third three-dimensional image to be processed are transformed; g_x(x, y) and g_y(x, y) are the pixel values of the horizontal and vertical gradient images of the third three-dimensional image to be processed; and f(x, y) and g(x, y) are the pixel values of the reference three-dimensional image and of the third three-dimensional image to be processed, respectively.
In step 506, based on the feature point set, coordinate points of feature points in the third three-dimensional image to be processed on the reference three-dimensional image are determined.
In step 507, the coordinate point is used as the center in the reference three-dimensional image, the matching point of the feature point is searched in the specified range, and the correlation degree between the feature point and the matching point is determined.
The specified range is determined as in formula (3), where Pre_C_X and Pre_C_Y are a pair of user-provided parameters that limit the search range in the reference three-dimensional image:

[x_M - |Pre_C_X|, x_M + |Pre_C_X|] × [y_M - |Pre_C_Y|, y_M + |Pre_C_Y|] ......(3)
In step 508, whether the correlation is greater than the correlation threshold is determined according to formula (4); if so, step 509 is executed. In formula (4), G denotes the first point set, NC denotes the correlation between two points, f is the correlation threshold, (x, y) denotes a feature point in the third three-dimensional image to be processed, and (x1, y1) denotes a coordinate point in the reference three-dimensional image:

G = {(x, y) | NC((x, y), (x1, y1)) > f} ......(4)
In step 509, the matching points are retained in the first point set; after all elements of the first point set have been screened against the correlation threshold, the second-class singular points in the first point set are filtered out to obtain the second point set.
In step 510, it is determined whether the number of elements in the second point set is greater than the designated element number, if the number of elements in the second point set is greater than the designated element number, step 511 is executed, and if the number of elements in the second point set is not greater than the designated element number, step 512 is executed.
In step 511, the three-dimensional spatial transform coefficients are updated based on the second set of points.
In step 512, the three-dimensional spatial transformation coefficient of the previous iteration is kept unchanged.
In step 513, the number of iterations j is incremented by 1 to record one iteration, and the process returns to step 504 to determine whether n iterations have been executed.
In the present application, the three-dimensional spatial transformation coefficient obtained after the n iterations is the three-dimensional spatial transformation coefficient between the scanning orientations of the third three-dimensional image to be processed and the previous third three-dimensional image. The steps of fig. 5 are executed repeatedly so that the iteration result becomes more accurate and the pixel points with the maximum correlation between the three-dimensional image to be processed and the reference three-dimensional image are determined. A simplified sketch of this refinement loop follows.
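The Python sketch below is a hedged illustration of this correlation-driven refinement; it reuses the hypothetical fit_rigid helper from the RANSAC sketch above, assumes the feature points lie away from the volume borders, and uses normalized cross-correlation of small patches as the correlation measure, all of which are illustrative assumptions:

```python
import numpy as np

def ncc(a, b):
    """Normalized correlation between two equally sized patches."""
    a, b = a - a.mean(), b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def refine_transform(moving_vol, ref_vol, feats, R, t, n_iters=5,
                     search=4, patch=3, corr_thresh=0.6, min_points=20):
    """Sketch of the iterative refinement in Fig. 5 (steps 505-512).

    feats: (K, 3) integer feature coordinates in the moving (to-be-processed) volume,
    assumed to lie at least patch+search voxels away from the borders.
    R, t: initial rigid transform from the RANSAC step.
    """
    for _ in range(n_iters):                                    # iterate n times
        src, dst = [], []
        for p in feats:
            q = np.round(R @ p + t).astype(int)                 # mapped coordinate point
            a = moving_vol[p[0]-patch:p[0]+patch+1,
                           p[1]-patch:p[1]+patch+1,
                           p[2]-patch:p[2]+patch+1]
            best, best_q = -1.0, None
            for dz in range(-search, search + 1):               # search window around q
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        c = q + [dz, dy, dx]
                        b = ref_vol[c[0]-patch:c[0]+patch+1,
                                    c[1]-patch:c[1]+patch+1,
                                    c[2]-patch:c[2]+patch+1]
                        if b.shape != a.shape:
                            continue
                        corr = ncc(a, b)
                        if corr > best:
                            best, best_q = corr, c
            if best > corr_thresh:                              # keep into the first point set
                src.append(p)
                dst.append(best_q)
        # (second-class singular points would be filtered here, see Fig. 7)
        if len(src) > min_points:                               # enough elements: update transform
            R, t = fit_rigid(np.array(src, float), np.array(dst, float))
    return R, t
```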
Since the second-class singular points cannot be filtered directly and the feature points with the highest correlation in the first point set must be retained, the present application filters the second-class singular points with the method shown in fig. 7, whose specific steps are as follows.
In step 701, a histogram of the correlation corresponding to each matching point is counted with reference to the first coordinate in the first point set.
In step 702, a target range is searched for in the histogram with the maximum correlation as the reference, such that the ratio of the number of matching points within the target range to the number of matching points in the first point set lies within a specified proportion range; in the present application this range is, for example, 70% to 80% of the matching points of the first point set.
In step 703, matching points outside the target range are used as second-type singular points, and the second-type singular points are filtered out from the first point set to obtain a second point set.
In step 704, if the number of elements in the second point set is greater than the specified number of elements, the rigid transformation coefficient is updated by least squares based on the second point set, according to formula (5):

rigid transformation coefficient = (α, C_X, C_Y) ......(5)

In formula (5), α is the least-squares parameter, and C_X and C_Y are user-provided parameters that limit the search range for the second point set.
In the present application, the second point set cannot be updated using only the single pixel point with the maximum correlation in the first point set, because pixel points whose actual correlation is high but whose computed correlation is lowered by calculation errors would then easily be discarded. The present application therefore selects the interval covering 70%-80% of the matching points of the first point set, so that the pixel points with high correlation in the first point set can be extracted to the greatest possible extent. A small sketch of this histogram-based filter is given below.
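The histogram-based filtering of fig. 7 can be sketched in Python as follows; the bin count, the 75% keep ratio, and the function name are assumptions chosen to match the 70%-80% range described above:

```python
import numpy as np

def filter_second_class(matches, corrs, bins=20, keep_ratio=0.75):
    """Sketch of the Fig. 7 filter: keep the matches whose correlation falls in a
    range anchored at the histogram maximum and holding roughly 70%-80% of the
    first point set.

    matches: list of matched point pairs (the first point set).
    corrs: correlation of each pair.
    """
    corrs = np.asarray(corrs, dtype=float)
    counts, edges = np.histogram(corrs, bins=bins)
    lo_bin = bins - 1                       # start from the bin holding the maximum correlation
    kept = counts[lo_bin]
    while lo_bin > 0 and kept < keep_ratio * len(corrs):
        lo_bin -= 1                         # widen the target range toward lower correlations
        kept += counts[lo_bin]
    threshold = edges[lo_bin]
    keep = corrs >= threshold               # pairs outside the range are second-class singular points
    return [m for m, k in zip(matches, keep) if k]
```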
In another embodiment, when the fusions are iterated sequentially in the order of the target direction, the third three-dimensional image to be fused is acquired sequentially in that order, and the method shown in fig. 8 is executed, which specifically includes the following steps:
In step 801, the target rigid transformation coefficient is obtained by multiplying the three-dimensional spatial transformation coefficient between the third three-dimensional image to be fused and its reference three-dimensional image by the target rigid transformation coefficient of the previous fusion; in the first fusion, the target rigid transformation coefficient is the three-dimensional spatial transformation coefficient between the third three-dimensional image of the first scanning orientation and that of the second scanning orientation. The reference three-dimensional image is the fused image obtained from the previous fusion; in the first fusion, the reference three-dimensional image is the third three-dimensional image of the first scanning orientation.
In the embodiment of the present application, the rigid transformation coefficient obtained above only represents the rigid transformation relationship between the previous third three-dimensional image to be processed and the reference three-dimensional image, not the relationship between the current (target) third three-dimensional image to be processed and the reference three-dimensional image. The former must therefore be converted into the latter using formula (6).
Tc = T · Tp ......(6)

In formula (6), Tc is the rigid transformation relationship between the target third three-dimensional image to be processed and the reference three-dimensional image, T is the three-dimensional spatial transformation coefficient between the scanning orientations of the target third three-dimensional image to be processed and the previous third three-dimensional image, and Tp is the rigid transformation relationship between the previous third three-dimensional image to be processed and the reference three-dimensional image. The current reference three-dimensional image is the image obtained by fusing the previous third three-dimensional image to be processed with the reference three-dimensional image.
After the rigid transformation relationship between the target third three-dimensional image to be processed and the reference three-dimensional image has been obtained from formula (6), step 802 is executed. A small helper for composing such transforms is sketched below.
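The following illustrative helper composes rigid transforms as 4 × 4 homogeneous matrices; the function names are assumptions, and the multiplication order simply follows formula (6) as written, so it may need to be swapped depending on the row- or column-vector convention in use:

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a rigid transform (R, t) into a 4x4 homogeneous matrix."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def target_transform(T_pairwise, T_previous):
    # Formula (6): Tc = T * Tp. With the usual column-vector convention the
    # factors may need to be reversed (Tc = Tp @ T); this sketch follows the
    # formula literally.
    return T_pairwise @ T_previous
```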
In step 802, for each pixel point of the third three-dimensional image to be fused, the position coordinate at which the pixel point maps into the reference three-dimensional image of the third three-dimensional image to be fused is determined using the target rigid transformation coefficient. If the position coordinates are integers, go to step 803; if they are non-integer, go to step 804.
In another embodiment, the position coordinates of the pixel points mapped into the reference three-dimensional image of the third three-dimensional image to be fused are determined and the reference three-dimensional image is filled according to formula (7), where inv denotes the matrix inverse of T_c and T_p. The pixel at the corresponding position on the third three-dimensional image to be processed becomes the pixel on the reference three-dimensional image. For points whose coordinates are not integers, bilinear interpolation may optionally be used.
T = inv(T_c) * inv(T_p)    (7)
In step 803, the pixel value of the pixel point is used as the pixel value of the position coordinate in the reference three-dimensional image.
In step 804, pixel values of the position coordinates are determined by interpolation.
The pixel value of the position coordinate is determined based on formula (8):
I_n = w·I_s + (1 − w)·I_m
w = d / PointDistance    (8)
In formula (8), d is the distance from the position coordinate to which the pixel point is mapped in the reference three-dimensional image to the center point of the reference three-dimensional image;
PointDistance is the distance between the position coordinate of the center point of the three-dimensional image to be fused and the center point of the reference three-dimensional image;
I_n is the pixel value of the pixel point after it is mapped into the reference three-dimensional image, I_s is the pixel value at the position coordinate in the reference three-dimensional image, I_m is the pixel value of the pixel point, and w is the weight.
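Steps 802–804 and formula (8) can be sketched per voxel as follows. This is a simplified sketch: nearest-neighbour rounding stands in for the optional bilinear interpolation, the array and parameter names are assumptions, and bounds checking is omitted.

```python
import numpy as np

def map_and_fuse(voxel_xyz, value, T_c, ref_vol, ref_center, src_center_in_ref):
    """Map one voxel of the to-be-fused volume into the reference volume with
    the target rigid coefficient T_c, then blend it following formula (8)."""
    p = T_c @ np.append(np.asarray(voxel_xyz, dtype=float), 1.0)  # homogeneous map
    x, y, z = p[:3]

    if all(float(c).is_integer() for c in (x, y, z)):
        ref_vol[int(x), int(y), int(z)] = value                   # step 803: direct write
        return

    # Step 804: non-integer coordinates -> distance-weighted blend, formula (8).
    d = np.linalg.norm(p[:3] - np.asarray(ref_center, dtype=float))
    point_distance = np.linalg.norm(np.asarray(src_center_in_ref, dtype=float)
                                    - np.asarray(ref_center, dtype=float))
    w = np.clip(d / point_distance, 0.0, 1.0)                     # clipped for this sketch
    xi, yi, zi = (int(round(c)) for c in (x, y, z))               # nearest cell stands in
    i_s = ref_vol[xi, yi, zi]                                     # value already in reference
    ref_vol[xi, yi, zi] = w * i_s + (1.0 - w) * value             # I_n
```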
In this application, directly mapping each pixel point of the third three-dimensional image to be fused onto the reference three-dimensional image in three-dimensional space introduces a large error: after the calculation, some pixel points of the third three-dimensional image to be fused land on the reference three-dimensional image with an offset. By converting the coordinates of each pixel point of the third three-dimensional image to be fused with the method shown in fig. 8, this error is reduced as far as possible and the accuracy of the fused image is ensured.
After the pixel value of the position coordinate is determined, the pixel point on the third three-dimensional image to be fused is mapped to that position in the reference three-dimensional image; where no corresponding pixel point from the third three-dimensional image to be fused exists, the original pixel point of the reference three-dimensional image is kept. After the third three-dimensional image to be fused and the reference three-dimensional image are fused, smoothing is applied so that the fusion boundary region transitions naturally, finally achieving stitching and fusion of adjacent three-dimensional images.
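The boundary smoothing is not specified further, so the following is only a plausible sketch under assumptions: a Gaussian filter (SciPy) applied inside a caller-supplied seam band, with the band definition and sigma chosen freely.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_seam(fused_vol, seam_mask, sigma=1.5):
    """Replace voxels inside the seam band with a Gaussian-smoothed copy so the
    fusion boundary transitions naturally (illustrative sketch only)."""
    out = fused_vol.astype(float).copy()
    smoothed = gaussian_filter(out, sigma=sigma)
    out[seam_mask] = smoothed[seam_mask]
    return out
```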
With this stitching and fusing method, the images of the scanning directions are stitched and fused in sequence in the manner shown in fig. 9 to obtain a three-dimensional panoramic image of the mammary gland area; this panoramic image is then subjected to the annular cropping shown in fig. 11 to obtain a three-dimensional panoramic image of the annular mammary gland area, which is displayed on the display screen.
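Since fig. 11 is described only as an annular cropping of the breast volume, the sketch below simply keeps voxels whose in-plane distance from an assumed nipple axis lies inside an annulus; the axis orientation, the radii, and the zero fill are assumptions for illustration.

```python
import numpy as np

def annular_crop(volume, nipple_xy, r_inner, r_outer):
    """Keep voxels whose distance from the nipple axis (assumed parallel to the
    z axis through nipple_xy) lies in [r_inner, r_outer]; zero out the rest."""
    nx, ny, _ = volume.shape
    xs, ys = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    r = np.hypot(xs - nipple_xy[0], ys - nipple_xy[1])
    mask = (r >= r_inner) & (r <= r_outer)
    return volume * mask[..., np.newaxis]
```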
In some possible embodiments, the aspects of the method for generating an ultrasound breast three-dimensional panoramic image provided in the embodiments of the present application may also be implemented as a program product including program code which, when run on a computer device, causes the computer device to perform the steps of the method according to the various exemplary embodiments described in this specification.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A generation method of an ultrasonic breast three-dimensional panorama, characterized in that a projection plane of a breast area is divided into a plurality of sector areas with the nipple as the center, each sector area corresponds to a scanning direction, and the breast part whose projection points fall into the same sector area is taken as a breast block, the method comprising the following steps:
for each scanning orientation: carrying out ultrasonic imaging from the bottom of the breast block in the scanning direction to the nipple direction to obtain a first three-dimensional image in the scanning direction; the first three-dimensional image comprises a plurality of cross-section images; respectively carrying out noise reduction processing on the first three-dimensional image to obtain a second three-dimensional image; carrying out spatial position registration on different cross section images of the second three-dimensional image to obtain a third three-dimensional image of the scanning direction;
constructing a three-dimensional space transformation coefficient between third three-dimensional images of adjacent scanning directions;
and performing fusion processing on the third three-dimensional images of the adjacent scanning directions based on the three-dimensional space transformation coefficients of the adjacent scanning directions to obtain a three-dimensional panorama of the mammary gland area.
2. The method according to claim 1, wherein the spatial position registration of the different cross-sectional images of the second three-dimensional image to obtain a third three-dimensional image of the scanning orientation includes:
sequentially acquiring cross section images to be processed according to a scanning time sequence, and executing the following operations on the cross section images to be processed:
determining a two-dimensional space transformation coefficient between the cross-section image to be processed and a reference two-dimensional image;
and multiplying the cross section image to be processed by the two-dimensional space transformation coefficient to obtain a reference two-dimensional image of the next cross section image to be processed, and taking the multiplication result as the cross section image in the third three-dimensional image, wherein the first cross section image obtained by scanning time sequence arrangement is the reference two-dimensional image of the second cross section image.
3. The method according to claim 2, wherein the determining two-dimensional spatial transform coefficients between the cross-sectional image to be processed and the reference two-dimensional image specifically comprises:
respectively extracting characteristic points of the cross section image to be processed and the reference two-dimensional image;
performing registration operation on the characteristic points of the cross-section image to be processed and the reference two-dimensional image, and eliminating a first type of singular points in the cross-section image to be processed and eliminating a first type of singular points in the reference two-dimensional image;
and constructing a rigid transformation coefficient between the cross-section image to be processed and the reference two-dimensional image based on the residual feature points to serve as the two-dimensional space transformation coefficient.
4. The method according to any one of claims 1 to 3, wherein the scanning orientations are sorted according to a target direction, the target direction is a clockwise direction or a counterclockwise direction, and the fusing processing of the third three-dimensional images of the adjacent scanning orientations based on the three-dimensional spatial transform coefficients of the adjacent scanning orientations specifically includes:
sequentially carrying out fusion processing on the third three-dimensional images according to the sequence of the target direction until the fusion processing of the last third three-dimensional image is finished;
and fusing one third three-dimensional image into the fused subgraph obtained by the last fusion processing in each fusion processing.
5. The method according to claim 4, wherein constructing three-dimensional spatial transform coefficients between the third three-dimensional images at adjacent scanning orientations comprises:
the scanning directions are sequenced according to the target direction, and third three-dimensional images to be processed are sequentially obtained;
determining a rigid transformation coefficient between the third three-dimensional image to be processed and the reference three-dimensional image as an initial three-dimensional space transformation coefficient; the first three-dimensional image obtained by sequencing according to the target direction is a reference three-dimensional image of the second three-dimensional image, and the reference three-dimensional image of the third three-dimensional image to be processed is a fusion subgraph obtained by the last fusion processing from the second three-dimensional image;
and circularly executing the following operations until iterating for n times aiming at the third three-dimensional image to be processed, wherein n is a positive integer:
mapping the third three-dimensional image to be processed to the reference three-dimensional image by adopting the initial three-dimensional space transformation coefficient to obtain a feature point set of the third three-dimensional image to be processed on the reference three-dimensional image;
determining coordinate points of the feature points in the third three-dimensional image to be processed on the reference three-dimensional image based on the feature point set;
searching matching points of the characteristic points in a specified range by taking the coordinate point as a center in the reference three-dimensional image; determining the correlation degree between the characteristic points and the matching points;
if the correlation degree is greater than the correlation degree threshold value, the matching point is reserved to a first point set;
filtering out second singular points in the first point set to obtain a second point set;
if the number of elements in the second point set is greater than the designated element number, updating the three-dimensional space transformation coefficient based on the second point set, and if the number of elements in the second point set is not greater than the designated element number, keeping the three-dimensional space transformation coefficient of the previous iteration unchanged;
and the three-dimensional space transformation coefficient obtained after iteration for n times is the three-dimensional space transformation coefficient between the scanning directions of the third three-dimensional image to be processed and the last third three-dimensional image.
6. The method according to claim 5, wherein the filtering out a second type of singular points in the first point set to obtain a second point set specifically comprises:
taking the first coordinate in the first point set as a reference, and counting a histogram of the correlation degree corresponding to each matching point;
searching a target range in the histogram by taking the maximum correlation degree as a reference, wherein the ratio of the number of matching points in the target range to the number of matching points of the first point set is in a specified proportion range;
and taking the matching points outside the target range as the second-class singular points, and filtering the second-class singular points from the first point set to obtain the second point set.
7. The method according to claim 5, wherein the sequentially performing the fusion processing on the third three-dimensional images according to the sequence of the target directions specifically comprises:
sequentially acquiring a third three-dimensional image to be fused according to the sequence of the target direction, and executing the following operations:
obtaining a target rigid transformation coefficient based on the three-dimensional space transformation coefficient between the third three-dimensional image to be fused and a reference three-dimensional image of the third three-dimensional image to be fused and the three-dimensional space transformation coefficient used in the last fusion processing, wherein in the first fusion the target rigid transformation coefficient is the three-dimensional space transformation coefficient between the third three-dimensional image of the first scanning direction and the third three-dimensional image of the second scanning direction; the reference three-dimensional image is the fused image obtained in the last fusion processing, and in the first fusion processing the reference three-dimensional image is the third three-dimensional image of the first scanning direction;
determining the position coordinates of each pixel point mapped to the reference three-dimensional image by adopting the target rigid transformation coefficient for each pixel point of the third three-dimensional image to be fused;
if the position coordinate is an integer, taking the pixel value of the pixel point as the pixel value of the position coordinate in the reference three-dimensional image;
and if the position coordinate is a non-integer, determining the pixel value of the position coordinate by adopting an interpolation mode.
8. The method according to claim 7, wherein the determining the pixel value of the position coordinate by interpolation includes:
determining pixel values for the location coordinates based on the following formula:
I_n = w·I_s + (1 − w)·I_m
w = d / PointDistance
wherein d is the distance from the position coordinate of the pixel point mapped to the reference three-dimensional image to the central point of the reference three-dimensional image;
the PointDistance is a distance between a position coordinate of the central point in the three-dimensional image to be fused and the central point of the reference three-dimensional image;
I_n is the pixel value of the pixel point after it is mapped into the reference three-dimensional image, I_s is the pixel value of the position coordinate in the reference three-dimensional image, I_m is the pixel value of the pixel point, and w is the weight.
9. The method according to claim 5, wherein the determining a rigid transformation coefficient between the third three-dimensional image to be processed and the reference three-dimensional image as an initial three-dimensional spatial transformation coefficient specifically comprises:
determining an overlapping area between the third three-dimensional image to be processed and the reference three-dimensional image;
extracting feature points of the overlapping area in the third three-dimensional image to be processed to serve as a first group of feature points;
extracting feature points of the overlapping area in the reference three-dimensional image to serve as a second group of feature points;
the following operations are executed in a loop until iteration is executed a times, wherein a is a positive integer:
randomly selecting m matching points from the first group of characteristic points and the second group of characteristic points;
determining a transformation matrix based on the m pairing points;
for each pairing point other than the m pairing points, determining a projection error between the pairing points based on the transformation matrix;
if the projection error is smaller than the error threshold value, adding the matching point into an inner point set; wherein the m pairing points are also included in the inner point set;
if the number of elements in the inner point set is higher than that of the elements in the optimal inner point set obtained by the last iteration, updating the optimal inner point set into the inner point set of the current iteration;
after iteration is performed for a times, the initial three-dimensional space transformation coefficient is calculated based on the optimal inner point set.
10. An ultrasound device comprising a display, a memory, and a processor, wherein:
the display is used for displaying the ultrasonic image;
the memory to store computer-executable instructions;
the processor is configured to perform the method of any of claims 1-9 based on the computer-executable instructions.
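Claim 9 above describes a RANSAC-style estimation of the initial three-dimensional space transformation coefficient. The sketch below illustrates that flow only; `estimate_transform` and `project` are hypothetical helper callables, and m, a, and the error threshold are free parameters not fixed by the claim.

```python
import numpy as np

def ransac_initial_transform(src_pts, dst_pts, estimate_transform, project,
                             m=4, a=100, error_threshold=2.0, rng=None):
    """RANSAC-style search (cf. claim 9): sample m pairings, fit a candidate
    transform, collect inliers by projection error, keep the best inlier set."""
    rng = rng if rng is not None else np.random.default_rng()
    n = len(src_pts)
    best_inliers = []

    for _ in range(a):                                   # iterate a times
        sample = rng.choice(n, size=m, replace=False)
        T = estimate_transform(src_pts[sample], dst_pts[sample])
        inliers = list(sample)                           # the m pairings count as inliers
        for i in range(n):
            if i in sample:
                continue
            err = np.linalg.norm(project(T, src_pts[i]) - dst_pts[i])
            if err < error_threshold:                    # projection error test
                inliers.append(i)
        if len(inliers) > len(best_inliers):             # larger inlier set wins
            best_inliers = inliers

    # Final coefficient computed from the optimal inlier set.
    return estimate_transform(src_pts[best_inliers], dst_pts[best_inliers])
```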
CN202111192345.8A 2021-10-13 2021-10-13 Generation method of ultrasonic breast three-dimensional panoramic image and ultrasonic equipment Pending CN115965570A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111192345.8A CN115965570A (en) 2021-10-13 2021-10-13 Generation method of ultrasonic breast three-dimensional panoramic image and ultrasonic equipment
PCT/CN2022/109194 WO2023061000A1 (en) 2021-10-13 2022-07-29 Generation method for ultrasonic mammary gland three-dimensional panoramic image and ultrasonic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111192345.8A CN115965570A (en) 2021-10-13 2021-10-13 Generation method of ultrasonic breast three-dimensional panoramic image and ultrasonic equipment

Publications (1)

Publication Number Publication Date
CN115965570A true CN115965570A (en) 2023-04-14

Family

ID=85888308

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111192345.8A Pending CN115965570A (en) 2021-10-13 2021-10-13 Generation method of ultrasonic breast three-dimensional panoramic image and ultrasonic equipment

Country Status (2)

Country Link
CN (1) CN115965570A (en)
WO (1) WO2023061000A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117351037B (en) * 2023-12-04 2024-02-09 合肥合滨智能机器人有限公司 Rotary and parallel moving type equidistant breast scanning track planning method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090010507A1 (en) * 2007-07-02 2009-01-08 Zheng Jason Geng System and method for generating a 3d model of anatomical structure using a plurality of 2d images
CN101455576B (en) * 2007-12-12 2012-10-10 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic wide-scene imaging method, device and system
CN107451983A (en) * 2017-07-18 2017-12-08 中山大学附属第六医院 The three-dimensional fusion method and system of CT images
CN111588464B (en) * 2019-02-20 2022-03-04 忞惪医疗机器人(苏州)有限公司 Operation navigation method and system
CN110675398B (en) * 2019-10-22 2022-05-17 深圳瀚维智能医疗科技有限公司 Mammary gland ultrasonic screening method and device and computer equipment
CN111275617B (en) * 2020-01-09 2023-04-07 云南大学 Automatic splicing method and system for ABUS breast ultrasound panorama and storage medium

Also Published As

Publication number Publication date
WO2023061000A1 (en) 2023-04-20


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination