CN109325943B - Three-dimensional volume measurement method and device


Info

Publication number
CN109325943B
CN109325943B (application CN201811050729.4A)
Authority
CN
China
Prior art keywords
vois
voi
dimensional image
detected
dimensional
Prior art date
Legal status
Active
Application number
CN201811050729.4A
Other languages
Chinese (zh)
Other versions
CN109325943A (en)
Inventor
王雅儒
唐艳红
许龙
向斌
Current Assignee
Sonoscape Medical Corp
Original Assignee
Sonoscape Medical Corp
Priority date
Filing date
Publication date
Application filed by Sonoscape Medical Corp
Priority claimed from CN201811050729.4A
Publication of CN109325943A
Application granted
Publication of CN109325943B

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 7/00 Image analysis
                    • G06T 7/0002 Inspection of images, e.g. flaw detection
                        • G06T 7/0012 Biomedical image inspection
                    • G06T 7/10 Segmentation; Edge detection
                        • G06T 7/11 Region-based segmentation
                        • G06T 7/136 Segmentation; Edge detection involving thresholding
                    • G06T 7/60 Analysis of geometric attributes
                        • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
                • G06T 2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T 2207/10 Image acquisition modality
                        • G06T 2207/10004 Still image; Photographic image
                            • G06T 2207/10012 Stereo images
                    • G06T 2207/20 Special algorithmic details
                        • G06T 2207/20084 Artificial neural networks [ANN]
                    • G06T 2207/30 Subject of image; Context of image processing
                        • G06T 2207/30004 Biomedical image processing
                            • G06T 2207/30096 Tumor; Lesion
    • A HUMAN NECESSITIES
        • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
            • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
                • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
                    • A61B 5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
                        • A61B 5/107 Measuring physical dimensions, e.g. size of the entire body or parts thereof
                            • A61B 5/1073 Measuring volume, e.g. of limbs

Abstract

The application discloses a three-dimensional volume measurement method and device. The method comprises the following steps: obtaining a three-dimensional image of an object to be measured; performing feature extraction on the three-dimensional image by using a preset neural network to obtain feature data of the three-dimensional image; performing convolution calculation on the feature data to obtain a plurality of preliminary volumes of interest (VOIs) corresponding to the object to be measured; filtering and merging the plurality of preliminary VOIs to obtain a target VOI of the object to be measured; segmenting the target VOI to obtain a mask (MASK) of the target VOI; and calculating the three-dimensional volume of the object to be measured based on the MASK of the target VOI.

Description

Three-dimensional volume measurement method and device
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a three-dimensional volume measurement method and apparatus.
Background
Thyroid diseases such as thyroiditis, thyroid adenoma, nodular goiter and thyroid carcinoma usually manifest as changes in volume and form. As a quantitative index for evaluating thyroid size, thyroid volume plays an important role in studying the occurrence and development of thyroid diseases, in their diagnosis and in judging curative effect, and in epidemiological research.
In existing thyroid three-dimensional volume measurement schemes, the maximum boundary-radial-line plane of the tissue to be measured is selected semi-automatically or manually, and the volume of the body to be measured is obtained by selecting certain angles and letting the system integrate automatically; for example, the commonly used three-dimensional ultrasonic computer-aided virtual organ analysis technique is applied to measure the thyroid volume.
However, in the existing schemes a doctor usually has to manually assist the computer in outlining and positioning the thyroid boundary, so the operation during thyroid volume measurement is complex and the measurement efficiency is low.
Disclosure of Invention
In view of this, an object of the present application is to provide a three-dimensional volume measurement method and apparatus, so as to solve the technical problem in the prior art that three-dimensional measurement requires manual assistance to outline the boundary contour of an object, making the operation complex and the measurement inefficient.
The application provides a three-dimensional volume measurement method, which comprises the following steps:
acquiring a three-dimensional image of an object to be detected;
extracting the characteristics of the three-dimensional image by using a preset neural network to obtain the characteristic data of the three-dimensional image;
performing convolution calculation on the feature data to obtain a plurality of preliminary volumes of interest (VOIs) corresponding to the object to be measured;
filtering and combining the plurality of primarily selected VOIs to obtain a target VOI of the object to be detected;
segmenting the target VOI to obtain a mask (MASK) of the target VOI;
and calculating the three-dimensional volume of the object to be measured based on the MASK of the target VOI.
The above method, preferably, acquiring a three-dimensional image of an object to be measured, includes:
obtaining a two-dimensional image sequence of the object to be measured, wherein the two-dimensional image sequence comprises at least two two-dimensional images of the object to be measured;
and aligning the two-dimensional images in the two-dimensional image sequence so as to stack them into a three-dimensional image of the object to be measured.
Preferably, the above method, filtering and merging the multiple primarily selected VOIs to obtain the target VOI of the object to be measured, includes:
selecting N VOIs with the category scores ranked at the top from the plurality of primary VOIs based on the category scores of the primary VOIs, wherein N is a positive integer greater than or equal to 1;
and filtering and combining the N VOIs to obtain the target VOI of the object to be detected.
Preferably, before the filtering and merging process is performed on the N VOIs, the method further includes:
performing regression processing on the N VOIs and corresponding feature data to obtain spatial offsets and scaling amounts of the N VOIs;
and adjusting the position and size of the N VOIs based on the offsets and scaling amounts.
The above method, preferably, further comprises:
and classifying the N VOIs and the corresponding characteristic data to obtain object categories of the N VOIs.
The application also provides a three-dimensional volume measuring device, including:
the image acquisition unit is used for acquiring a three-dimensional image of the object to be detected;
the characteristic extraction unit is used for extracting the characteristics of the three-dimensional image by using a preset neural network to obtain the characteristic data of the three-dimensional image;
the characteristic convolution unit is used for carrying out convolution calculation on the characteristic data to obtain a plurality of initially selected VOIs corresponding to the object to be detected;
the VOI processing unit is used for filtering and combining the plurality of initially selected VOIs to obtain a target VOI of the object to be detected;
the target segmentation unit is used for segmenting the target VOI to obtain a mask (MASK) of the target VOI;
and the volume calculation unit is used for calculating the three-dimensional volume of the object to be measured based on the MASK of the target VOI.
The above apparatus, preferably, the image obtaining unit includes:
the two-dimensional obtaining subunit is used for obtaining a two-dimensional image sequence of the object to be measured, wherein the two-dimensional image sequence comprises at least two two-dimensional images of the object to be measured;
and the three-dimensional obtaining subunit is used for aligning the two-dimensional images in the two-dimensional image sequence so as to stack them into a three-dimensional image of the object to be measured.
In the above apparatus, preferably, the VOI processing unit includes:
a VOI selection subunit, configured to select, from the multiple initially selected VOIs, N VOIs with the category scores ranked first based on the category scores of the initially selected VOIs, where N is a positive integer greater than or equal to 1;
and the VOI processing subunit is used for filtering and combining the N VOIs to obtain the target VOI of the object to be detected.
The above apparatus, preferably, the apparatus further comprises:
the VOI adjusting unit is used for performing regression processing on the N VOIs and the corresponding feature data to obtain spatial offsets and scaling amounts of the N VOIs, and for adjusting the position and size of the N VOIs based on the offsets and scaling amounts.
The above apparatus, preferably, the apparatus further comprises:
and the category determining unit is used for classifying the N VOIs and the corresponding characteristic data to obtain the object categories of the N VOIs.
According to the technical scheme, after the three-dimensional image of the object to be measured such as the thyroid is obtained, the three-dimensional image is subjected to feature extraction including convolution calculation and the like to obtain the initially selected VOI of the object to be measured, the initially selected VOI is filtered and merged to obtain the determined target VOI of the object to be measured, the MASK of the target VOI is obtained after the VOI is segmented, and the three-dimensional volume of the object to be measured is calculated based on the MASK. According to the method and the device, the three-dimensional volume of the object to be measured is calculated through image acquisition and image processing, and the boundary contour of the object to be measured does not need to be manually assisted and sketched, so that the operation complexity is reduced, and the three-dimensional volume measurement efficiency is improved.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a flowchart illustrating an implementation of a three-dimensional volume measurement method according to an embodiment of the present disclosure;
FIGS. 2 to 5 are partial flow charts of a first embodiment of the present application;
fig. 6 is a schematic structural diagram of a three-dimensional volume measuring device according to a second embodiment of the present application;
fig. 7 to 8 are schematic partial structural views of a second embodiment of the present application;
fig. 9 to 10 are schematic structural diagrams of another embodiment of the present application;
fig. 11 and fig. 12 are flow charts of implementing three-dimensional volume measurement by a server according to a third embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments of the present invention. It is obvious that the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart of an implementation of a three-dimensional volume measurement method provided in an embodiment of the present application is shown; the method is applied to an electronic device capable of image and data processing, such as a server or a computer.
In this embodiment, the method may include the steps of:
step 101: and acquiring a three-dimensional image of the object to be detected.
The object to be measured can be thyroid gland or thyroid nodule and other objects needing to measure three-dimensional volume.
It should be noted that the three-dimensional image of the object to be measured may be a three-dimensional image obtained by stacking a plurality of two-dimensional images, or may be a three-dimensional image directly acquired by a three-dimensional image acquisition device.
In one implementation, in order to improve the accuracy and efficiency of subsequent data processing, the data of the three-dimensional image may be preprocessed in this embodiment, for example by cropping, resampling to a uniform size, subtracting the mean value, and normalizing the values, where resampling to a uniform size may be understood as follows: larger three-dimensional volume data is down-sampled in equal proportion, while smaller three-dimensional volume data is up-sampled in equal proportion.
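To make the preprocessing concrete, the following is a minimal sketch in Python, assuming NumPy/SciPy volume data; the target shape, interpolation order and function name are illustrative choices, not values from the patent.

    # Illustrative preprocessing sketch: resample a 3-D volume to a uniform
    # size (down-sampling larger volumes, up-sampling smaller ones), subtract
    # the mean, and normalize the values. Cropping is assumed done upstream.
    import numpy as np
    from scipy.ndimage import zoom

    def preprocess_volume(volume: np.ndarray, target_shape=(64, 64, 64)) -> np.ndarray:
        factors = [t / s for t, s in zip(target_shape, volume.shape)]
        resampled = zoom(volume.astype(np.float32), factors, order=1)  # linear resampling
        resampled -= resampled.mean()          # subtract the mean value
        std = resampled.std()
        return resampled / std if std > 0 else resampled  # numerical normalization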
Step 102: performing feature extraction on the three-dimensional image by using a preset neural network to obtain feature data of the three-dimensional image.
The preset neural network may be a convolutional neural network (CNN) for implementing abstract feature extraction.
Specifically, in this embodiment, step 102 may use a feature extraction network, such as Resnet3D, FCN3D or UNet3D, to perform operations such as convolution, pooling and nonlinear transformation on the three-dimensional image so as to extract a volume-data feature map of the three-dimensional image, i.e. the feature data, which includes information such as the size, shape and spatial relationships of the volume data.
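As an illustration only (the patent names Resnet3D, FCN3D and UNet3D but gives no architecture), a toy PyTorch backbone combining the convolution, pooling and nonlinear-transformation operations mentioned above might look like this; all layer sizes are assumptions.

    import torch
    import torch.nn as nn

    class Tiny3DBackbone(nn.Module):
        """Stand-in for a 3-D feature extraction network (not the patented one)."""
        def __init__(self, in_channels=1, features=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(in_channels, 32, kernel_size=3, padding=1),  # convolution
                nn.ReLU(inplace=True),                                 # nonlinear transformation
                nn.MaxPool3d(2),                                       # pooling
                nn.Conv3d(32, features, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool3d(2),
            )

        def forward(self, volume: torch.Tensor) -> torch.Tensor:
            # volume: (batch, 1, D, H, W) -> feature map (batch, features, D/4, H/4, W/4)
            return self.net(volume)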
Step 103: performing convolution calculation on the feature data to obtain a plurality of preliminary VOIs corresponding to the object to be measured.
The preliminary VOI comprises the category score and the position information of the VOI. The category score of a VOI is a value representing the probability that the VOI belongs to a preset category of the object to be measured, such as the benign-thyroid-nodule category or the malignant-thyroid-nodule category. In the convolution calculation, the intersection-over-union ratio IoU = |preliminary VOI ∩ target VOI| / |preliminary VOI ∪ target VOI| is computed; when IoU is greater than a certain threshold, such as 0.5, the category score for the target VOI category (e.g. benign nodule or malignant nodule) is 1, and otherwise 0. The position information of the VOI refers to the three-dimensional position of the VOI in the three-dimensional image.
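A minimal sketch of the 3-D intersection-over-union computation described above, assuming axis-aligned cuboids given as (z1, y1, x1, z2, y2, x2); the box coordinates and the 0.5 threshold are illustrative.

    def iou_3d(a, b) -> float:
        # Intersection volume along depth, height and width.
        inter = 1.0
        for i in range(3):
            lo, hi = max(a[i], b[i]), min(a[i + 3], b[i + 3])
            if hi <= lo:
                return 0.0
            inter *= hi - lo
        vol = lambda box: (box[3] - box[0]) * (box[4] - box[1]) * (box[5] - box[2])
        return inter / (vol(a) + vol(b) - inter)  # |A ∩ B| / |A ∪ B|

    a = (0, 0, 0, 10, 10, 10)   # hypothetical preliminary VOI
    b = (5, 5, 5, 15, 15, 15)   # hypothetical target VOI
    score = 1 if iou_3d(a, b) > 0.5 else 0  # binary category score as in the text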
In one implementation, a region proposal network (RPN) may be used in this embodiment to perform convolution operations on the feature data, thereby obtaining a plurality of preliminary VOIs of the object to be measured at corresponding positions in the three-dimensional image.
Step 104: filtering and merging the plurality of preliminary VOIs to obtain the target VOI of the object to be measured.
The plurality of preliminary VOIs indicate the category and the spatial position and size of the object to be measured in the three-dimensional image; in this embodiment, the preliminary VOIs are filtered and merged so as to obtain a target VOI that indicates the category and the spatial position and size of the object more accurately. The target VOI appears as a cuboid frame that tightly (at least approximately) encloses the object to be measured.
Step 105: segmenting the target VOI to obtain the MASK of the target VOI.
Specifically, in this embodiment, the target VOI may be segmented by using the segmentation subnet, so as to obtain the MASK of the target VOI.
The segmentation sub-network is used for segmenting the object to be measured, such as a thyroid nodule. Its output is a MASK template representing voxel categories, of size M×M×M, corresponding voxel-by-voxel to the VOI, where the value of a voxel represents the category of the object to be measured; for example, a voxel value of 0 represents non-nodule, 1 represents a benign nodule, and 2 represents a malignant nodule. In this embodiment, the target VOI is segmented by the segmentation sub-network, thereby obtaining a MASK that can represent parameter information such as the shape, size and volume of the object to be measured.
Then, in this embodiment, the MASK of the target VOI may be mapped to the volume data of the three-dimensional image, so as to obtain the MASK of the volume data in the entire three-dimensional image.
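A sketch of this mapping step, assuming the VOI's corner position in volume coordinates is known; the function and parameter names are hypothetical.

    import numpy as np

    def map_mask_to_volume(volume_shape, voi_mask: np.ndarray, origin) -> np.ndarray:
        # Place the VOI-level MASK into an empty full-volume MASK at `origin` (z, y, x).
        full_mask = np.zeros(volume_shape, dtype=voi_mask.dtype)
        z, y, x = origin
        d, h, w = voi_mask.shape
        full_mask[z:z + d, y:y + h, x:x + w] = voi_mask
        return full_mask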
Step 106: calculating the three-dimensional volume of the object to be measured based on the MASK of the target VOI.
Specifically, in this embodiment, parameters such as volume may be calculated automatically from the MASK of the target VOI of the object to be measured: for example, the MASK is binarized so that the target portion belonging to the object to be measured is 1 and the non-target portion is 0, and the voxel volumes of the portion labeled 1 are then accumulated, thereby obtaining the three-dimensional volume of the object to be measured.
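A minimal sketch of this binarize-and-accumulate step; the voxel spacing values are assumed examples.

    import numpy as np

    def mask_volume(mask: np.ndarray, spacing_mm=(0.5, 0.5, 0.5)) -> float:
        binary = (mask > 0)                                           # target = 1, non-target = 0
        voxel_volume = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]  # mm^3 per voxel
        return float(binary.sum()) * voxel_volume                    # accumulated volume in mm^3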
In addition, after the three-dimensional volume of the object to be measured is obtained in this embodiment, the three-dimensional volume is output and provided to the user, for example, the three-dimensional volume of the thyroid nodule is displayed to a doctor or a patient through a display screen to be used as a reference for subsequent treatment.
According to the technical scheme, after the three-dimensional image of the object to be measured, such as the thyroid is obtained, the three-dimensional image is subjected to feature extraction including convolution operation and the like to obtain the initially selected VOI of the object to be measured, the initially selected VOI is filtered and merged to obtain the determined target VOI of the object to be measured, and therefore MASK of the target VOI is obtained after the VOI is segmented, and the three-dimensional volume of the object to be measured is calculated based on the MASK. In the embodiment, the three-dimensional volume of the object to be measured is calculated through image acquisition and image processing, and the boundary profile of the object to be measured does not need to be manually assisted and sketched, so that the operation complexity is reduced, and the efficiency of three-dimensional volume measurement is improved.
In one implementation, step 101 may be implemented by the following steps, as shown in fig. 2:
step 201: a two-dimensional image sequence of the object to be measured is obtained.
The two-dimensional image sequence comprises at least two two-dimensional images of the object to be measured.
For example, in this embodiment, a spatially continuous thyroid two-dimensional image sequence may be acquired by a free arm of the B-mode ultrasound device, and the number of images in the two-dimensional image sequence and the image acquisition interval may be set according to actual requirements.
Step 202: registering the two-dimensional images in the two-dimensional image sequence to stack them into a three-dimensional image of the object to be measured.
Specifically, since a free arm is used for scanning when the two-dimensional image sequence is acquired, the image positions in the sequence may deviate from one another. Therefore, in this embodiment, the images in the sequence may be aligned based on common reference points so that corresponding positions of the two-dimensional images are consistent in space, for example by minimizing the sum of squared differences of corresponding pixels between images; the two-dimensional images are then arranged according to their spatial positions and stacked into the three-dimensional image.
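A minimal registration-and-stacking sketch under the stated criterion (minimizing the sum of squared pixel differences), using a brute-force translation search; the search window is an assumed example and wrap-around at the image borders is ignored for simplicity.

    import numpy as np

    def register_pair(ref: np.ndarray, moving: np.ndarray, window=5) -> np.ndarray:
        best, best_shift = np.inf, (0, 0)
        for dy in range(-window, window + 1):
            for dx in range(-window, window + 1):
                shifted = np.roll(moving, (dy, dx), axis=(0, 1))
                ssd = np.sum((ref.astype(np.float64) - shifted) ** 2)  # sum of squared differences
                if ssd < best:
                    best, best_shift = ssd, (dy, dx)
        return np.roll(moving, best_shift, axis=(0, 1))

    def stack_sequence(frames) -> np.ndarray:
        aligned = [frames[0]]
        for frame in frames[1:]:
            aligned.append(register_pair(aligned[-1], frame))  # align each frame to the previous one
        return np.stack(aligned, axis=0)  # (num_frames, H, W) three-dimensional image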
In one implementation, step 104 may be implemented by the following steps, as shown in FIG. 3:
step 301: based on the category scores of the preliminary selection VOIs, the top N VOIs with category scores sorted are selected from the plurality of preliminary selection VOIs.
Wherein N is a positive integer greater than or equal to 1.
For example, in this embodiment, the preliminary VOIs are sorted from large to small according to their respective category scores, the top N VOIs are selected and the others are removed; the N selected VOIs are those closest to the object to be measured.
Step 302: filtering and merging the N VOIs to obtain the target VOI of the object to be measured.
In this embodiment, non-maximum suppression calculation may be performed on the N VOIs, so as to filter out unsuitable VOIs, and then merge the remaining VOIs, thereby obtaining a more accurate target VOI.
Specifically, in this embodiment, the N VOIs may be sorted from large to small by category score and the highest-scoring VOI_MAX retained; the remaining VOIs are then traversed in a loop, the overlap IoU (Intersection over Union) of each VOI with VOI_MAX is calculated in turn, and a VOI is discarded if its IoU exceeds a preset threshold and retained otherwise. The highest-scoring VOI is then selected from the remaining VOIs and the above steps are repeated until the merging of VOIs is complete. This yields the bounding box of the object to be measured, that is, a VOI that just encloses the object to be measured.
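A sketch of this non-maximum suppression loop, reusing the iou_3d helper from the sketch after step 103; the threshold value is an example.

    def nms_3d(vois, scores, iou_threshold=0.5):
        order = sorted(range(len(vois)), key=lambda i: scores[i], reverse=True)
        keep = []
        while order:
            best = order.pop(0)   # VOI_MAX: the highest-scoring remaining VOI
            keep.append(best)
            order = [i for i in order
                     if iou_3d(vois[best], vois[i]) <= iou_threshold]  # discard heavy overlaps
        return [vois[i] for i in keep]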
In one implementation, to improve the accuracy of the three-dimensional volume measurement, the spatial parameters of the VOI may be adjusted by performing a regression process on the VOI in this embodiment, as shown in fig. 4:
prior to step 302, the method further comprises:
step 303: and performing regression processing on the N VOIs and corresponding feature data to obtain offset and scaling quantities of the N VOIs on the space, and adjusting the positions and the sizes of the N VOIs on the basis of the offset and the scaling quantities.
In this embodiment, the regression sub-network may be used to perform regression on the N VOIs and the corresponding feature data, yielding the offset and scaling amount of each VOI in space; the cuboid frame of the VOI is then shifted in position and adjusted in length, width and height, so that a more accurate spatial position of the VOI is obtained. A more accurate target VOI and corresponding MASK can then be obtained, improving the accuracy of the three-dimensional volume measurement.
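The patent does not specify how the offsets and scaling amounts parameterize the cuboid frame; the sketch below assumes the common box-regression convention of shifting the center by offset times size and scaling the size exponentially.

    import math

    def adjust_voi(center, size, offsets, scales):
        # center, size: (z, y, x) of the cuboid frame; offsets, scales: regressed values
        new_center = tuple(c + o * s for c, o, s in zip(center, offsets, size))
        new_size = tuple(s * math.exp(k) for s, k in zip(size, scales))
        return new_center, new_size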
In an implementation manner, in order to implement classification of an object to be measured, in this embodiment, the VOI may be classified, as shown in fig. 5, before step 302, the following steps may also be included:
step 304: and classifying the N VOIs and the corresponding characteristic data to obtain object classes of the N VOIs.
Specifically, in this embodiment a classification sub-network may be used to classify the N VOIs and the corresponding feature data. Classification yields the category information of each VOI, represented by a vector in which each number is the probability of a category to which the VOI may belong, such as thyroid benign nodule, thyroid malignant nodule or non-nodule; the category with the maximum probability is taken as the category of the VOI.
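A tiny illustration of taking the maximum-probability category from such a vector; the probabilities and class names are made-up examples.

    import numpy as np

    probs = np.array([0.15, 0.70, 0.15])  # [non-nodule, benign nodule, malignant nodule]
    classes = ["non-nodule", "benign nodule", "malignant nodule"]
    print(classes[int(np.argmax(probs))])  # -> "benign nodule"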
Referring to fig. 6, a schematic structural diagram of a three-dimensional volume measuring device according to a second embodiment of the present disclosure is shown, where the device may be disposed in an electronic device capable of performing image and data processing, such as a server or a computer.
In this embodiment, the apparatus may include the following structure:
an image obtaining unit 601, configured to obtain a three-dimensional image of the object to be measured.
The object to be measured can be thyroid gland or thyroid nodule and other objects needing to measure three-dimensional volume.
It should be noted that the three-dimensional image of the object to be measured may be a three-dimensional image obtained by stacking a plurality of two-dimensional images, or may be a three-dimensional image directly acquired by a three-dimensional image acquisition device.
In an implementation manner, in order to improve accuracy and measurement efficiency of subsequent data processing, in this embodiment, preprocessing may be performed on data of the three-dimensional image, such as clipping, resampling to a uniform size, subtracting an average value, and performing numerical value normalization, where the resampling to the uniform size may be understood as: the larger three-dimensional image volume data is down-sampled in equal proportion, while the smaller three-dimensional image volume data is up-sampled in equal proportion.
A feature extraction unit 602, configured to perform feature extraction on the three-dimensional image by using a preset neural network, so as to obtain feature data of the three-dimensional image.
The preset neural network may be a convolutional neural network (CNN) for implementing abstract feature extraction.
Specifically, in this embodiment, the feature extraction unit 602 may use a feature extraction network, such as Resnet3D, FCN3D or UNet3D, to perform operations such as convolution, pooling and nonlinear transformation on the three-dimensional image, thereby extracting a volume-data feature map of the three-dimensional image, i.e. the feature data, which includes information such as the size, shape and spatial relationships of the volume data.
And the feature convolution unit 603 is configured to perform convolution calculation on the feature data to obtain a plurality of initially selected VOIs corresponding to the object to be detected.
The preliminary VOI comprises the category score and the position information of the VOI. The category score of a VOI is a value representing the probability that the VOI belongs to a preset category of the object to be measured, such as the benign-thyroid-nodule category or the malignant-thyroid-nodule category. In the convolution calculation, the intersection-over-union ratio IoU = |preliminary VOI ∩ target VOI| / |preliminary VOI ∪ target VOI| is computed; when IoU is greater than a certain threshold, such as 0.5, the category score for the target VOI category (e.g. benign nodule or malignant nodule) is 1, and otherwise 0. The position information of the VOI refers to the three-dimensional position of the VOI in the three-dimensional image.
In one implementation, the feature convolution unit 603 may use a region proposal network (RPN) to perform convolution operations on the feature data, thereby obtaining a plurality of preliminary VOIs of the object to be measured at corresponding positions in the three-dimensional image.
And the VOI processing unit 604 is configured to filter and combine the multiple primarily selected VOIs to obtain a target VOI of the object to be detected.
The plurality of preliminary VOIs indicate the category and the spatial position and size of the object to be measured in the three-dimensional image; the VOI processing unit 604 filters and merges the preliminary VOIs, thereby obtaining a target VOI that indicates the category and the spatial position and size of the object more accurately. The target VOI appears as a cuboid frame that tightly (at least approximately) encloses the object to be measured.
And an object segmentation unit 605, configured to segment the target VOI to obtain a MASK of the target VOI.
Specifically, in this embodiment, the target segmentation unit 605 may segment the target VOI by using the segmentation subnets to obtain a MASK of the target VOI.
The segmentation sub-network is used for segmenting the object to be measured, such as a thyroid nodule. Its output is a MASK template representing voxel categories, of size M×M×M, corresponding voxel-by-voxel to the VOI, where the value of a voxel represents the category of the object to be measured; for example, a voxel value of 0 represents non-nodule, 1 represents a benign nodule, and 2 represents a malignant nodule. In this embodiment, the target VOI is segmented by the segmentation sub-network, thereby obtaining a MASK that can represent parameter information such as the shape, size and volume of the object to be measured.
Then, in this embodiment, the MASK of the target VOI may be mapped to the volume data of the three-dimensional image, so as to obtain the MASK of the volume data in the entire three-dimensional image.
A volume calculation unit 606, configured to calculate a three-dimensional volume of the object to be measured based on the MASK of the target VOI.
Specifically, in this embodiment, the volume calculation unit 606 may automatically calculate parameters such as volume from the MASK of the target VOI of the object to be measured: for example, the MASK is binarized so that the target portion belonging to the object to be measured is 1 and the non-target portion is 0, and the voxel volumes of the portion labeled 1 are then accumulated, thereby obtaining the three-dimensional volume of the object to be measured.
In addition, after the three-dimensional volume of the object to be measured is obtained in this embodiment, the three-dimensional volume is output and provided to the user, for example, the three-dimensional volume of the thyroid nodule is displayed to a doctor or a patient through a display screen to be used as a reference for subsequent treatment.
According to the above technical solution, in the three-dimensional volume measuring device provided in the second embodiment of the present application, after the three-dimensional image of the object to be measured, such as the thyroid gland, is obtained, the three-dimensional image is subjected to feature extraction, such as convolution calculation, to obtain the initially selected VOI of the object to be measured, and then the initially selected VOI is filtered and merged to obtain the determined target VOI of the object to be measured, so that the MASK of the target VOI is obtained after the VOI is segmented, and the three-dimensional volume of the object to be measured is calculated based on the MASK. In the embodiment, the three-dimensional volume of the object to be measured is calculated through image acquisition and image processing, and the boundary profile of the object to be measured does not need to be manually assisted and sketched, so that the operation complexity is reduced, and the efficiency of three-dimensional volume measurement is improved.
In one implementation, the image obtaining unit 601 may be implemented by the following structure, as shown in fig. 7:
a two-dimensional obtaining subunit 611, configured to obtain a two-dimensional image sequence of the object to be measured.
The two-dimensional image sequence comprises at least two two-dimensional images of the object to be measured.
For example, in this embodiment, the two-dimensional obtaining subunit 611 may acquire a spatially continuous thyroid two-dimensional image sequence through the free arm of a B-mode ultrasound device, and the number of images in the sequence and the image acquisition interval may be set according to actual requirements.
A three-dimensional obtaining subunit 612, configured to perform registration on the two-dimensional images in the two-dimensional image sequence, so as to stack a three-dimensional image of the object to be measured.
Specifically, since free-arm scanning is used when the two-dimensional image sequence is acquired, the image positions in the sequence may deviate from one another. Therefore, in this embodiment, the three-dimensional obtaining subunit 612 may align the images in the sequence based on common reference points so that corresponding positions of the two-dimensional images are consistent in space, for example by minimizing the sum of squared differences of corresponding pixels between images; the two-dimensional images are then arranged according to their spatial positions and stacked into the three-dimensional image.
In one implementation, the VOI processing unit 604 may be implemented by the following structure, as shown in fig. 8:
a VOI selecting subunit 641, configured to select, from the multiple initially selected VOIs, N VOIs with the top-ranked category scores based on the category scores of the initially selected VOIs.
Wherein N is a positive integer greater than or equal to 1.
For example, in this embodiment, the VOI selection subunit 641 may sort the preliminary VOIs from large to small according to their respective category scores, select the top N VOIs and remove the others; the N selected VOIs are those closest to the object to be measured.
And a VOI processing subunit 642 configured to filter and combine the N VOIs to obtain a target VOI of the object to be detected.
In this embodiment, non-maximum suppression calculation may be performed on the N VOIs, so as to filter out unsuitable VOIs, and then merge the remaining VOIs, thereby obtaining a more accurate target VOI.
Specifically, in this embodiment, the VOI processing subunit 642 may first sort the N VOIs from large to small by category score and retain the highest-scoring VOI_MAX, then cycle through the remaining VOIs, calculating in turn the overlap IoU (Intersection over Union) of each VOI with VOI_MAX and discarding a VOI if its IoU exceeds a preset threshold, retaining it otherwise; the highest-scoring VOI is then selected from the remaining VOIs and the above steps are repeated until the merging of VOIs is complete. This yields the bounding box of the object to be measured, that is, a VOI that just encloses the object to be measured.
In an implementation manner, in order to improve the accuracy of the three-dimensional volume measurement, in this embodiment, the spatial parameters of the VOI are adjusted by performing regression processing on the VOI, and specifically, as shown in fig. 9, the apparatus in this embodiment may further include the following structure:
a VOI adjusting unit 607, configured to perform regression processing on the N VOIs and the corresponding feature data to obtain spatial offsets and scaling amounts of the N VOIs; adjusting the position and size of the N VOIs based on the offset and the amount of scaling.
In this embodiment, the VOI adjusting unit 607 may use the regression sub-network to perform regression on the N VOIs and the corresponding feature data, yielding the offset and scaling amount of each VOI in space; the cuboid frame of the VOI is then shifted in position and adjusted in length, width and height, so that a more accurate spatial position of the VOI is obtained. A more accurate target VOI and corresponding MASK can then be obtained, improving the accuracy of the three-dimensional volume measurement.
In an implementation manner, in order to implement classification of an object to be measured, in this embodiment, the VOI may be classified, and as shown in fig. 10, the apparatus in this embodiment may further include the following structure:
a category determining unit 608, configured to perform classification processing on the N VOIs and the corresponding feature data to obtain object categories to which the N VOIs belong.
Specifically, in this embodiment, the category determining unit 608 may use a classification sub-network to classify the N VOIs and the corresponding feature data. Classification yields the category information of each VOI, represented by a vector in which each number is the probability of a category to which the VOI may belong, such as thyroid benign nodule, thyroid malignant nodule or non-nodule; the category with the maximum probability is taken as the category of the VOI.
A third embodiment of the present application further provides a server, configured to implement the technical solution in the foregoing embodiments, where the server in this embodiment includes a memory and a processor, where:
the memory is used for storing the application program and data generated by the running of the application program;
a processor for executing the application program to implement the following functions: obtaining a three-dimensional image of an object to be measured; performing feature extraction on the three-dimensional image by using a preset neural network to obtain feature data of the three-dimensional image; performing convolution calculation on the feature data to obtain a plurality of preliminary volumes of interest (VOIs) corresponding to the object to be measured; filtering and merging the plurality of preliminary VOIs to obtain a target VOI of the object to be measured; segmenting the target VOI to obtain a mask (MASK) of the target VOI; and calculating the three-dimensional volume of the object to be measured based on the MASK of the target VOI.
Taking the object to be measured as the thyroid gland as an example, and referring to the volume measurement flow charts in fig. 11 and fig. 12, the following describes an implementation scheme of the server in the present embodiment in performing three-dimensional volume measurement on the thyroid gland:
1) Acquire a spatially continuous thyroid two-dimensional image sequence; the number of images in the sequence and the sampling interval are set according to specific requirements.
2) Register the two-dimensional image sequence. Because free-arm scanning is used, adjacent image positions in the sequence deviate, so the sequence images need to be aligned with respect to common reference points so that their corresponding positions remain consistent in space; this generally minimizes the sum of squared differences of corresponding pixels between images. The images are then arranged according to their spatial positions and stacked into a three-dimensional image. In addition, the data of the three-dimensional image can be preprocessed, including cropping, resampling to a uniform size (larger volume data is down-sampled in equal proportion, smaller volume data is up-sampled in equal proportion), subtracting the mean value, normalizing the values, and the like.
3) Input the preprocessed volume data of the three-dimensional image into a CNN. In the CNN, a feature extraction network (e.g. Resnet3D, FCN3D, UNet3D) first extracts a volume-data feature map of the three-dimensional image via operations such as convolution, pooling and nonlinear transformation; this feature map contains high-level information such as the size, shape and spatial relationships of the volume data. The feature extraction network's convolution, pooling and nonlinear-transformation computations yield three-dimensional abstract features of reduced data volume.
4) Obtain the N highest-scoring VOIs using a region proposal network (RPN). The RPN, part of the overall network model, continues to convolve the extracted feature information to obtain a plurality of VOIs at corresponding image positions; each VOI comprises a category score and position information, and the N VOIs with the highest scores are selected.
The selected VOIs and the abstract features are then fed into a classification sub-network and a regression sub-network for classification and regression. Classification yields the category information of each VOI: a vector in which each number is the probability of a category (benign nodule, malignant nodule or non-nodule) to which the VOI belongs, the category with the maximum value being taken as the category of the VOI. Regression yields the spatial offset and scaling amount of each VOI, after which the rectangular frame of the VOI is slightly shifted and adjusted in length, width and height so that a more accurate position of the target in the frame is obtained.
5) For a single target, such as a particular nodule, the network model finally yields several corresponding VOIs meeting the standard (sufficient coincidence of the VOI with the real target); non-maximum suppression is then computed to filter out superfluous regressed VOIs and obtain accurate ones. Specifically, the VOIs are sorted from high to low by score (i.e. probability), the highest-scoring VOI_MAX is retained, the remaining VOIs are traversed in a loop, the IoU of each with VOI_MAX is calculated in turn, and a VOI is discarded if its IoU exceeds the set threshold and retained otherwise. The highest-scoring VOI is then selected from the remaining VOIs and the steps are repeated, finally completing the merging of the VOIs. This step yields a bounding box for each nodule, i.e. each nodule corresponds to one VOI that just encloses it.
6) Since a VOI is a cuboid enclosing a target nodule, in order to obtain parameter information such as the shape, size and volume of the target itself, the volume data of each accurate VOI is fed into a segmentation sub-network for segmentation, yielding a mask for each target. The segmentation sub-network segments the nodules; its output is a mask template representing voxel categories, of size M×M×M, corresponding voxel-by-voxel to each VOI, where a voxel value of 0 represents non-nodule, 1 a benign nodule and 2 a malignant nodule. Mapping the mask onto the volume data yields the mask of the whole three-dimensional image volume.
7) Parameters such as volume are calculated automatically for each nodule: for example, the obtained mask is binarized so that the target portion is 1 and the non-target portion is 0, and the voxel volumes of the target portion are accumulated.
8) Output the automatic measurement results.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other.
Finally, it should also be noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above detailed description of the three-dimensional volume measurement method and apparatus provided by the present invention is given so that those skilled in the art can make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A method of three-dimensional volumetric measurement, comprising:
acquiring a three-dimensional image of an object to be detected;
extracting the characteristics of the three-dimensional image by using a preset neural network to obtain the characteristic data of the three-dimensional image;
performing convolution calculation on the feature data to obtain a plurality of preliminary volumes of interest (VOIs) corresponding to the object to be measured, wherein each preliminary VOI comprises a category score and position information of the VOI, and the category score of a VOI represents the probability that the VOI belongs to a preset category of the object to be measured;
selecting N VOIs with the category scores ranked at the top from the plurality of primary VOIs based on the category scores of the primary VOIs, wherein N is a positive integer greater than or equal to 1;
filtering and combining the N VOIs to obtain a target VOI of the object to be detected;
utilizing a segmentation sub-network to segment the target VOI to obtain a mask (MASK) of the target VOI;
and automatically calculating the three-dimensional volume of the object to be measured based on the MASK of the target VOI.
2. The method of claim 1, wherein the acquiring a three-dimensional image of the object to be measured comprises:
obtaining a two-dimensional image sequence of the object to be measured, wherein the two-dimensional image sequence comprises at least two two-dimensional images of the object to be measured;
and aligning the two-dimensional images in the two-dimensional image sequence so as to stack them into a three-dimensional image of the object to be measured.
3. The method of claim 1, wherein prior to filtering and merging the N VOIs, the method further comprises:
performing regression processing on the N VOIs and corresponding feature data to obtain offset and scaling quantities of the N VOIs on the space;
adjusting the position and size of the N VOIs based on the offset and the amount of scaling.
4. The method of claim 1, further comprising:
and classifying the N VOIs and the corresponding characteristic data to obtain object categories of the N VOIs.
5. A three-dimensional volumetric measuring device, comprising:
the image acquisition unit is used for acquiring a three-dimensional image of the object to be detected;
the characteristic extraction unit is used for extracting the characteristics of the three-dimensional image by using a preset neural network to obtain the characteristic data of the three-dimensional image;
the feature convolution unit is used for performing convolution calculation on the feature data to obtain a plurality of preliminary VOIs corresponding to the object to be measured, wherein each preliminary VOI comprises a category score and position information of the VOI, and the category score of a VOI represents the probability that the VOI belongs to a preset category of the object to be measured;
the VOI processing unit is used for filtering and combining the plurality of initially selected VOIs to obtain a target VOI of the object to be detected;
the target segmentation unit is used for segmenting the target VOI by using a segmentation sub-network to obtain a mask (MASK) of the target VOI;
the volume calculation unit is used for automatically calculating the three-dimensional volume of the object to be measured based on the MASK of the target VOI;
wherein the VOI processing unit includes:
a VOI selection subunit, configured to select, from the multiple initially selected VOIs, N VOIs with the category scores ranked first based on the category scores of the initially selected VOIs, where N is a positive integer greater than or equal to 1;
and the VOI processing subunit is used for filtering and combining the N VOIs to obtain the target VOI of the object to be detected.
6. The apparatus according to claim 5, wherein the image obtaining unit comprises:
the two-dimensional obtaining subunit is used for obtaining a two-dimensional image sequence of the object to be measured, wherein the two-dimensional image sequence comprises at least two two-dimensional images of the object to be measured;
and the three-dimensional obtaining subunit is used for aligning the two-dimensional images in the two-dimensional image sequence so as to stack them into a three-dimensional image of the object to be measured.
7. The apparatus of claim 5, further comprising:
the VOI adjusting unit is used for performing regression processing on the N VOIs and the corresponding feature data to obtain spatial offsets and scaling amounts of the N VOIs, and for adjusting the position and size of the N VOIs based on the offsets and scaling amounts.
8. The apparatus of claim 5, further comprising:
and the category determining unit is used for classifying the N VOIs and the corresponding characteristic data to obtain the object categories of the N VOIs.
CN201811050729.4A 2018-09-10 2018-09-10 Three-dimensional volume measurement method and device Active CN109325943B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811050729.4A CN109325943B (en) 2018-09-10 2018-09-10 Three-dimensional volume measurement method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811050729.4A CN109325943B (en) 2018-09-10 2018-09-10 Three-dimensional volume measurement method and device

Publications (2)

Publication Number Publication Date
CN109325943A (en) 2019-02-12
CN109325943B (en) 2021-06-18

Family

ID=65263983

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811050729.4A Active CN109325943B (en) 2018-09-10 2018-09-10 Three-dimensional volume measurement method and device

Country Status (1)

Country Link
CN (1) CN109325943B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112288685A (en) * 2020-07-20 2021-01-29 深圳市智影医疗科技有限公司 Acid-fast bacillus detection method and device, terminal device and readable storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7466848B2 (en) * 2002-12-13 2008-12-16 Rutgers, The State University Of New Jersey Method and apparatus for automatically detecting breast lesions and tumors in images
CN107452002A (en) * 2016-05-31 2017-12-08 百度在线网络技术(北京)有限公司 A kind of image partition method and device
CN106780460B (en) * 2016-12-13 2019-11-08 杭州健培科技有限公司 A kind of Lung neoplasm automatic checkout system for chest CT images
CN107464250B (en) * 2017-07-03 2020-12-04 深圳市第二人民医院 Automatic breast tumor segmentation method based on three-dimensional MRI (magnetic resonance imaging) image
CN107292884B (en) * 2017-08-07 2020-09-29 杭州深睿博联科技有限公司 Method and device for identifying edema and hematoma in MRI (magnetic resonance imaging) image
CN107582058A (en) * 2017-10-19 2018-01-16 武汉大学 A kind of method of the intelligent diagnostics malignant tumour of magnetic resonance prostate infusion image
CN108171694B (en) * 2017-12-28 2021-05-14 开立生物医疗科技(武汉)有限公司 Method, system and equipment for detecting nodule based on convolutional neural network

Also Published As

Publication number Publication date
CN109325943A (en) 2019-02-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant