WO2011052602A1 - Ultrasonic imaging device, ultrasonic imaging method and program for ultrasonic imaging - Google Patents


Info

Publication number
WO2011052602A1
WO2011052602A1 (PCT/JP2010/068988)
Authority
WO
WIPO (PCT)
Prior art keywords
region
similarity
distribution
interest
imaging apparatus
Prior art date
Application number
PCT/JP2010/068988
Other languages
French (fr)
Japanese (ja)
Inventor
Yuya Masui
Takashi Azuma
Original Assignee
Hitachi Medical Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Medical Corporation
Priority to CN201080046798.3A (CN102596050B)
Priority to EP10826734A (EP2494924A1)
Priority to US13/503,858 (US8867813B2)
Priority to JP2011538440A (JP5587332B2)
Publication of WO2011052602A1


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08 Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B 8/0833 Detecting organic movements or changes, e.g. tumours, cysts, swellings involving detecting or locating foreign bodies or organic structures
    • A61B 8/085 Detecting organic movements or changes, e.g. tumours, cysts, swellings involving detecting or locating foreign bodies or organic structures for locating body or organic structures, e.g. tumours, calculi, blood vessels, nodules
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/52 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/5269 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving detection or reduction of artifacts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/0014 Biomedical image inspection using an image reference approach
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10132 Ultrasound image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30096 Tumor; Lesion

Definitions

  • the present invention relates to an ultrasonic imaging method and an ultrasonic imaging apparatus that can clearly identify a tissue boundary when imaging a living body using ultrasonic waves.
  • a method is known in which the elastic modulus distribution of a tissue is estimated from the amount of change in small regions of a diagnostic moving image (B-mode image), and the hardness is displayed after conversion into a color map.
  • however, in some tissues the acoustic impedance and the elastic modulus do not differ greatly from the surrounding tissue, and the boundary with other tissues cannot be grasped by these methods.
  • Patent Document 2 proposes a technique that makes it possible to identify a tissue boundary whose acoustic impedance and elastic modulus do not differ significantly from the surroundings, by creating a scalar field image directly from the motion vectors of a diagnostic moving image.
  • An object of the present invention is to provide an ultrasonic imaging apparatus capable of discriminating a noise region where an echo signal is weak.
  • to achieve the above object, the ultrasonic imaging apparatus of the present invention includes a transmission unit that transmits ultrasonic waves toward a target, a reception unit that receives ultrasonic waves coming from the target, and a processing unit that processes the reception signal of the reception unit to generate images of two or more frames.
  • the processing unit takes one of the generated two or more frames as a reference frame and sets a region of interest in it at a predetermined position or at a position received from the operator.
  • another frame is set as a comparison frame, a search region wider than the region of interest is set in it at a predetermined position or at a position received from the operator, and a plurality of candidate regions, which are candidates for the movement destination of the region of interest, are set within the search region.
  • the similarity between the image characteristic values in the region of interest and in each candidate region is calculated for every candidate region, yielding a similarity distribution over the entire search region. This makes it possible to determine, based on the similarity distribution, whether the region of interest is a noise region.
  • the processing unit obtains a statistic that compares the minimum similarity with the similarity values as a whole in the similarity distribution, and determines the reliability of the region of interest based on that statistic. Specifically, for example, the processing unit can determine the reliability of the region of interest by calculating the statistic from the minimum value, average, and standard deviation of the similarity and comparing the calculated statistic with a threshold value.
  • the processing unit can generate a vector connecting the position corresponding to the region of interest in the comparison frame with the position of the candidate region having the minimum similarity, and, for a region of interest determined to have low reliability, replace that vector with zero or with a predetermined vector. Erroneous vectors can thereby be removed and the accuracy of the vector field improved.
  • for the similarity distribution, the processing unit can calculate the average, minimum value, and standard deviation of the similarity and use as the statistic the degree of separation, obtained by dividing the difference between the average and the minimum value by the standard deviation. Alternatively, for example, it can calculate the average and standard deviation of the similarity and use as the statistic the coefficient of variation, obtained by dividing the standard deviation by the average.
  • the threshold value to be compared with the above statistic can be obtained as follows. For example, a plurality of regions of interest are set, the statistic is obtained for each, and a histogram showing the frequency of the obtained statistic values is formed; the median or average of the histogram distribution is used as the threshold, or, when the histogram distribution has a plurality of peaks, the statistic value at the minimum of the valley between the peaks is used as the threshold.
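The valley-based threshold choice above can be sketched as follows. This is an illustrative substitution, not the patent's exact procedure: Otsu's between-class-variance criterion is used as a standard proxy for "the minimum of the valley between two histogram peaks", and the function name, parameters, and bin count are assumptions.

```python
import numpy as np

def valley_threshold(stats, bins=64):
    """Threshold a reliability statistic computed over many regions of interest
    by (approximately) locating the valley between two histogram modes, using
    Otsu's between-class-variance criterion (illustrative substitution)."""
    hist, edges = np.histogram(np.asarray(stats, dtype=float), bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    total = hist.sum()
    w = np.cumsum(hist).astype(float)        # weight of the lower class
    m = np.cumsum(hist * centers)            # cumulative mass of the lower class
    with np.errstate(divide="ignore", invalid="ignore"):
        m0 = m / w                           # lower-class mean
        m1 = (m[-1] - m) / (total - w)       # upper-class mean
        sigma_b = (w / total) * (1 - w / total) * (m0 - m1) ** 2
    sigma_b = np.where((w > 0) & (w < total), sigma_b, 0.0)
    best = np.flatnonzero(sigma_b == sigma_b.max())
    return float(centers[best[len(best) // 2]])  # middle of a tied plateau
```

For a clearly bimodal statistic this lands near the valley between the two modes; for a unimodal distribution, the median or average named in the text is the simpler choice.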
  • when a smoothing process is applied, for example, a filter of predetermined size is set in the similarity distribution and the process of smoothing the distribution within the filter is repeated while moving the filter by a predetermined amount.
  • the size of the filter can be determined as follows. For a plurality of regions of interest, vectors connecting the position corresponding to the region of interest in the comparison frame with the position of the candidate region having the minimum similarity are generated in advance from the similarity distribution before smoothing, and the maximum vector length among the generated vectors is set as the filter size.
  • for tumor boundary detection, the processing unit obtains a similarity distribution for a region of interest set near the tumor boundary in the living body and generates a similarity distribution image whose image characteristic value is the similarity. It includes first processing means for setting one-dimensional regions of predetermined length in a plurality of different directions centered on the position corresponding to the region of interest on the similarity distribution image, second processing means for calculating the sum of similarities in the one-dimensional region for each set direction, third processing means for calculating the ratio between the similarity sum in the direction in which the sum is minimum and the similarity sum of the one-dimensional region in the direction orthogonal to it, and fourth processing means for determining the degree of tumor invasion based on the ratio.
  • when the ratio calculated by the third processing means is smaller than a preset value, the target pixel is determined to be a point constituting the boundary line, and the boundary line can be obtained from such points.
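The first to third processing means above can be sketched as follows. The sampling scheme, segment length, number of directions, and all names are illustrative assumptions; the text specifies only the directional similarity sums and their ratio.

```python
import numpy as np

def boundary_ratio(sad_image, center, length=10, n_dirs=8):
    """Sum SAD values along 1-D segments through `center` in several directions
    (first and second processing means), then compare the minimum-sum direction
    with the direction orthogonal to it (third processing means).  A small
    ratio marks a pronounced SAD valley, i.e. a likely boundary point."""
    h, w = sad_image.shape
    cy, cx = center
    angles = np.pi * np.arange(n_dirs) / n_dirs   # 0 .. pi (undirected lines)
    sums = np.empty(n_dirs)
    t = np.arange(-length, length + 1)
    for k, a in enumerate(angles):
        ys = np.clip(np.round(cy + t * np.sin(a)).astype(int), 0, h - 1)
        xs = np.clip(np.round(cx + t * np.cos(a)).astype(int), 0, w - 1)
        sums[k] = sad_image[ys, xs].sum()
    k_min = int(np.argmin(sums))
    k_orth = (k_min + n_dirs // 2) % n_dirs       # orthogonal direction index
    return sums[k_min] / sums[k_orth]
```

Comparing the returned ratio with a preset value then implements the boundary-point decision described above.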
  • according to another aspect, the following ultrasonic imaging method is provided. An ultrasonic wave is transmitted toward the target, and a reception signal obtained by receiving the ultrasonic waves coming from the target is processed to generate images of two or more frames; a reference frame and a comparison frame are selected from these images; a region of interest is set in the reference frame; a search region wider than the region of interest is set in the comparison frame; a plurality of candidate regions, which are candidates for the movement destination of the region of interest, are set in the search region; and the similarity of the image characteristic values between the region of interest and each candidate region is calculated for every candidate region to obtain the similarity distribution over the entire search region.
  • according to still another aspect, the following ultrasound imaging program is provided. The program causes a computer to execute a first step of selecting a reference frame and a comparison frame from two or more frames of ultrasound images, a second step of setting a region of interest in the reference frame, a third step of setting a search region wider than the region of interest in the comparison frame and a plurality of candidate regions within it, and a fourth step of calculating the similarity of the image characteristic values for each candidate region to obtain the similarity distribution over the entire search region.
  • according to the present invention, it is possible to determine whether the region of interest is a noisy region based on the similarity distribution. As a result, the generation of erroneous vectors is suppressed, and highly accurate vector estimation is possible even in the penetration-limit region. The accuracy of the scalar field image converted from the estimated motion vector field is improved, and more appropriate boundary detection becomes possible.
  • FIG. 1 is a block diagram showing an example system configuration of an ultrasound imaging apparatus according to Embodiment 1.
  • FIG. 2 is a flowchart showing the image generation procedure of the ultrasonic imaging apparatus according to the first embodiment. FIG. 3 is a flowchart showing details of the block matching process of step 24 in FIG. 2, and FIG. 4 is a diagram explaining the block matching process of step 24 of FIG. 2 using a phantom of two-layer structure.
  • FIGS. 5(a) and 5(b) show an example of a B-mode image and of a motion vector distribution image generated in the first embodiment.
  • FIG. 6(a) is a diagram showing an example of an SAD distribution image obtained by setting the ROI at position (3) in FIG. 5(b), and FIG. 6(b) shows an example of the SAD distribution image obtained by setting the ROI at position (5) in FIG. 5(b). FIG. 6(c) is the histogram of the SAD values shown in FIG. 6(a), and FIG. 6(d) is the histogram of the SAD values shown in FIG. 6(b).
  • FIG. 7 is an explanatory diagram showing the definition of the degree of separation on the histogram of SAD values in the first embodiment.
  • FIG. 8 is a flowchart showing details of the processing when step 25 of FIG. 2 is performed using the degree of separation, and FIG. 9 shows the degree of separation obtained for each ROI as an image.
  • FIG. 10 is a flowchart showing details of the processing when step 25 of FIG. 2 is performed using the coefficient of variation, and FIG. 11 shows the coefficient of variation obtained for each ROI as an image.
  • FIG. 12 shows an example of a motion vector distribution image from which erroneous vectors have been removed in step 25 of FIG. 2. FIG. 13 is a flowchart of the processing for removing noise when calculating the SAD distribution according to the second embodiment, and FIGS. 14(a)-(c) show examples of the SAD distribution at each processing stage in FIG. 13.
  • FIG. 15 is a flowchart illustrating the processing for obtaining the degree of infiltration according to the third embodiment.
  • (a)-(h) are explanatory drawings showing the regions set on the similarity distribution image in the third embodiment.
  • (c) is a diagram showing the image obtained by applying a Laplacian filter to the SAD value distribution shown in (a), and (d) is a diagram showing the image obtained by applying the Laplacian filter to the SAD value distribution shown in (b).
  • FIG. 1 shows a system configuration of the ultrasonic imaging apparatus of the present embodiment.
  • This apparatus has an ultrasonic boundary detection function.
  • this apparatus includes an ultrasonic probe (probe) 1, a user interface 2, a transmission beamformer 3, a control system 4, a transmission/reception changeover switch 5, a reception beamformer 6, an envelope detection unit 7, a scan converter 8, a processing unit 10, a parameter setting unit 11, a combining unit 12, and a display unit 13.
  • the ultrasonic probe 1 in which ultrasonic elements are arranged one-dimensionally transmits an ultrasonic beam (ultrasonic pulse) to a living body and receives an echo signal (received signal) reflected from the living body.
  • first, a transmission signal whose delay time is adjusted for the transmission focus is output by the transmission beamformer 3 and sent to the ultrasonic probe 1 via the transmission/reception changeover switch 5.
  • the ultrasonic beam reflected or scattered in the living body and returning to the ultrasonic probe 1 is converted into an electric signal by the probe 1 and sent as a reception signal to the reception beamformer 6 via the transmission/reception changeover switch 5.
  • the reception beamformer 6 is a complex beamformer that mixes two received signals 90 degrees out of phase; under the control of the control system 4 it performs dynamic focusing, adjusting the delay time according to the reception timing, and outputs real-part and imaginary-part RF signals.
  • the RF signal is detected by the envelope detection unit 7, converted into a video signal, and input to the scan converter 8, where it is converted into image data (B-mode image data).
  • the ultrasonic boundary detection process is realized by the processing unit 10.
  • the processing unit 10 includes a CPU 10a and a memory 10b.
  • when the CPU 10a executes a program stored in advance in the memory 10b, the following processing is performed to detect the boundary of the subject tissue. That is, based on the image data of two or more frames output from the scan converter 8, the processing unit 10 first creates a motion vector field. Next, the generated motion vector field is converted into a scalar field. The original image data and the corresponding motion vector field or scalar field are combined by the combining unit 12 and then displayed on the display unit 13.
  • the parameter setting unit 11 performs parameter setting for signal processing in the processing unit 10 and selection setting of a display image in the synthesis unit 12. These parameters are input from the user interface 2 by an operator (device operator).
  • as parameters for signal processing, for example, the setting of a region of interest on a desired frame m and the setting of a search region on a frame m+δ different from frame m can be received from the operator.
  • as the selection setting of the display image, for example, whether the original image and the vector field image (or scalar image) are combined into one image and displayed, or two or more moving images are displayed side by side, can be received from the operator.
  • FIG. 2 shows a flowchart of an example of boundary detection processing and image processing in the processing unit 10 and the combining unit 12 of the present invention.
  • the processing unit 10 first acquires a measurement signal from the scan converter 8 and performs normal signal processing to create a B-mode moving image (steps 21 and 22).
  • two frames of a desired frame and a frame having a different time are extracted from the B-mode moving image (step 23).
  • the desired frame and the next frame are extracted.
  • a motion vector field is calculated from the two frames (step 24).
  • the motion vector field is calculated based on the block matching method.
  • a noise removal process is performed on the calculated motion vector field (step 25), and the noise-removed motion vector field is converted to a scalar field (step 26).
  • in step 27, the image is synthesized and displayed, and the processing for one image is completed.
  • FIG. 3 is a flowchart showing detailed processing in step 24, and FIG. 4 is a diagram for explaining block matching processing in step 24.
  • the block matching process for calculating the motion vector field in step 24 will be specifically described with reference to FIGS.
  • the processing unit 10 sets, in frame m, a region of interest ROI (region of interest: reference block) 31 having a predetermined number of pixels N, as shown in FIG. 4 (step 51).
  • the luminance distribution of the pixels included in the ROI 31 is represented as P_m(i0, j0), where i0 and j0 indicate the position of a pixel in the ROI 31.
  • next, the processing unit 10 sets, in frame m+δ, a search region 32 of predetermined size at and around the position corresponding to the ROI 31 of frame m (step 52).
  • here, a configuration is described in which the processing unit 10 sequentially sets the ROI 31 over the entire image of frame m and sets a search region 32 of predetermined size centered on the ROI 31. Alternatively, the processing unit 10 can set an ROI 31 of predetermined position and size with a search region 32 of predetermined size in its vicinity, or set the ROI 31 and the search region 32 at the region-of-interest and search-region positions received from the operator via the parameter setting unit 11.
  • the search area 32 is divided into a plurality of movement candidate areas 33 having the same size as the ROI 31.
  • the processing unit 10 calculates the movement candidate area 33 having the highest similarity to the luminance of the ROI 31 and selects it as the movement destination area.
  • as an index representing the degree of similarity, a sum of absolute differences, a mean square error, a cross-correlation value, or the like can be used.
  • the case where the sum of absolute differences is used is described below as an example.
  • the luminance distribution of the pixels included in the movement candidate region 33 in the search region 32 is represented as P_{m+δ}(i, j), where i and j indicate the position of a pixel in the movement candidate region 33.
  • the processing unit 10 calculates the sum of absolute differences SAD (sum of absolute difference) between the luminance distribution P_m(i0, j0) of the ROI 31 and the luminance distribution P_{m+δ}(i, j) of the movement candidate region 33 (step 53).
  • SAD is defined by the following equation (1), where the sum is taken over the N corresponding pixel pairs of the two regions:

    SAD = Σ | P_{m+δ}(i, j) − P_m(i0, j0) |   (1)
  • the processing unit 10 obtains the SAD value against the ROI 31 for all movement candidate regions 33 in the search region 32 and determines, as the movement destination, the movement candidate region 33 having the smallest SAD value in the obtained SAD distribution. A motion vector connecting the position of the ROI 31 and the position of the movement candidate region 33 with the minimum SAD value is then determined (step 54).
  • the processing unit 10 determines the motion vectors for the entire image of frame m by repeating the above process while moving the ROI 31 over the entire image of frame m (step 55).
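Steps 51 to 55 can be sketched as follows. This is an illustrative sketch, not code from the patent: the function names and array-based frame representation are assumptions, and the default sizes follow the 30 × 30 block / 50 × 50 search-region example given later in this description.

```python
import numpy as np

def sad(roi, candidate):
    """Equation (1): sum of absolute differences between two equal-size blocks."""
    return np.abs(candidate.astype(float) - roi.astype(float)).sum()

def block_match(frame_m, frame_md, roi_pos, block=30, search=50):
    """Exhaustive block matching (sketch of steps 51-54).

    frame_m / frame_md: reference frame m and comparison frame m+delta.
    roi_pos: (row, col) of the top-left corner of the ROI 31 in frame m.
    The search region 32 is centred on the position corresponding to the ROI,
    and every in-region shift is tried as a movement candidate region 33.
    """
    r0, c0 = roi_pos
    roi = frame_m[r0:r0 + block, c0:c0 + block]
    off = (search - block) // 2          # half-width of the shift range
    n = search - block + 1               # candidate positions per axis (21 for 30/50)
    sad_map = np.empty((n, n))
    for dr in range(n):
        for dc in range(n):
            r, c = r0 - off + dr, c0 - off + dc
            sad_map[dr, dc] = sad(roi, frame_md[r:r + block, c:c + block])
    # Step 54: the candidate with the smallest SAD value is the movement destination.
    dr, dc = np.unravel_index(np.argmin(sad_map), sad_map.shape)
    return (dr - off, dc - off), sad_map
```

Repeating this over every ROI position in frame m (step 55) yields the motion vector field; the returned `sad_map` is the SAD distribution used in step 25 and shown in FIG. 6.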
  • FIGS. 5A and 5B are examples of B-mode images and motion vector distribution images obtained by the above processing.
  • the B-mode image in FIG. 5A is obtained by superposing the gel base material phantoms 41 and 42 in two layers and fixing the ultrasonic probe to the upper phantom 41 and moving it laterally.
  • in FIG. 5(b), the region corresponding to the upper phantom 41, to which the ultrasonic probe 1 is fixed, is relatively stationary, and the lower phantom 42 shows a vector field indicating lateral movement (horizontal arrows).
  • in the lower part of the image, however, the arrows point obliquely upward, their direction is not constant, and the motion vectors are disturbed. This phenomenon is caused by the decrease in the S/N ratio (SNR) of the detection sensitivity with increasing distance from the probe 1, and indicates the penetration limit. That is, erroneous vectors are generated in the low-SNR region far from the probe 1.
  • FIGS. 6(a) and 6(b) are diagrams (SAD distribution diagrams) showing, as densities, the SAD values of the respective movement candidate regions 33 in the search region 32 of the next frame, obtained by setting the ROI 31 at positions (3) and (5) in FIG. 5(b), respectively.
  • FIGS. 6A and 6B each show a search area 32 as a whole, and the search area 32 is divided into 21 ⁇ 21 movement candidate areas 33.
  • the block size of the movement candidate area 33 is 30 ⁇ 30 pixels, the search area 32 is 50 ⁇ 50 pixels, and the movement candidate area 33 is moved pixel by pixel within the search area 32.
  • the search area 32 is set so that the position of the movement candidate area 33 corresponding to the position of the ROI 31 is located at the center of the search area 32.
  • in FIG. 6(a), the SAD value of the movement candidate region 33 at a position shifted to the right of the center of the search region 32 is minimum; therefore, a rightward lateral vector is determined in step 24 for position (3) in FIG. 5(b). As can be seen from FIG. 5(b), position (3) is slightly below the boundary between the two-layer phantoms 41 and 42 (on the phantom 42 side), and a horizontal vector is displayed there.
  • also in FIG. 6(a), a region of small SAD values (a valley of SAD values) is formed around the movement candidate region 33 with the smallest SAD value, extending in the horizontal direction, that is, along the boundary between the two-layer phantoms 41 and 42. This suggests that the boundary can be detected directly from the SAD distribution of FIG. 6(a), without creating the entire motion vector field by moving the ROI 31 over the whole of frame m.
  • the SAD value distribution of FIG. 6(b) corresponds to position (5), in the region of FIG. 5(b) where the motion vectors are disturbed.
  • here, because of noise, small SAD values spread uniformly over a wide area, so the candidate region with the smallest SAD value is not clearly defined, and the resulting motion vector points upward and is easily disturbed. The region of small SAD values (the valley of SAD values) that should be formed around the movement candidate region 33 with the smallest SAD value is buried in the overall noise fluctuation and cannot be recognized.
  • in contrast, in the SAD distribution of FIG. 6(a), the SAD value of the movement candidate region 33 with the smallest SAD value is clearly smaller than that of the surrounding regions, whereas in the SAD distribution image of the search region 32 with the ROI 31 set at the noisy position (5), as in FIG. 6(b), the movement candidate region 33 with the smallest SAD value cannot be clearly recognized.
  • in the present embodiment, a noisy ROI 31 is identified using this phenomenon, and the corresponding motion vector is removed.
  • in step 25 of FIG. 2, the processing unit 10 creates, from the SAD distribution of the search region 32 obtained for an ROI 31 set at a given position (as in FIGS. 6(a) and 6(b)), a histogram showing the distribution of the SAD values, as in FIGS. 6(c) and 6(d).
  • when the ROI 31 is set at position (3), the SAD minimum value is sufficiently separated from the frequently occurring SAD values in the histogram, as shown in FIG. 6(c); that is, the minimum SAD value and the range of SAD values with high frequency are well separated.
  • in contrast, the histogram obtained when the ROI 31 is set at position (5) in FIG. 5(b) shows little difference in frequency across the SAD value distribution, as in FIG. 6(d), and the histogram distribution is broad. The SAD minimum value is therefore included in the range of SAD values with high frequency, and the separation between the minimum and the frequent SAD values is insufficient.
  • from this difference in the histograms, the reliability (low noise) of the signal of the ROI 31 corresponding to the search region 32, and hence the reliability of the motion vector determined in that search region 32, can be judged.
  • a region with low reliability can be determined to be a low-SNR region, and the reliability of the corresponding motion vector can be judged accordingly.
  • in the present embodiment, an index is used to evaluate the degree of separation between the SAD minimum value and the frequently occurring SAD values in the histogram of the SAD value distribution.
  • FIG. 7 shows the concept of definition of the degree of separation.
  • the degree of separation is a value corresponding to the distance between the distribution average of the histogram and the minimum value, and is defined by the following equation (2):

    degree of separation = (average − minimum value) / standard deviation   (2)

  • in equation (2), normalization by the standard deviation avoids the influence of differences in the spread of the distribution.
  • FIG. 8 shows a flow of processing for discriminating a noisy region when the degree of separation is used as an index and removing a motion vector.
  • This process specifically shows the process of step 25 in FIG. 2, and is performed for all ROIs 31 set in frame m in step 51 of FIG.
  • the processing unit 10 first determines the target ROI 31 (step 81) and, using the SAD values of all movement candidate regions 33 in the search region 32 set and calculated for that ROI 31 in steps 52 and 53 of FIG. 3, computes the average, minimum value, and standard deviation of the SAD values by statistical processing (step 82). It then obtains the degree of separation defined by equation (2) above (step 83). This is repeated for all ROIs 31 (step 84).
  • for each ROI 31 whose degree of separation obtained in step 83 is smaller than a predetermined value, the motion vector obtained in step 54 of FIG. 3 is replaced with 0 (step 85).
  • in this way, a region with low reliability can be identified in the motion vector image and the corresponding vector (erroneous vector) removed.
  • as the predetermined value, a preset threshold or the median of the distribution of separation degrees obtained for all ROIs 31 in step 84 can be used.
  • alternatively, when a histogram of the obtained separation degrees is generated and shows a plurality of frequency peaks, the separation degree at the valley between the peak on the low-separation side and the peak on the high-separation side can be used as the threshold.
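The degree of separation of equation (2) and the vector replacement of step 85 can be sketched as follows; `sad_map` stands for the SAD values of all movement candidate regions 33 for one ROI, and all names and the example threshold are illustrative assumptions.

```python
import numpy as np

def degree_of_separation(sad_map):
    """Equation (2): (average - minimum) / standard deviation of the SAD values.
    Large values mean the minimum stands clearly apart (reliable ROI)."""
    v = np.asarray(sad_map, dtype=float).ravel()
    return (v.mean() - v.min()) / v.std()

def prune_vectors(vectors, sad_maps, threshold):
    """Sketch of step 85: zero the motion vector of every ROI whose degree of
    separation falls below the threshold, i.e. every low-reliability ROI."""
    out = vectors.copy()
    for k, sad_map in enumerate(sad_maps):
        if degree_of_separation(sad_map) < threshold:
            out[k] = 0  # replace the (likely erroneous) vector with 0
    return out
```

A sharp, isolated SAD minimum gives a large separation; a broad noisy SAD distribution gives a value near 1-2, so its vector is pruned.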
  • FIG. 9 shows an image of the degree of separation obtained for all ROIs 31 in step 83.
  • 33 ⁇ 51 ROIs 31 are set in the frame m, and the separation degree of each ROI 31 is indicated by the density.
  • the degree of separation is low in the low SNR region below the frame m, and it can be seen that the degree of separation reflects the reliability of motion vector estimation.
  • in the above description, the degree of separation is used, but another index can be used to judge the separation between the SAD minimum value and the SAD values with high frequency.
  • for example, a coefficient of variation can be used. The coefficient of variation is defined by the following equation (3); it is a statistic obtained by normalizing the standard deviation by the average, and indicates the magnitude of the variation of the distribution (that is, the difficulty of separating the minimum value):

    coefficient of variation = standard deviation / average   (3)
  • FIG. 10 shows a flow of processing for removing a vector in a noisy area when the coefficient of variation is used as an index.
  • first, the target ROI 31 is determined in the same manner as in the processing flow of FIG. 8 (step 81), and, using the SAD values of all movement candidate regions 33 in the search region 32 set and calculated in steps 52 and 53 of FIG. 3, the average and standard deviation of the SAD values are computed by statistical processing (step 101).
  • then, the coefficient of variation defined by equation (3) above is obtained (step 102).
  • this is repeated for all ROIs 31 (step 84). For each ROI 31 whose coefficient of variation obtained in step 102 is larger than a predetermined value, the motion vector obtained in step 54 of FIG. 3 is replaced with 0 (step 85).
  • a predetermined threshold value or the median value of the distribution of variation coefficients obtained for all ROIs 31 in step 84 can be used.
  • when a histogram of the obtained variation coefficients is generated and the frequency exhibits two peaks, it is also effective to adopt the value at the minimum of the valley between the two peaks as the predetermined value.
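The coefficient of variation of equation (3) can be sketched in the same way; `sad_map` again stands for the SAD values of all movement candidate regions 33 for one ROI, and the name is an illustrative assumption.

```python
import numpy as np

def coefficient_of_variation(sad_map):
    """Equation (3): standard deviation / average of the SAD values.
    Unlike the degree of separation, a LARGE value indicates a broad,
    hard-to-separate distribution, i.e. a low-reliability ROI."""
    v = np.asarray(sad_map, dtype=float).ravel()
    return v.std() / v.mean()
```

As in FIG. 12(b), one practical threshold is the median of the coefficients of variation computed over all ROIs, with larger values judged unreliable.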
  • FIG. 11 shows the variation coefficient obtained for all ROIs 31 in step 102 as an image.
  • here, the coefficient of variation of each ROI 31 is indicated by its density.
  • the coefficient of variation increases in the low-SNR region at the bottom of frame m, which shows that the coefficient of variation reflects the reliability of the motion vector estimation.
  • FIGS. 12A and 12B show examples of motion vector distribution images before and after removing an erroneous vector.
  • FIG. 12(a) shows the motion vector field before erroneous-vector removal, as in FIG. 5(b).
  • FIG. 12(b) is obtained by computing the coefficient-of-variation distribution from the SAD distribution by the processing of FIG. 10, taking the median of that distribution as the threshold, judging ROIs whose coefficient of variation is larger than the threshold to have low reliability, and setting their motion vectors to 0 (stationary state).
  • in FIG. 12(b), the lower region where the motion vectors were disturbed has clearly been removed and replaced with a stationary state.
  • in this way, the lower region can be identified as a penetration-limit region (that is, a region of low reliability) where an accurate ultrasonic echo cannot be obtained.
  • thereafter, the motion vector distribution is converted into a scalar distribution in steps 26 and 27 of FIG. 2, and the distribution image (or B-mode image) is synthesized and displayed.
  • in the present embodiment, the motion vector in a region with low reliability is removed and the region set to a stationary state, but the present invention is not limited to this processing method.
  • for example, instead of setting the motion vector to a stationary state, it is possible to keep, for the same region, the motion vector obtained previously.
  • FIG. 13 is a processing flow for removing noise during the calculation of the SAD distribution of the second embodiment.
  • noise removal processing (steps 132 and 133) is added to the SAD calculation processing of steps 51 to 54 of FIG. 3 of the first embodiment.
  • FIGS. 14A, 14B, and 14C show examples of the SAD distribution at each processing stage of FIG. 13.
  • The processing unit 10 first performs the same processing as steps 51 to 53 in FIG. 3 of the first embodiment to obtain the SAD value distribution.
  • An example of the obtained SAD distribution image is shown in FIG. 14A.
  • Here, the ROI 31 is at position (4) in FIG. 5B. Although the phantom 42 actually moves in the horizontal direction, the SAD value of the upper movement candidate area 33 is lowered by noise; if the motion vector were determined from this distribution as it is, an erroneous vector would be detected. To avoid such erroneous detection, the processing unit 10 applies a smoothing process (low-pass filter (LPF) processing) to the SAD distribution image obtained in steps 51 to 53 (step 132).
  • In the smoothing process, a filter of a predetermined size is applied to the SAD distribution image, and the high-frequency components of the SAD distribution within the filter are cut to smooth it; this is repeated while moving the filter by a predetermined amount.
  • By smoothing the SAD value distribution image, the steep change in SAD value caused by the movement of the phantom 42 is removed, while the gradual change in SAD value caused by noise remains and can thus be extracted.
  • The SAD value distribution image obtained by the smoothing process is shown in FIG. 14B.
  • In step 133, the difference between the original SAD value distribution of step 53 and the smoothed SAD value distribution of step 132 is obtained.
  • As a result, the SAD value distribution caused by the movement of the phantom, with the noise-induced fluctuation removed, is obtained.
  • The obtained distribution is shown in FIG. 14C.
  • Thereafter, step 54 of FIG. 3 is performed: the movement candidate area 33 with the smallest SAD value is determined as the movement destination, and the motion vector is determined. After the motion vector is determined, a motion vector distribution image is generated by the processing of step 56 in FIG. 3 of the first embodiment. Further, as in step 25 of FIG. 2 of the first embodiment, processing such as removing low-reliability vectors from the motion vector distribution can additionally be performed.
  • Since the motion vector can be determined using an SAD value distribution from which the noise-induced SAD fluctuation has been removed by the processing of the second embodiment, the reliability of the motion vector is improved.
  • In the above processing an LPF is used, but the present invention is not limited to this. When the spatial frequency of the SAD value distribution caused by the movement of the subject (phantom) is high (that is, the distribution has a more complicated shape), it is also effective to apply a band-pass filter.
  • The size of one side of the filter used in the filter processing of step 132 can be determined as follows: for the unsmoothed SAD distribution, a motion vector field is created in advance by performing step 54 of FIG. 3, and the maximum motion vector length found there is set as the size of one side of the filter.
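The smoothing-and-subtraction of steps 132 and 133 can be sketched as follows. A plain moving-average box filter stands in for the LPF here, and all names are illustrative, not from the patent.

```python
import numpy as np

def box_smooth(img, k):
    """k x k moving-average low-pass filter; edges handled by edge padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def denoise_sad(sad, k=3):
    """Step 133 analogue: original SAD map minus its smoothed component."""
    return sad - box_smooth(sad, k)
```

A purely gradual (noise-like) baseline is removed entirely by the subtraction, while a steep, localized SAD feature caused by actual movement survives largely intact.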
  • Next, a processing method will be described that directly determines a tissue boundary using the SAD value distribution obtained in step 53 of FIG. 3 of the first embodiment, and that determines the degree of infiltration of a living tumor into normal tissue.
  • In the following, the movement candidate area 33 of the search area 32 is simply referred to as the area 33, and the ROI 31 is also referred to as the pixel of interest.
  • A region along a tissue boundary of the subject shows high brightness in the B-mode image because the tissue similarity there is high. For this reason, the SAD of an area 33 lying along the tissue boundary takes a smaller value than that of an area 33 lying along the direction orthogonal to the boundary. On the other hand, as the infiltration of a living tumor progresses, the boundary becomes unclear, so the SAD value of the area 33 along the boundary increases. This property is used to determine the degree of infiltration.
  • FIG. 15 shows the processing flow of the processing unit 10 in the third embodiment.
  • FIGS. 16A to 16H show eight patterns of the target direction and the corresponding areas 33 selected on the SAD value distribution image.
  • The processing unit 10 sets a pixel of interest (ROI) 31 at the boundary position of the tissue to be examined in a desired frame m of the B-mode image, sets a search area 32 in frame m + Δ, and obtains the SAD value distribution of the search area 32 (step 151).
  • The frame selection and the SAD value distribution calculation are performed in the same manner as steps 21 to 23 in FIG. 2 and steps 51 to 53 in FIG. 3.
  • Next, the areas 33 passing through the center of the search area 32 and lying along a predetermined target direction (horizontal direction) 151 are selected (step 63), and the sum of the SAD values of the selected areas 33 is obtained (step 64).
  • Similarly, the areas 33 lying along the direction (vertical direction) 152 orthogonal to the target direction 151 are selected, and the sum of their SAD values is also obtained.
  • Steps 63 and 64 are repeated until all eight patterns shown in FIGS. 16A to 16H have been processed (step 62).
  • That is, the sum of the SAD values of the areas 33 lying along each predetermined target direction 151 (inclined counterclockwise by about 30°, about 45°, about 60°, 90°, about 120°, about 135°, and about 150° with respect to the horizontal direction) is obtained, together with the sum of the SAD values of the areas 33 lying along the corresponding orthogonal direction 152.
  • The target direction 151 that gives the minimum SAD sum among all the calculated target directions 151 is selected (step 65).
  • The selected target direction 151 gives the direction of the tissue boundary; the boundary can thus be detected directly, without obtaining a motion vector.
  • Next, the direction 152 orthogonal to the selected target direction 151 is selected (step 66).
  • Then the ratio of the SAD sum in the selected target direction 151 to the SAD sum in the orthogonal direction 152 (SAD sum in the target direction / SAD sum in the orthogonal direction) is calculated (step 67).
  • This ratio indicates the degree of infiltration. When the degree of infiltration is low and the boundary is clear, the SAD sum in the boundary direction (the selected target direction 151) is small and the SAD sum in the orthogonal direction 152 is large, so the ratio takes a small value.
  • As the degree of infiltration increases and the boundary becomes unclear, the SAD sum in the boundary direction (the selected target direction 151) increases, so the ratio increases. The degree of infiltration can therefore be evaluated using this ratio as a parameter; specifically, for example, it is determined by comparing the ratio with a plurality of predetermined reference values, and the determination result is displayed.
  • Further, when the calculated ratio is smaller than a predetermined value, the pixel of interest (ROI 31) can be identified as a point constituting a boundary line, and the boundary can be displayed.
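Steps 63 to 67 can be sketched as follows, restricted for brevity to four directions (0°, 45°, 90°, 135°) on a square SAD map; the eight-pattern version of FIGS. 16A to 16H works the same way. Function names are illustrative, not from the patent.

```python
import numpy as np

def line_sum(sad, angle):
    """Sum of SAD values along a line through the centre of the map."""
    n = sad.shape[0]
    c = n // 2
    if angle == 0:
        return float(sad[c, :].sum())         # horizontal
    if angle == 90:
        return float(sad[:, c].sum())         # vertical
    if angle == 45:
        return float(np.fliplr(sad).trace())  # "/" diagonal
    if angle == 135:
        return float(sad.trace())             # "\" diagonal
    raise ValueError(angle)

def infiltration_ratio(sad):
    """Return (boundary direction, SAD-sum ratio); a small ratio means a
    clear boundary, i.e. a low degree of infiltration."""
    sums = {a: line_sum(sad, a) for a in (0, 45, 90, 135)}
    best = min(sums, key=sums.get)            # step 65: minimum-sum direction
    orth = (best + 90) % 180                  # step 66: orthogonal direction
    return best, sums[best] / sums[orth]      # step 67: the ratio
```

For a map with a clear low-SAD horizontal line through the centre, the boundary direction comes out as 0° and the ratio is well below 1.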
  • The above processing corresponds to a direction-dependent filter, that is, a filter having the function of determining the direction in which the one-dimensional density change is smallest within the filter range (search area 32) around the pixel being processed.
  • For ease of illustration, FIGS. 16A to 16H show the area selection patterns in the target direction 151 and the orthogonal direction 152 for a search area 32 composed of 5 × 5 areas 33.
  • In practice, the area selection patterns are set according to the number of areas 33 in the search area 32.
  • FIGS. 17A and 17B show the SAD distributions of the search area 32 obtained by setting the ROI 31 at positions (1) and (2) in FIG. 5B, respectively.
  • Position (1) lies inside the phantom 41, which is relatively stationary with respect to the probe 1.
  • Position (2) lies in the vicinity of the boundary between the phantom 41 and the phantom 42, which moves laterally relative to it.
  • The processing for obtaining the SAD distributions at positions (1) and (2) is performed in the same manner as steps 21 to 23 in FIG. 2 and steps 51 to 53 in FIG. 3.
  • When a Laplacian filter, which performs spatial second-order differentiation, is applied to the obtained SAD distribution images (FIGS. 17A and 17B), the portions where the fluctuation of the SAD value is large are strongly emphasized by the edge-enhancement effect, and the images of FIGS. 17C and 17D are obtained, respectively.
  • In this way, the boundary can be extracted directly by applying Laplacian processing to the SAD distribution. Accordingly, steps 54 and 55 in FIG. 3, which determine and image the motion vector in the first embodiment, and steps 25 and 26 in FIG. 2, which denoise the motion vector field and convert it into a scalar distribution to estimate the boundary, can be omitted, so the amount of processing can be greatly reduced.
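The Laplacian edge enhancement applied to the SAD maps can be sketched with the standard 4-neighbour kernel. This is an illustrative implementation, not the patent's exact filter.

```python
import numpy as np

KERNEL = np.array([[0., 1., 0.],
                   [1., -4., 1.],
                   [0., 1., 0.]])  # 4-neighbour spatial second derivative

def laplacian(img):
    """Edge-enhance: large |output| marks strong SAD fluctuation (boundaries)."""
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")
    out = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            out += KERNEL[dy, dx] * pad[dy:dy + h, dx:dx + w]
    return out
```

Flat regions of the SAD map give zero response, while a step in SAD value produces a positive/negative response pair straddling the edge.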
  • The present invention can be applied to medical ultrasonic diagnostic and treatment apparatuses, and to apparatuses in general that measure distortion and displacement using ultrasonic or electromagnetic waves.
  • 1: Ultrasonic probe (probe), 2: User interface, 3: Transmission beamformer, 4: Control system, 5: Transmission/reception changeover switch, 6: Reception beamformer, 7: Envelope detector, 8: Scan converter, 10: Processing unit, 10a: CPU, 10b: Memory, 11: Parameter setting unit, 12: Synthesis unit, 13: Display unit.


Abstract

Provided is an ultrasonic imaging device capable of identifying a noise region in which the echo signal is weak. A reference frame and a comparison frame are selected from an image having two or more frames obtained by processing a received signal. A region of interest is set in the reference frame, a search region wider than the region of interest is set in the comparison frame, a plurality of candidate regions that are candidates for the destination of movement of the region of interest are set in the search region, the similarity between image characteristic values within the region of interest and within the candidate region is calculated for each of the candidate regions, and the similarity distribution over the entire search region is found. Consequently, it becomes possible to identify, on the basis of the similarity distribution, whether the region of interest is a noise region. For example, a statistic for comparing the minimum value of the similarity and all values of the similarity in the similarity distribution is found. The reliability of the region of interest can be assessed by comparing the statistic and a threshold value.

Description

Ultrasonic imaging apparatus, ultrasonic imaging method, and ultrasonic imaging program
 The present invention relates to an ultrasonic imaging method and an ultrasonic imaging apparatus capable of clearly identifying tissue boundaries when imaging a living body with ultrasonic waves.
 In ultrasonic imaging apparatuses used for medical image diagnosis, as described in Patent Document 1 for example, a method is known in which the elastic modulus distribution of a tissue is estimated based on the amount of change in small regions of a diagnostic moving image (B-mode image), and the hardness is converted into a color map and displayed. However, in the case of the peripheral part of a tumor, for example, neither the acoustic impedance nor the elastic modulus may differ greatly from those of the surrounding tissue; in such a case, the boundary between the tumor and the surrounding tissue cannot be grasped in either the diagnostic moving image or the elasticity image.
 Therefore, the technique described in Patent Document 2 proposes a method that makes it possible to identify tissue boundaries whose acoustic impedance and elastic modulus do not differ greatly from the surroundings, by creating a scalar field image directly from the motion vectors of a diagnostic moving image.
 Patent Document 1: JP 2004-135929 A; Patent Document 2: JP 2008-79792 A
 In the technique of Patent Document 2, motion vectors are estimated by performing block matching on two sets of diagnostic image data, but erroneous vectors arise during the estimation owing to noise in the image data. As a result, the discriminability of the boundary deteriorates. In particular, in the penetration limit region, where the echo signal becomes weak, the vector estimation accuracy is greatly degraded.
 An object of the present invention is to provide an ultrasonic imaging apparatus capable of discriminating noise regions in which the echo signal is weak.
 To achieve the above object, according to a first aspect of the present invention, the following ultrasonic imaging apparatus is provided. The ultrasonic imaging apparatus of the present invention includes a transmission unit that transmits ultrasonic waves toward an object, a reception unit that receives ultrasonic waves arriving from the object, and a processing unit that processes the reception signals of the reception unit to generate an image of two or more frames. The processing unit takes one of the generated frames as a reference frame and sets a region of interest in it at a predetermined position or at a position received from the operator. Another frame is taken as a comparison frame, in which a search region wider than the region of interest is set at a predetermined position or at a position received from the operator, and a plurality of candidate regions that are candidates for the movement destination of the region of interest are set within the search region. The similarity of image characteristic values between the region of interest and each candidate region is calculated for every candidate region, and the similarity distribution over the entire search region is obtained. This makes it possible to determine, based on the similarity distribution, whether the region of interest is a noise region.
 For example, the processing unit obtains a statistic that compares the minimum value of the similarity with the similarity values as a whole in the similarity distribution, and determines the reliability of the region of interest from this statistic. Specifically, for example, the processing unit can obtain the above statistic using the minimum value, average, and standard deviation of the similarity, and determine the reliability of the region of interest by comparing the obtained statistic with a threshold.
 For example, the processing unit can generate a vector connecting the position corresponding to the region of interest in the comparison frame and the position of the candidate region with the minimum similarity, and, for a region of interest determined to have low reliability, replace the vector with zero or with a predetermined vector. In this way erroneous vectors can be removed and the accuracy of the vectors improved.
 For example, the processing unit can calculate the average, minimum value, and standard deviation of the similarity distribution and use as the statistic the degree of separation, obtained by dividing the difference between the average and the minimum value by the standard deviation. Alternatively, it can calculate the average and standard deviation of the similarity distribution and use as the statistic the coefficient of variation, obtained by dividing the standard deviation by the average.
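The two statistics above can be written directly from their definitions. This is a sketch with illustrative names; the similarity here would be the SAD value.

```python
import numpy as np

def separation_degree(sim):
    """(mean - min) / std: large when the similarity minimum stands out
    clearly, i.e. when the block match is trustworthy."""
    sim = np.asarray(sim, dtype=float)
    return (sim.mean() - sim.min()) / sim.std()

def coefficient_of_variation(sim):
    """std / mean: grows in low-SNR (noise-dominated) regions."""
    sim = np.asarray(sim, dtype=float)
    return sim.std() / sim.mean()
```

A distribution with one similarity value far below the rest yields a high degree of separation, indicating a reliable minimum.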
 The threshold to be compared with the above statistic can be obtained as follows. For example, a plurality of regions of interest are set, the statistic is obtained for each, and a histogram showing the frequency of the obtained statistic values is formed; the median or the average of the histogram is used as the threshold, or, when the histogram exhibits a plurality of peaks, the statistic value at the bottom of the valley between the peaks is used as the threshold.
 It is also possible to generate a smoothed similarity distribution by smoothing the similarity distribution, and to obtain a difference similarity distribution by subtracting the smoothed similarity distribution from the similarity distribution before smoothing. In this way, fluctuations of the similarity due to noise can be removed from the similarity distribution.
 The smoothing process uses, for example, a method in which a filter of a predetermined size is set on the similarity distribution and the process of smoothing the distribution within the filter is repeated while the filter is moved by a predetermined amount. The size of the filter can be determined as follows: vectors connecting the position corresponding to each region of interest in the comparison frame and the position of the candidate region with the minimum similarity in the unsmoothed similarity distribution are generated in advance for a plurality of regions of interest, and the maximum vector length among them is taken as the filter size.
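Determining the filter side from the preliminary vector field can be sketched as follows (function and variable names are illustrative, not from the patent).

```python
import numpy as np

def lpf_filter_size(vectors):
    """One side of the smoothing filter = the longest motion-vector length
    in the preliminary (unsmoothed) vector field, rounded up."""
    v = np.asarray(vectors, dtype=float).reshape(-1, 2)
    return int(np.ceil(np.linalg.norm(v, axis=1).max()))
```

For example, a field whose longest vector is (3, 4), of length 5, yields a filter side of 5.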
 It is also possible to obtain the boundary of the object by filtering the similarity distribution with a Laplacian filter to create a contour-enhanced distribution and extracting the continuous contour lines in that distribution.
 The degree of tumor infiltration can also be determined by configuring the processing unit as follows. The processing unit has: first processing means for obtaining the similarity distribution for a region of interest set near the boundary of a living tumor, generating a similarity distribution image with the similarity as the image characteristic value, and setting one-dimensional regions of a predetermined length in a plurality of different directions centered on the position corresponding to the region of interest on the similarity distribution image; second processing means for calculating, for each set direction, the sum of the similarities within the one-dimensional region; third processing means for calculating the ratio between the minimum similarity sum over the directions and the similarity sum of the one-dimensional region in the direction orthogonal to it; and fourth processing means for determining the degree of tumor infiltration based on the ratio.
 When the ratio calculated by the third processing means is smaller than a preset value, the boundary line can also be obtained by determining that the pixel of interest is a point constituting the boundary line.
 According to a second aspect of the present invention, the following ultrasonic imaging method is provided: transmitting ultrasonic waves toward an object; processing reception signals obtained by receiving ultrasonic waves arriving from the object to generate an image of two or more frames; selecting a reference frame and a comparison frame from the image; setting a region of interest in the reference frame; setting in the comparison frame a search region wider than the region of interest; setting within the search region a plurality of candidate regions that are candidates for the movement destination of the region of interest; and calculating, for each candidate region, the similarity of image characteristic values between the region of interest and the candidate region to obtain the similarity distribution over the entire search region.
 According to a third aspect of the present invention, an ultrasonic imaging program is provided that causes a computer to execute: a first step of selecting a reference frame and a comparison frame from ultrasonic images of two or more frames; a second step of setting a region of interest in the reference frame; a third step of setting in the comparison frame a search region wider than the region of interest and setting within the search region a plurality of candidate regions that are candidates for the movement destination of the region of interest; and a fourth step of calculating, for each candidate region, the similarity of image characteristic values between the region of interest and the candidate region to obtain the similarity distribution over the entire search region.
 According to the present invention, it becomes possible to determine, based on the similarity distribution, whether a region of interest is a noisy region. As a result, the generation of erroneous vectors is suppressed, and highly accurate vector estimation becomes possible even in the penetration limit region. The accuracy of the scalar field image converted from the estimated motion vector field is improved, and more appropriate boundary detection becomes possible.
FIG. 1 is a block diagram showing an example system configuration of the ultrasonic imaging apparatus of Embodiment 1.
FIG. 2 is a flowchart showing the image generation procedure of the ultrasonic imaging apparatus of Embodiment 1.
FIG. 3 is a flowchart showing details of the block matching process of step 24 in FIG. 2.
FIG. 4 is a diagram explaining the block matching process of step 24 in FIG. 2 using a two-layer phantom.
FIGS. 5A and 5B show an example B-mode image and an example motion vector distribution image, respectively, generated by the ultrasonic imaging apparatus of Embodiment 1.
FIGS. 6A and 6B show example SAD distribution images obtained by setting the ROI at positions (3) and (5) in FIG. 5B, respectively; FIGS. 6C and 6D are the histograms of the SAD values shown in FIGS. 6A and 6B.
FIG. 7 is an explanatory diagram showing the definition of the degree of separation on a histogram of SAD values in Embodiment 1.
FIG. 8 is a flowchart showing details of the processing of step 25 in FIG. 2 when it is performed using the degree of separation.
FIG. 9 shows an example image of the distribution of the degree of separation obtained by the processing of FIG. 8.
FIG. 10 is a flowchart showing details of the processing of step 25 in FIG. 2 when it is performed using the coefficient of variation.
FIG. 11 shows an example image of the distribution of the coefficient of variation obtained by the processing of FIG. 10.
FIGS. 12A and 12B show an example motion vector distribution image generated in step 24 of FIG. 2 and an example vector distribution image after erroneous vectors are removed in step 25 of FIG. 2.
FIG. 13 is a flowchart of the processing for removing noise during the calculation of the SAD distribution in Embodiment 2.
FIGS. 14A, 14B, and 14C show an example SAD distribution image before noise removal in Embodiment 2, an example SAD distribution image obtained by smoothing (LPF) processing, and an example SAD distribution image after noise removal, respectively.
FIG. 15 is a flowchart showing the processing for obtaining the degree of infiltration in Embodiment 3.
FIGS. 16A to 16H are explanatory diagrams showing the region selection patterns of the SAD distribution used in the processing of FIG. 15.
FIGS. 17A and 17B show SAD distribution images obtained by setting the ROI at positions (1) and (2) in FIG. 5B, respectively; FIGS. 17C and 17D show the images obtained by applying a Laplacian filter to the SAD value distributions of FIGS. 17A and 17B.
 An ultrasonic imaging apparatus according to an embodiment of the present invention is described below.
 (Embodiment 1)
 FIG. 1 shows the system configuration of the ultrasonic imaging apparatus of the present embodiment. This apparatus has an ultrasonic boundary detection function. As shown in FIG. 1, the apparatus comprises an ultrasonic probe (probe) 1, a user interface 2, a transmission beamformer 3, a control system 4, a transmission/reception changeover switch 5, a reception beamformer 6, an envelope detector 7, a scan converter 8, a processing unit 10, a parameter setting unit 11, a synthesis unit 12, and a display unit 13.
 The ultrasonic probe 1, in which ultrasonic elements are arranged one-dimensionally, transmits an ultrasonic beam (ultrasonic pulse) into the living body and receives the echo signals (reception signals) reflected from it. Under the control of the control system 4, a transmission signal with a delay time matched to the transmission focus is output by the transmission beamformer 3 and sent to the ultrasonic probe 1 via the transmission/reception changeover switch 5. The ultrasonic beam reflected or scattered within the living body and returning to the ultrasonic probe 1 is converted into an electric signal by the probe and sent as a reception signal to the reception beamformer 6 via the transmission/reception changeover switch 5.
 The reception beamformer 6 is a complex beamformer that mixes two reception signals 90° out of phase; it performs dynamic focusing, adjusting the delay time according to the reception timing under the control of the control system 4, and outputs the real and imaginary parts of the RF signal. This RF signal is detected by the envelope detector 7, converted into a video signal, and input to the scan converter 8, where it is converted into image data (B-mode image data). The configuration described so far is the same as that of a known ultrasonic imaging apparatus.
 In the apparatus of the present invention, the ultrasonic boundary detection processing is realized by the processing unit 10. The processing unit 10 has a CPU 10a and a memory 10b; the CPU 10a executes a program stored in advance in the memory 10b to perform the following processing and detect the boundaries of the subject tissue. That is, based on the image data of two or more frames output from the scan converter 8, the processing unit 10 first creates a motion vector field. The created motion vector field is then converted into a scalar field. The original image data and the corresponding motion vector field or scalar field are combined by the synthesis unit 12 and displayed on the display unit 13.
 The parameter setting unit 11 sets parameters for the signal processing in the processing unit 10 and selects the display images to be combined in the combining unit 12. These parameters are entered by the operator through the user interface 2. As signal-processing parameters, for example, the setting of a region of interest on a desired frame m and the setting of a search region on a frame m+Δ different from frame m can be accepted from the operator. As the display-image selection, for example, the operator can choose whether the original image and the vector field image (or scalar image) are combined into a single image on the display, or whether two or more moving images are displayed side by side.
 FIG. 2 shows a flowchart of an example of the boundary detection and image processing performed by the processing unit 10 and the combining unit 12 of the present invention. The processing unit 10 first acquires the measurement signal from the scan converter 8 and applies ordinary signal processing to create a B-mode moving image (steps 21, 22). Next, two frames, a desired frame and a frame at a different time, are extracted from the B-mode moving image (step 23); for example, the desired frame and the next frame are extracted. A motion vector field is then calculated from the two frames (step 24) based on a block matching method. Noise removal is applied to the calculated motion vector field (step 25), and the denoised motion vector field is converted into a scalar field (step 26). The scalar field image is then combined with the motion vector field image or the B-mode image and displayed, which completes the processing for one image (step 27). By sequentially selecting different frames in time series as the desired frame in step 23, repeating steps 21 to 27, and displaying the combined images continuously, a moving image of the combined images can be displayed.
 FIG. 3 is a flowchart showing the detailed processing of step 24, and FIG. 4 is a diagram explaining the block matching process of step 24. The block matching process for calculating the motion vector field in step 24 will be described concretely with reference to FIGS. 3 and 4. Here it is assumed that the m-th and (m+Δ)-th frames have been selected in step 23, for example with Δ = 1 frame. First, in frame m, the processing unit 10 sets a region of interest ROI (reference block) 31 having a predetermined number of pixels N, as shown in FIG. 4 (step 51). The luminance distribution of the pixels included in the ROI 31 is denoted Pm(i0, j0), where i0 and j0 indicate the position of the pixel within the ROI 31. Next, in the (m+Δ)-th frame, the processing unit 10 sets a search region 32 of a predetermined size at, and in the vicinity of, the position corresponding to the ROI 31 of frame m (step 52). The description here assumes a configuration in which the processing unit 10 sequentially sets the ROI 31 over the entire image of frame m and sets a search region 32 of a predetermined size centered on it; however, the processing unit 10 may instead set an ROI 31 of a predetermined position and size together with a search region 32 of a predetermined size in its vicinity, or set the ROI 31 and the search region 32 to a region of interest (ROI) and a search region accepted from the operator by the parameter setting unit 11.
 The search region 32 is divided into a plurality of movement candidate regions 33, each of the same size as the ROI 31. The processing unit 10 computes which movement candidate region 33 is most similar in luminance to the ROI 31 and selects it as the destination region. As an index of similarity, the sum of absolute differences, the mean square, the cross-correlation value, or the like can be used. As an example, the case of using the sum of absolute differences is described below.
 The luminance distribution of the pixels included in a movement candidate region 33 within the search region 32 is denoted Pm+Δ(i, j), where i and j indicate the position of the pixel within the movement candidate region 33. The processing unit 10 calculates the sum of absolute differences SAD between the luminance distribution Pm(i0, j0) of the ROI 31 and the luminance distribution Pm+Δ(i, j) of the movement candidate region 33 (step 53). Here SAD is defined by the following equation (1), where the sum runs over the N corresponding pixel pairs of the two regions:

    SAD = Σ | Pm+Δ(i, j) − Pm(i0, j0) |   (1)

 The processing unit 10 obtains the SAD value against the ROI 31 for every movement candidate region 33 in the search region 32, determines the movement candidate region 33 with the smallest SAD value in the resulting SAD distribution to be the destination region, and determines the motion vector connecting the position of the ROI 31 with the position of that minimum-SAD movement candidate region 33 (step 54).
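The patent provides no source code; purely as an illustration of steps 51 to 54, the exhaustive SAD block matching can be sketched as follows (the function and parameter names are hypothetical, and NumPy is assumed):

```python
import numpy as np

def sad(roi, candidate):
    # Equation (1): sum of absolute luminance differences over the N pixel pairs.
    return int(np.abs(candidate.astype(np.int64) - roi.astype(np.int64)).sum())

def block_match(frame_m, frame_md, top, left, block=8, search=4):
    # Exhaustively evaluate every movement candidate region in a
    # (2*search+1)^2 neighborhood of (top, left) and return the displacement
    # (dy, dx) of the candidate with the smallest SAD (steps 53-54).
    roi = frame_m[top:top + block, left:left + block]
    best_sad, best_vec = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > frame_md.shape[0] or x + block > frame_md.shape[1]:
                continue  # candidate would fall outside the next frame
            s = sad(roi, frame_md[y:y + block, x:x + block])
            if best_sad is None or s < best_sad:
                best_sad, best_vec = s, (dy, dx)
    return best_vec
```

Repeating `block_match` while sliding the ROI over the whole of frame m (step 55) yields the motion vector field.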
 The processing unit 10 then repeats the above processing while moving the ROI 31 over the entire image of frame m, thereby determining motion vectors for the whole of frame m (step 55). A motion vector field (motion vector distribution image) is obtained by generating an image in which each determined vector is indicated by, for example, an arrow.
 FIGS. 5(a) and 5(b) show an example of a B-mode image and of a motion vector distribution image obtained by the above processing. The B-mode image of FIG. 5(a) was captured by stacking gel-base phantoms 41 and 42 in two layers, fixing the ultrasonic probe to the upper phantom 41, and moving it laterally. Taking the B-mode image of FIG. 5(a) as frame m, FIG. 5(b) shows the motion vectors obtained by the block matching process (step 24) against the next frame (frame m+Δ, Δ = 1 frame).
 As shown in FIG. 5(b), roughly the upper third of the image, corresponding to the upper phantom 41 to which the ultrasonic probe 1 is fixed, is relatively stationary, while the lower phantom 42 gives a vector field indicating lateral movement (horizontal arrows). In roughly the lower third of the lower phantom 42 in FIG. 5(b), however, the arrows point obliquely upward with inconsistent directions, and the motion vectors appear disturbed. This phenomenon arises because the signal-to-noise ratio (SNR) of the detection sensitivity falls as the distance from the probe 1 increases, and it indicates the penetration limit. That is, erroneous vectors occur in the low-SNR region far from the probe 1.
 The noise removal processing for the motion vector field in step 25 will be described with reference to FIG. 6.
 FIGS. 6(a) and 6(b) are diagrams (SAD distribution diagrams) in which the ROI 31 is set at positions (3) and (5) of FIG. 5(b), respectively, and the SAD value of each movement candidate region 33 of the search region 32 of the next frame, obtained in step 24, is shown as the shade of that movement candidate region 33. Each of FIGS. 6(a) and 6(b) shows the search region 32 as a whole, divided into 21 × 21 movement candidate regions 33. The block size of the movement candidate region 33 was 30 × 30 pixels, the search region 32 was 50 × 50 pixels, and the movement candidate region 33 was moved one pixel at a time within the search region 32. The search region 32 is set so that the movement candidate region 33 corresponding to the position of the ROI 31 lies at the center of the search region 32.
 In the SAD distribution of FIG. 6(a), the SAD value is smallest at the movement candidate region 33 displaced horizontally to the right of the center of the search region 32. Thus, for position (3) of FIG. 5(b), a rightward horizontal vector is determined in step 24. As confirmed in FIG. 5(b), position (3) is slightly below the boundary between the two-layer phantoms 41 and 42 (on the phantom 42 side), and a rightward horizontal vector is displayed.
 Focusing on the spatial distribution of SAD values in FIG. 6(a), it can be seen that a region of small SAD values (a valley of SAD values) forms around the minimum-SAD movement candidate region 33 in the horizontal direction, that is, along the boundary between the two-layer phantoms 41 and 42. This phenomenon suggests that the boundary can be detected directly from the SAD distribution of FIG. 6(a), without moving the ROI 31 over the entire region of frame m to create the full motion vector field.
 On the other hand, the SAD distribution of FIG. 6(b) is for position (5), inside the region of FIG. 5(b) where the motion vectors are disturbed. The small SAD values spread uniformly over a wide upper area close to the probe 1, where the noise value is small, so the minimum-SAD movement candidate region 33 tends to lie upward and the motion vector is easily disturbed. It can also be seen that the region of small SAD values (the SAD valley) that should form around the minimum-SAD movement candidate region 33 is buried in the overall noise fluctuation and cannot be recognized.
 The present invention exploits the following phenomenon to identify noisy ROIs 31: in an SAD distribution image with the ROI 31 set at a low-noise position (3), as in FIG. 6(a), the minimum-SAD movement candidate region 33 takes an SAD value clearly smaller than its surroundings and can be recognized distinctly, whereas in the SAD distribution image of the search region 32 with the ROI 31 set at a noisy position (5), as in FIG. 6(b), the minimum-SAD movement candidate region 33 cannot be recognized distinctly. For a noisy ROI 31, processing is performed to remove the corresponding motion vector.
 For the above determination, the processing unit 10, for example in step 25 of FIG. 2, creates histograms of the SAD values as in FIGS. 6(c) and 6(d) from the SAD distribution of the search region 32 obtained with the ROI 31 set at a given position, as in FIGS. 6(a) and 6(b). In the histogram for the ROI 31 set at position (3) of FIG. 5(b), the minimum SAD value is well separated from the frequently occurring SAD values of the histogram distribution, as shown in FIG. 6(c); that is, the minimum SAD value and the range of high-frequency SAD values are sufficiently far apart. In contrast, in the histogram for the ROI 31 set at position (5) of FIG. 5(b), there is little difference in frequency across the SAD values and the histogram distribution spreads broadly, as shown in FIG. 6(d). The minimum SAD value therefore falls within the range of high-frequency SAD values, and the separation between them is insufficient.
 Given these characteristics, by comparing the minimum SAD value of the histogram of the SAD distribution of a search region 32 with its high-frequency SAD values, the reliability (the degree of freedom from noise) of the signal of the ROI 31 corresponding to that search region 32, and the reliability of the motion vector determined in that search region 32, can be judged. In this way, a low-reliability region can be identified as a low-SNR region, and the reliability of the corresponding motion vector can be judged as well.
 In the present invention, an index is used to judge the degree of separation between the minimum SAD value and the high-frequency SAD values in the histogram of the SAD value distribution. A processing method using a separation-degree parameter as the index is described first. FIG. 7 shows the concept behind the definition of the separation degree. The separation degree is a value corresponding to the distance between the mean of the histogram distribution and its minimum value, and is defined by the following equation (2):

    separation degree = (SAD mean − SAD minimum) / SAD standard deviation   (2)

 In equation (2), normalization by the standard deviation is applied in order to avoid the influence of differences between distributions.
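As a minimal sketch only (hypothetical function names; NumPy assumed; not the patent's implementation), the separation degree of equation (2) and the vector pruning of FIG. 8 can be written as:

```python
import numpy as np

def separation_degree(sad_map):
    # Equation (2): distance between the mean and the minimum of the SAD
    # distribution, normalized by its standard deviation (steps 82-83).
    sad = np.asarray(sad_map, dtype=float).ravel()
    return (sad.mean() - sad.min()) / sad.std()

def prune_vectors(vectors, sad_maps, threshold):
    # Step 85: replace with (0, 0) the motion vector of every ROI whose SAD
    # distribution has a separation degree below the threshold (low reliability).
    return [v if separation_degree(m) >= threshold else (0, 0)
            for v, m in zip(vectors, sad_maps)]
```

A sharply resolved SAD minimum gives a large separation degree; a broad, noise-dominated distribution gives a small one, so thresholding the separation degree discriminates the two cases described above.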
 FIG. 8 shows the flow of the processing that identifies noisy regions and removes motion vectors when the separation degree is used as the index. This processing corresponds to step 25 of FIG. 2 and is performed for every ROI 31 set on frame m in step 51 of FIG. 3. The processing unit 10 first selects the target ROI 31 (step 81) and, using the SAD values of all movement candidate regions 33 of the search region 32 set and computed for that ROI 31 in steps 52 and 53 of FIG. 3, statistically calculates the mean, minimum, and standard deviation of the SAD values (step 82). The separation degree defined by equation (2) above is then obtained (step 83). This is repeated for all ROIs 31 (step 84). An ROI 31 whose separation degree obtained in this way is lower than a predetermined value is a region where the noise is large and the reliability of the motion vector is low, so the motion vector obtained in step 54 of FIG. 3 is replaced with 0 (step 85). In this way, low-reliability regions can be identified in the motion vector image and their erroneous vectors removed.
 As the predetermined value for judging in step 85 whether the separation degree is low, a predetermined threshold or the median of the distribution of separation degrees obtained for all ROIs 31 in step 84 can be used. Alternatively, a histogram of the obtained separation degrees may be generated; when several frequency peaks form, the separation degree at the valley between the peak on the lowest-separation side and the peak on the next-higher-separation side can be used as the threshold.
 FIG. 9 shows an image of the separation degrees obtained for all ROIs 31 in step 83. In FIG. 9, 33 × 51 ROIs 31 are set on frame m, and the separation degree of each ROI 31 is indicated by its shade. As FIG. 9 shows, the separation degree is low in the low-SNR region at the bottom of frame m, confirming that the separation degree reflects the reliability of the motion vector estimation.
 Although the processing of FIG. 8 above uses the separation degree, other indices can also be used to judge the degree of separation between the minimum SAD value and the high-frequency SAD values. For example, the coefficient of variation can be used. The coefficient of variation is defined by the following equation (3); it is a statistic obtained by normalizing the standard deviation by the mean, and expresses the spread of the distribution (that is, how difficult the minimum value is to separate):

    coefficient of variation = SAD standard deviation / SAD mean   (3)
 FIG. 10 shows the flow of the processing that removes vectors in noisy regions when the coefficient of variation is used as the index. As in the processing flow of FIG. 8, this processing selects the target ROI 31 (step 81) and, using the SAD values of all movement candidate regions 33 of the search region 32 set and computed for that ROI 31 in steps 52 and 53 of FIG. 3, statistically calculates the mean and standard deviation of the SAD values (step 101). The coefficient of variation defined by equation (3) above is then obtained (step 102). This is repeated for all ROIs 31 (step 84). For an ROI 31 whose coefficient of variation obtained in step 102 is larger than a predetermined value, the motion vector obtained in step 54 of FIG. 3 is replaced with 0 (step 85). In this way, noisy ROIs 31 can be identified, low-reliability regions can be distinguished in the motion vector image, and their erroneous vectors removed.
 As the predetermined value for judging in step 85 whether the coefficient of variation is high, a predetermined threshold or the median of the distribution of coefficients of variation obtained for all ROIs 31 in step 84 can be used. Alternatively, a histogram of the obtained coefficients of variation may be generated; when the frequencies exhibit two peaks, it is also effective to adopt the minimum of the valley between the two peaks as the predetermined value.
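As an illustration only (hypothetical helper names; NumPy assumed), the coefficient of variation of equation (3) and a valley-based threshold of the kind described above can be sketched as:

```python
import numpy as np

def coefficient_of_variation(sad_map):
    # Equation (3): standard deviation normalized by the mean; large values
    # indicate a broad SAD histogram, i.e. a low-reliability ROI.
    sad = np.asarray(sad_map, dtype=float)
    return sad.std() / sad.mean()

def valley_threshold(values, bins=16):
    # When the histogram of per-ROI statistics shows two peaks, take the
    # valley between them as the threshold; otherwise fall back to the median.
    counts, edges = np.histogram(values, bins=bins)
    peaks = [i for i in range(bins)
             if counts[i] > 0
             and (i == 0 or counts[i] >= counts[i - 1])
             and (i == bins - 1 or counts[i] >= counts[i + 1])]
    if len(peaks) < 2:
        return float(np.median(values))
    peaks.sort(key=lambda i: counts[i], reverse=True)
    lo, hi = sorted(peaks[:2])
    valley = lo + int(counts[lo:hi + 1].argmin())
    return float((edges[valley] + edges[valley + 1]) / 2)
```

The returned threshold separates the low-coefficient (reliable) cluster from the high-coefficient (noisy) cluster of ROIs.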
 FIG. 11 shows an image of the coefficients of variation obtained for all ROIs 31 in step 102, with the coefficient of variation of each ROI 31 indicated by its shade. As FIG. 11 shows, the coefficient of variation is large in the low-SNR region at the bottom of frame m, confirming that the coefficient of variation reflects the reliability of the motion vector estimation.
 FIGS. 12(a) and 12(b) show examples of the motion vector distribution image before and after erroneous-vector removal. FIG. 12(a) is the same motion vector field as FIG. 5(b), before erroneous-vector removal; FIG. 12(b) was obtained by deriving the coefficient-of-variation distribution from the SAD distributions by the processing of FIG. 10, judging ROIs whose coefficient of variation exceeds the median of that distribution (used as the threshold) to be unreliable, and setting their motion vectors to 0 (stationary). Comparing FIGS. 12(a) and 12(b), the erroneous vectors in the lower region where the motion vectors were disturbed are clearly removed and replaced by the stationary state. That is, the lower region is correctly judged to be a penetration region where ultrasonic echoes cannot be obtained accurately (that is, a low-reliability region).
 Once the erroneous vectors have been removed from the motion vector distribution by the processing of FIGS. 8 and 10 described above, the motion vector distribution is converted into a scalar distribution in steps 26 and 27 of FIG. 2, and the scalar distribution image is combined with the motion vector distribution image (or the B-mode image) and displayed.
 In the processing of FIGS. 8 and 10, motion vectors in low-reliability regions were removed, leaving those regions stationary, but the present invention is not limited to this processing method. For example, instead of setting the motion vector to the stationary state, a processing method that retains the motion vector previously obtained for the same region may be used.
 (Embodiment 2)
 In Embodiment 1, after the SAD distribution was obtained, whether the ROI 31 lies in a low-SNR region was judged from that distribution, and in the low-SNR case the motion vector was removed. In the present embodiment, noise is instead removed at the time the SAD distribution is computed, and a highly reliable motion vector is obtained using the denoised, high-detection-sensitivity SAD distribution.
 FIG. 13 is the processing flow of Embodiment 2 for removing noise during the computation of the SAD distribution. The processing of FIG. 13 adds noise removal processing (steps 132, 133) to the SAD computation of steps 51 to 54 of FIG. 3 of Embodiment 1. FIGS. 14(a), 14(b), and 14(c) show examples of the SAD distribution at the respective processing stages of FIG. 13.
 As shown in FIG. 13, the processing unit 10 first performs the same processing as steps 51 to 53 of FIG. 3 of Embodiment 1 to obtain the SAD value distribution. An example of the obtained SAD distribution image is shown in FIG. 14(a). The ROI 31 is at position (4) of FIG. 5(b); although the phantom 42 is in fact moving relatively in the lateral direction, the SAD value is smallest at an upper movement candidate region 33 owing to noise, so determining the motion vector from this distribution as-is would cause erroneous vector detection. To avoid such erroneous detection, the processing unit 10 applies smoothing (low-pass filter, LPF, processing) to the SAD distribution image obtained in steps 51 to 53 (step 132). In the smoothing, for example, a filter of a predetermined size is applied to the SAD distribution image to cut the high-frequency components of the SAD distribution within the filter, and this is repeated while moving the filter by a predetermined amount. Smoothing the SAD value distribution image in this way removes the SAD variation due to the movement of the phantom 42, which is steep, while extracting the SAD variation due to noise, which is gradual. The SAD value distribution image obtained by the smoothing is shown in FIG. 14(b).
 Next, the difference between the original SAD value distribution of step 53 and the smoothed SAD value distribution of step 132 is computed (step 133). This yields the intrinsic SAD value distribution due to the movement of the phantom, with the noise-induced SAD variation removed; the resulting distribution is shown in FIG. 14(c). Using this SAD value distribution, step 54 of FIG. 3 is performed: the minimum-SAD movement candidate region 33 is judged to be the destination and the motion vector is determined. After the motion vector is determined, the motion vector distribution image is generated by the processing of step 56 of FIG. 3 of Embodiment 1. Processing to remove low-reliability vectors from the motion vector distribution, as in step 25 of FIG. 2 of Embodiment 1, can also be applied in addition.
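As a sketch of steps 132 and 133 only (hypothetical names; NumPy assumed; a simple moving-average filter stands in for the LPF):

```python
import numpy as np

def box_smooth(sad_map, k=5):
    # Step 132: moving-average (low-pass) filtering of the SAD map; edge
    # padding keeps the output the same size as the input.
    sad = np.asarray(sad_map, dtype=float)
    pad = k // 2
    padded = np.pad(sad, pad, mode='edge')
    out = np.empty_like(sad)
    for y in range(sad.shape[0]):
        for x in range(sad.shape[1]):
            out[y, x] = padded[y:y + k, x:x + k].mean()
    return out

def denoised_minimum(sad_map, k=5):
    # Step 133: subtract the smoothed (noise) component from the original
    # SAD map, then locate the minimum of the residual (step 54).
    sad = np.asarray(sad_map, dtype=float)
    residual = sad - box_smooth(sad, k)
    return tuple(int(v) for v in np.unravel_index(residual.argmin(), residual.shape))
```

The gradual noise trend survives the smoothing and cancels in the subtraction, while the steep motion-induced SAD dip survives the subtraction and is recovered as the residual minimum.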
 As described above, the processing of Embodiment 2 makes it possible to determine motion vectors using an SAD value distribution from which noise-induced SAD variation has been removed, improving the reliability of the motion vectors.
 Although an LPF was used in the smoothing above, the invention is not limited to this; when the spatial frequency of the SAD distribution due to the movement of the subject (phantom) is high (that is, when the shape is more complex), it is also effective to apply a bandpass filter.
 The side length of the filter used in the filtering of step 132 can be determined as follows: for the unsmoothed SAD distribution, step 54 of FIG. 3 is performed in advance to create a motion vector field, the maximum vector length of that field is obtained, and that maximum vector length is set as the side length of the low-pass or bandpass filter.
 (Embodiment 3)
 Next, as Embodiment 3, a processing method is described that uses the SAD value distribution obtained in step 53 of FIG. 3 of Embodiment 1 to determine tissue boundaries directly and to judge the degree of infiltration of a tumor into normal tissue. In Embodiment 3, a movement candidate region 33 of the search region 32 is simply called a region 33, and the ROI 31 is also called the pixel of interest.
 A region along a tissue boundary of the subject shows similar luminance in the B-mode image because the tissue similarity there is high. The SAD therefore has the characteristic that regions 33 along the tissue boundary take smaller values than regions 33 along the direction orthogonal to the boundary. On the other hand, as tumor infiltration progresses the boundary becomes indistinct, so the SAD values of the regions 33 along the boundary increase. This is exploited to judge the degree of infiltration.
FIG. 15 shows the processing flow of the processing unit 10 in the third embodiment. FIGS. 16(a) to 16(h) show the eight patterns of the target direction and the corresponding regions 33 selected on the SAD-value distribution image.
First, the processing unit 10 sets a pixel of interest (ROI) 31 at the boundary position of the tissue to be examined in a desired frame m of the B-mode image, sets a search region 32 in frame m+Δ, and obtains the SAD-value distribution of the search region 32 (step 151). The frame selection and the SAD-value distribution calculation are performed in the same manner as steps 21 to 23 of FIG. 2 and steps 51 to 53 of FIG. 3 in the first embodiment.
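The SAD-value distribution of step 151 can be sketched as a plain block-matching loop. The function below is an illustrative sketch, assuming 2-D grayscale frames and odd region sizes; the names are not from the original:

```python
import numpy as np

def sad_distribution(ref_frame, cmp_frame, roi_center, roi_size, search_size):
    """Compute the SAD value of every candidate region in the search area.

    ref_frame:   frame m, in which the ROI is set
    cmp_frame:   frame m + delta, in which the search region is set
    roi_center:  (row, col) of the ROI center
    roi_size:    side of the square ROI (odd)
    search_size: side of the square search region, in candidate positions (odd)
    """
    half_roi = roi_size // 2
    half_search = search_size // 2
    cy, cx = roi_center
    roi = ref_frame[cy - half_roi:cy + half_roi + 1,
                    cx - half_roi:cx + half_roi + 1]
    sad = np.empty((search_size, search_size))
    for dy in range(-half_search, half_search + 1):
        for dx in range(-half_search, half_search + 1):
            y, x = cy + dy, cx + dx
            cand = cmp_frame[y - half_roi:y + half_roi + 1,
                             x - half_roi:x + half_roi + 1]
            sad[dy + half_search, dx + half_search] = np.abs(
                roi.astype(np.int64) - cand.astype(np.int64)).sum()
    return sad
```

The returned map has the zero-displacement candidate at its center; the minimum of the map marks the most similar candidate region.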
In this SAD-value distribution, as shown in FIG. 16(a), the regions 33 that pass through the center of the search region 32 and lie along a predetermined target direction (horizontal) 151 are selected (step 63), and the sum of the SAD values of the selected regions 33 is obtained (step 64). Similarly, the regions 33 lying along the direction 152 orthogonal to the target direction 151 (vertical) are selected, and the sum of their SAD values is also obtained.
The processing of steps 63 and 64 is repeated until it has been performed for all eight patterns of FIGS. 16(a) to 16(h) (step 62). In the pattern of FIG. 16(b), the sum of the SAD values of the regions 33 lying along a target direction 151 inclined about 30° counterclockwise from the horizontal is obtained, as is the sum of the SAD values of the regions 33 lying along its orthogonal direction 152.
In the patterns of FIGS. 16(c) to 16(h), the sums of the SAD values are likewise obtained for target directions 151 of about 45°, about 60°, 90°, about 120°, about 135°, and about 150° counterclockwise from the horizontal, and for their respective orthogonal directions 152.
The target direction 151 whose SAD-value sum is the smallest is then selected (step 65). This selected target direction 151 is the direction of the tissue boundary. The boundary can thus be detected directly, without obtaining motion vectors.
Next, the direction 152 orthogonal to the selected target direction 151 is selected (step 66), and the ratio of the SAD-value sum of the selected target direction 151 to the SAD-value sum of the orthogonal direction 152 (SAD sum in the target direction / SAD sum in the orthogonal direction) is calculated (step 67).
When the degree of infiltration is low and the boundary is distinct, the SAD sum in the boundary direction (the selected target direction 151) is small while the SAD sum in the orthogonal direction 152 is large, so the ratio is small. As infiltration progresses and the boundary becomes indistinct, the SAD sum in the boundary direction increases, and the ratio increases accordingly. The ratio can therefore be used as a parameter for evaluating the degree of infiltration. Specifically, for example, the ratio is compared with a plurality of predetermined reference values to determine the degree of infiltration, and the result is displayed.
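Steps 63 to 67 can be sketched as follows. This is a simplified illustration that samples the SAD map along lines through its center at equal angular steps, rather than the approximate 30°/45° pattern of FIG. 16; the function name is an assumption:

```python
import numpy as np

def infiltration_ratio(sad, n_dirs=8):
    """Sum SAD values along lines through the center of the SAD map for
    several directions; return the boundary direction (minimum sum, in
    degrees) and the ratio (min-direction sum / orthogonal-direction sum)."""
    h, w = sad.shape
    cy, cx = h // 2, w // 2
    radius = min(cy, cx)
    sums = []
    for k in range(n_dirs):
        theta = np.pi * k / n_dirs          # 0 .. 180 degrees in equal steps
        s = 0.0
        for r in range(-radius, radius + 1):
            y = int(round(cy + r * np.sin(theta)))
            x = int(round(cx + r * np.cos(theta)))
            s += sad[y, x]
        sums.append(s)
    k_min = int(np.argmin(sums))             # boundary direction (step 65)
    k_orth = (k_min + n_dirs // 2) % n_dirs  # orthogonal direction (step 66)
    ratio = sums[k_min] / sums[k_orth]       # step 67
    return k_min * 180.0 / n_dirs, ratio
```

A small ratio indicates a distinct boundary (low infiltration); a ratio approaching 1 indicates an indistinct boundary (higher infiltration).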
When the ratio is smaller than a preset value, the pixel of interest (ROI 31) can also be identified as a point on the boundary line, and the boundary can be displayed.
A direction-dependent filter can be used when obtaining the SAD-value sum for each direction. A direction-dependent filter is a filter that determines, within the filter range of the processed pixel (the search region 32), the one-dimensional direction in which the intensity variation is smallest.
For ease of illustration, FIGS. 16(a) to 16(h) show the region selection patterns for the target direction 151 and the orthogonal direction 152 in a search region 32 composed of 5 × 5 regions 33; in actual processing, the region selection patterns are set according to the number of regions 33 in the search region 32.
(Embodiment 4)
 As a fourth embodiment, another method is described that detects a boundary directly from the SAD-value distribution of the search region 32 without obtaining motion vectors. Here, a Laplacian filter, which performs enhancement corresponding to a second-order derivative, is applied.
FIGS. 17(a) and 17(b) show the SAD distributions of the search region 32 with the ROI 31 set at positions (1) and (2) of FIG. 5(b). Position (1) is inside the phantom 41, which is stationary relative to the probe 1. Position (2) is near the boundary between the phantom 41 and the phantom 42, which moves laterally relative to it. The SAD distributions for positions (1) and (2) are obtained in the same manner as steps 21 to 23 of FIG. 2 and steps 51 to 53 of FIG. 3 in the first embodiment. Applying a Laplacian filter, which performs spatial second-order differentiation, to the obtained SAD distribution images (FIGS. 17(a) and 17(b)) emphasizes, through its edge-enhancement effect, the portions where the SAD value varies strongly, yielding the images of FIGS. 17(c) and 17(d), respectively.
When a boundary is present in the search region 32, as shown in FIG. 17(d), the regions of large SAD-value variation along the boundary are emphasized, producing a contour-enhancement distribution extracted as a streak. The boundary can therefore be detected from the Laplacian image of the SAD values by extracting and displaying, through binarization, the regions (movement candidate regions 33) that form a continuous contour line in the contour-enhancement distribution. When no boundary is present in the search region 32, as in FIG. 17(c), the only region of large SAD-value variation is the central region (the movement candidate region 33 corresponding to the position of the ROI 31); it is isolated from its surroundings and does not form a continuous contour line, showing that no boundary exists at this position.
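A minimal sketch of the Laplacian enhancement and binarization described above. The 4-neighbor kernel, edge padding, and threshold are illustrative assumptions, not taken from the original:

```python
import numpy as np

# 4-neighbor Laplacian kernel; an 8-neighbor kernel works similarly.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian_boundary_mask(sad, threshold):
    """Apply a Laplacian filter to the SAD distribution and binarize the
    magnitude of the response; True marks candidate boundary regions."""
    h, w = sad.shape
    padded = np.pad(sad.astype(float), 1, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for dy in range(3):                 # correlate with the 3x3 kernel
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return np.abs(out) >= threshold
```

A continuous streak of True values in the mask corresponds to a boundary; an isolated True region at the center corresponds to the no-boundary case of FIG. 17(c).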
With the above processing, the boundary can be extracted directly by applying Laplacian processing to the SAD distribution. Steps 54 and 55 of FIG. 3, which determine and image the motion vectors in the first embodiment, and steps 25 and 26 of FIG. 2, which denoise the motion vectors, convert them to a scalar distribution, and estimate the boundary, can therefore be omitted, greatly reducing the amount of processing.
The present invention is applicable to medical ultrasonic diagnostic and treatment apparatuses, and more generally to apparatuses that measure strain or displacement using electromagnetic waves, including ultrasonic waves.
1: ultrasonic probe, 2: user interface, 3: transmit beamformer, 4: control system, 5: transmit/receive switch, 6: receive beamformer, 7: envelope detector, 8: scan converter, 10: processing unit, 10a: CPU, 10b: memory, 11: parameter setting unit, 12: synthesis unit, 13: display unit.

Claims (14)

  1.  An ultrasonic imaging apparatus comprising: a transmission unit that transmits ultrasonic waves toward an object; a reception unit that receives ultrasonic waves arriving from the object; and a processing unit that processes reception signals of the reception unit to generate images of two or more frames,
     wherein the processing unit takes one of the generated two or more frames as a reference frame and sets a region of interest at a predetermined position or at a position received from an operator; takes another frame as a comparison frame and sets a search region wider than the region of interest at a predetermined position or at a position received from an operator; sets, within the search region, a plurality of candidate regions that are candidates for the destination of the region of interest; and calculates, for each candidate region, the similarity between image characteristic values in the region of interest and in the candidate region, thereby obtaining the distribution of the similarity over the entire search region.
  2.  The ultrasonic imaging apparatus according to claim 1, wherein the processing unit obtains a statistic that compares the minimum value of the similarity with the similarity values as a whole in the similarity distribution, and determines the reliability of the region of interest from the statistic.
  3.  The ultrasonic imaging apparatus according to claim 2, wherein the processing unit obtains the statistic using the minimum value, the average, and the standard deviation of the similarity, and determines the reliability of the region of interest by comparing the obtained statistic with a threshold value.
  4.  The ultrasonic imaging apparatus according to claim 3, wherein the processing unit generates a vector connecting the position corresponding to the region of interest in the comparison frame and the position of the candidate region having the minimum similarity, and, for a region of interest determined to have low reliability, replaces the vector with zero or with a predetermined vector.
  5.  The ultrasonic imaging apparatus according to claim 2, wherein the processing unit calculates the average, minimum value, and standard deviation of the similarity for the similarity distribution, and obtains as the statistic a degree of separation in which the difference between the average and the minimum value is divided by the standard deviation.
  6.  The ultrasonic imaging apparatus according to claim 2, wherein the processing unit calculates the average and standard deviation of the similarity for the similarity distribution, and obtains as the statistic a coefficient of variation in which the standard deviation is divided by the average.
  7.  The ultrasonic imaging apparatus according to claim 3,
     wherein the processing unit sets a plurality of regions of interest, obtains the statistic for each of them, and obtains a histogram distribution indicating the frequency of the obtained statistic values, and
     wherein the median or average of the histogram distribution or, when the histogram distribution exhibits a plurality of peaks, the statistic value at the minimum of a valley between peaks is used as the threshold value.
  8.  The ultrasonic imaging apparatus according to claim 1,
     wherein the processing unit generates a smoothed similarity distribution by smoothing the similarity distribution, and obtains a difference similarity distribution by subtracting the smoothed similarity distribution from the similarity distribution before smoothing.
  9.  The ultrasonic imaging apparatus according to claim 8, wherein the smoothing is a process of setting a filter of a predetermined size in the similarity distribution and smoothing the distribution within the filter, repeated while moving the filter by a predetermined amount, and
     wherein the processing unit generates, for a plurality of regions of interest, vectors connecting the position corresponding to the region of interest in the comparison frame and the position of the candidate region having the minimum similarity in the similarity distribution before smoothing, and takes the maximum vector length among the generated vectors as the size of the filter.
  10.  The ultrasonic imaging apparatus according to claim 1, wherein the processing unit creates a contour-enhancement distribution by filtering the similarity distribution with a Laplacian filter, and obtains a boundary of the object by extracting a continuous contour line in the contour-enhancement distribution.
  11.  The ultrasonic imaging apparatus according to claim 1, wherein the processing unit comprises:
     first processing means for obtaining the similarity distribution for a region of interest set near a tumor boundary in a living body, generating a similarity distribution image having the similarity as the image characteristic value, and setting one-dimensional regions of a predetermined length in a plurality of different directions about the position corresponding to the region of interest on the similarity distribution image;
     second processing means for calculating the sum of the similarity within each one-dimensional region for each set direction;
     third processing means for calculating the ratio between the similarity sum of the direction in which the similarity sum is minimum and the similarity sum of the one-dimensional region in the direction orthogonal thereto; and
     fourth processing means for determining the degree of tumor infiltration based on the ratio.
  12.  The ultrasonic imaging apparatus according to claim 1, wherein the processing unit determines that the region of interest is a point on a boundary line when the ratio calculated by the third processing means is smaller than a preset value.
  13.  An ultrasonic imaging method comprising:
     transmitting ultrasonic waves toward an object, processing reception signals obtained by receiving ultrasonic waves arriving from the object, and generating images of two or more frames;
     selecting a reference frame and a comparison frame from the images;
     setting a region of interest in the reference frame;
     setting, in the comparison frame, a search region wider than the region of interest, and setting, within the search region, a plurality of candidate regions that are candidates for the destination of the region of interest; and
     calculating, for each candidate region, the similarity between image characteristic values in the region of interest and in the candidate region, thereby obtaining the distribution of the similarity over the entire search region.
  14.  A program for ultrasonic imaging that causes a computer to execute:
     a first step of selecting a reference frame and a comparison frame from ultrasonic images of two or more frames;
     a second step of setting a region of interest in the reference frame;
     a third step of setting, in the comparison frame, a search region wider than the region of interest, and setting, within the search region, a plurality of candidate regions that are candidates for the destination of the region of interest; and
     a fourth step of calculating, for each candidate region, the similarity between image characteristic values in the region of interest and in the candidate region, thereby obtaining the distribution of the similarity over the entire search region.
PCT/JP2010/068988 2009-10-27 2010-10-26 Ultrasonic imaging device, ultrasonic imaging method and program for ultrasonic imaging WO2011052602A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2009-246734 2009-10-27
JP2009246734 2009-10-27

Publications (1)

Publication Number Publication Date
WO2011052602A1 true WO2011052602A1 (en) 2011-05-05

Family

ID=43922027

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/068988 WO2011052602A1 (en) 2009-10-27 2010-10-26 Ultrasonic imaging device, ultrasonic imaging method and program for ultrasonic imaging

Country Status (5)

Country Link
US (1) US8867813B2 (en)
EP (1) EP2494924A1 (en)
JP (1) JP5587332B2 (en)
CN (1) CN102596050B (en)
WO (1) WO2011052602A1 (en)


Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093418B (en) * 2013-02-21 2015-08-26 深圳市晶日盛科技有限公司 A kind of digital image scaling method of improvement
WO2014155272A1 (en) * 2013-03-28 2014-10-02 Koninklijke Philips N.V. Real-time quality control for acquisition of 3d ultrasound images
JP5651258B1 (en) * 2014-02-27 2015-01-07 日立アロカメディカル株式会社 Ultrasonic diagnostic apparatus and program
US9686470B2 (en) * 2014-05-30 2017-06-20 Apple Inc. Scene stability detection
WO2017211757A1 (en) * 2016-06-10 2017-12-14 Koninklijke Philips N.V. Using reflected shear waves for monitoring lesion growth in thermal ablations
CN108225496B (en) * 2016-12-15 2020-02-21 重庆川仪自动化股份有限公司 Radar level meter echo signal automatic testing device, method and system
US11611773B2 (en) * 2018-04-06 2023-03-21 Qatar Foundation For Education, Science And Community Development System of video steganalysis and a method for the detection of covert communications
KR102661955B1 (en) * 2018-12-12 2024-04-29 삼성전자주식회사 Method and apparatus of processing image
CN109875607A (en) * 2019-01-29 2019-06-14 中国科学院苏州生物医学工程技术研究所 Infiltrate tissue testing method, apparatus and system
JP6579727B1 (en) * 2019-02-04 2019-09-25 株式会社Qoncept Moving object detection device, moving object detection method, and moving object detection program
CN113556979A (en) * 2019-03-19 2021-10-26 奥林巴斯株式会社 Ultrasonic observation device, method for operating ultrasonic observation device, and program for operating ultrasonic observation device
CN116823829B (en) * 2023-08-29 2024-01-09 深圳微创心算子医疗科技有限公司 Medical image analysis method, medical image analysis device, computer equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002222410A (en) * 2001-01-25 2002-08-09 Hitachi Medical Corp Image diagnostic device
JP2004057275A (en) * 2002-07-25 2004-02-26 Hitachi Medical Corp Image diagnostic device
JP2004121834A (en) * 2002-09-12 2004-04-22 Hitachi Medical Corp Movement tracing method for biological tissue, image diagnosing system using the tracing method, and movement tracing program for biological tissue
JP2004129773A (en) * 2002-10-09 2004-04-30 Hitachi Medical Corp Ultrasonic imaging device and ultrasonic signal processing method
JP2004135929A (en) 2002-10-18 2004-05-13 Hitachi Medical Corp Ultrasonic diagnostic apparatus
JP2004351050A (en) * 2003-05-30 2004-12-16 Hitachi Medical Corp Method of tracking movement of organic tissue in diagnostic image and image diagnostic apparatus using the method
JP2008079792A (en) 2006-09-27 2008-04-10 Hitachi Ltd Ultrasonic diagnostic apparatus
WO2010052868A1 (en) * 2008-11-10 2010-05-14 株式会社日立メディコ Ultrasonic image processing method and device, and ultrasonic image processing program

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5002181B2 (en) * 2006-03-31 2012-08-15 株式会社東芝 Ultrasonic diagnostic apparatus and ultrasonic diagnostic apparatus control method


Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101819028B1 (en) * 2011-07-11 2018-01-16 삼성전자주식회사 Method and apparatus for processing a ultrasound image
WO2013061664A1 (en) * 2011-10-28 2013-05-02 日立アロカメディカル株式会社 Ultrasound imaging apparatus, ultrasound imaging method and ultrasound imaging program
JPWO2013061664A1 (en) * 2011-10-28 2015-04-02 日立アロカメディカル株式会社 Ultrasonic imaging apparatus, ultrasonic imaging method, and ultrasonic imaging program
CN103906473A (en) * 2011-10-28 2014-07-02 日立阿洛卡医疗株式会社 Ultrasound imaging apparatus, ultrasound imaging method and ultrasound imaging program
US20130170721A1 (en) * 2011-12-29 2013-07-04 Samsung Electronics Co., Ltd. Method and apparatus for processing ultrasound image
KR20130077406A (en) * 2011-12-29 2013-07-09 삼성전자주식회사 Apparatus and method for processing ultrasound image
KR101881924B1 (en) * 2011-12-29 2018-08-27 삼성전자주식회사 Apparatus and Method for processing ultrasound image
US9129186B2 (en) * 2011-12-29 2015-09-08 Samsung Electronics Co., Ltd. Method and apparatus for processing ultrasound image
CN103536305A (en) * 2012-07-11 2014-01-29 通用电气公司 Systems and methods for performing image type recognition
JP2014099178A (en) * 2012-11-15 2014-05-29 Thomson Licensing Method for superpixel life cycle management
KR102128121B1 (en) * 2012-11-15 2020-06-29 인터디지털 브이씨 홀딩스 인코포레이티드 Method for superpixel life cycle management
KR20140063440A (en) * 2012-11-15 2014-05-27 톰슨 라이센싱 Method for superpixel life cycle management
WO2014080833A1 (en) * 2012-11-21 2014-05-30 株式会社東芝 Ultrasonic diagnostic device, image processing device, and image processing method
US10376236B2 (en) 2012-11-21 2019-08-13 Canon Medical Systems Corporation Ultrasound diagnostic apparatus, image processing apparatus, and image processing method
WO2014103501A1 (en) * 2012-12-28 2014-07-03 興和株式会社 Image processing device, image processing method, image processing program, and recording medium storing said program
JPWO2014103501A1 (en) * 2012-12-28 2017-01-12 興和株式会社 Image processing apparatus, image processing method, image processing program, and recording medium storing the program
JP2018000261A (en) * 2016-06-27 2018-01-11 株式会社日立製作所 Ultrasonic imaging apparatus, ultrasonic imaging method, and coupling state evaluation apparatus

Also Published As

Publication number Publication date
JPWO2011052602A1 (en) 2013-03-21
EP2494924A1 (en) 2012-09-05
US20120224759A1 (en) 2012-09-06
JP5587332B2 (en) 2014-09-10
CN102596050A (en) 2012-07-18
CN102596050B (en) 2014-08-13
US8867813B2 (en) 2014-10-21


Legal Events

WWE (WIPO information: entry into national phase): Ref document number 201080046798.3; country of ref document: CN
121 (Ep: the EPO has been informed by WIPO that EP was designated in this application): Ref document number 10826734; country of ref document: EP; kind code of ref document: A1
WWE (WIPO information: entry into national phase): Ref document number 2011538440; country of ref document: JP
WWE (WIPO information: entry into national phase): Ref document number 13503858; country of ref document: US; Ref document number 2010826734; country of ref document: EP
NENP (Non-entry into the national phase): Ref country code DE