WO2019130636A1 - Ultrasound imaging device, image processing device, and method - Google Patents

Ultrasound imaging device, image processing device, and method Download PDF

Info

Publication number
WO2019130636A1
Authority
WO
WIPO (PCT)
Prior art keywords
organ
image
image processing
ultrasound
area
Prior art date
Application number
PCT/JP2018/028698
Other languages
French (fr)
Japanese (ja)
Inventor
子盛 黎
Original Assignee
株式会社日立製作所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社日立製作所 filed Critical 株式会社日立製作所
Publication of WO2019130636A1 publication Critical patent/WO2019130636A1/en

Links

Images

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/13Tomography
    • A61B8/14Echo-tomography

Definitions

  • The present invention relates to an ultrasonic imaging apparatus, and more particularly to an imaging technique for simultaneously displaying a characteristic site in a subject in a captured ultrasound image and in an image of the same cross section captured by another imaging apparatus.
  • An ultrasonic imaging apparatus transmits ultrasonic waves into a subject and images the internal structure of the subject from the reflected signals, making it possible to observe a patient non-invasively and in real time.
  • Other medical imaging devices, such as X-ray CT (Computed Tomography) and MRI (Magnetic Resonance Imaging) devices, can image a wide range at high resolution, so the positional relationship of fine lesions and organs is easy to grasp.
  • Tumors such as liver cancer can therefore be found at an early stage in MRI and X-ray CT images.
  • Diagnostic imaging systems are also becoming popular in which a position sensor is attached to the ultrasound probe to calculate the positional relationship of the scan plane, and a two-dimensional (2D) image corresponding to the ultrasound scan plane is constructed and displayed from three-dimensional diagnostic volume data captured by a medical image diagnostic apparatus.
  • Patent Document 1 describes a method of aligning an ultrasonic three-dimensional image (volume) with an MRI three-dimensional image using blood vessel information, and constructing an MRI cross-sectional image corresponding to the ultrasound scan plane image.
  • In this technique, an ultrasound image and an MRI image are acquired, a plurality of image areas are set around the branch points of blood vessel bifurcations, an index representing the image features is calculated for each image area, and the same blood vessel bifurcation is identified by repeatedly calculating the bifurcation similarity while changing the combination being compared; a geometric transformation matrix for alignment is then estimated.
  • The alignment result is used to fit the MRI image to the ultrasound image and to generate and display the corresponding cross-sectional image.
  • In recent years, it has become desirable to confirm the area to be treated during surgery (by thermal treatment, surgical excision, or the like) for a tumor or similar lesion using intraoperative ultrasound images together with the corresponding high-resolution MRI or CT images; that is, alignment of the intraoperative ultrasound image with a high-resolution modality image is desired.
  • Because the imaging directions of intraoperative ultrasound and the other modality images differ greatly, it is difficult to estimate the initial values for alignment, and proper ultrasound probe scanning strongly affects the success of alignment.
  • Moreover, the ultrasound probe scan used to acquire the ultrasound volume for alignment depends on the operator's technique and the patient's condition, and the scanned range and site vary widely from operator to operator and patient to patient.
  • If the acquired intraoperative ultrasound volume is inappropriate for alignment, the alignment fails, and the ultrasound volume must be re-acquired and the alignment re-executed, which takes the operator time and effort and increases the burden on the patient. It is therefore desirable to have a function that determines in advance whether the acquired ultrasound volume, or the scanning of the ultrasound probe, is appropriate for alignment, and that accurately guides the ultrasound probe scanning.
  • To achieve the above object, the present invention provides an ultrasonic imaging apparatus comprising: an ultrasound probe that transmits ultrasonic waves to a subject and receives ultrasonic waves from the subject; an image generation unit that generates an ultrasound image from the reception signal of the ultrasound probe; and an image processing apparatus that processes the ultrasound image and second volume data of the subject. The image processing apparatus includes an attention area identification/extraction unit that identifies and extracts organ attention areas of the ultrasound image, and an alignment determination unit that uses the extracted organ attention areas to determine whether the scanning range and site of the ultrasound probe are appropriate for alignment between the second volume data and the ultrasound image or first volume data generated from the ultrasound image.
  • The present invention also provides an image processing apparatus including: an attention area identification/extraction unit that identifies and extracts organ attention areas of an ultrasound image generated from the signals received after transmitting ultrasonic waves from an ultrasound probe to a subject; and an alignment determination unit that uses the extracted organ attention areas to determine whether the scanning range and site of the ultrasound probe are appropriate for alignment between second volume data of the subject and the ultrasound image or first volume data generated from the ultrasound image.
  • The present invention further provides an image processing method in an image processing apparatus, in which the image processing apparatus identifies and extracts organ attention areas of an ultrasound image generated from the signals received after transmitting ultrasonic waves from an ultrasound probe to a subject, and uses the extracted organ attention areas to determine whether the scanning range and site of the ultrasound probe are appropriate for alignment between second volume data of the subject and the ultrasound image or first volume data generated from the ultrasound image.
  • According to the present invention, the scanning range and site of the ultrasound probe can be identified and extracted, and the position and location of the organ attention areas to be aligned can be specified, so it can be determined whether the scanning of the ultrasound probe is appropriate for alignment with volume data from another imaging apparatus.
  • FIG. 1 is a block diagram showing an example of the overall configuration of an ultrasonic imaging apparatus according to a first embodiment.
  • FIG. 2 is a block diagram showing an example of the hardware configuration of the ultrasonic imaging apparatus according to the first embodiment.
  • FIG. 3 is a functional block diagram of the image processing apparatus of the ultrasonic imaging apparatus according to the first embodiment.
  • FIG. 4 is a flowchart showing the processing flow of the image processing apparatus of the ultrasonic imaging apparatus according to the first embodiment.
  • FIG. 5 is a flowchart showing the flow of identification and extraction processing of an attention area according to the first embodiment.
  • FIG. 6 is a diagram showing an example of site names and organ area information in an attention area according to the first embodiment.
  • FIG. 7 is a flowchart showing the flow of processing for training a learning model that identifies and extracts an attention area according to the first embodiment.
  • FIG. 8 is a diagram showing an example of area candidates of an attention area according to the first embodiment.
  • FIG. 9 is a diagram showing an example of organ area information of an attention area according to the first embodiment.
  • FIG. 10 is an explanatory diagram showing the flow of training the learning model that identifies and extracts an attention area according to the first embodiment.
  • FIG. 11 is a flowchart showing the flow of processing for determining whether ultrasound probe scanning is appropriate for alignment according to the first embodiment.
  • FIG. 12 is a table showing an example of the organ area, site name, and volume weighting of each attention area according to the first embodiment.
  • FIG. 13 is a block diagram showing an example of the overall configuration of an ultrasonic imaging apparatus according to a second embodiment.
  • FIG. 14 is a block diagram showing an example of the hardware configuration of the ultrasonic imaging apparatus according to the second embodiment.
  • FIG. 15 is a functional block diagram of the image processing apparatus of the ultrasonic imaging apparatus according to the second embodiment.
  • FIG. 16 is a flowchart showing the processing flow of the image processing apparatus of the ultrasonic imaging apparatus according to the second embodiment.
  • FIG. 17 is an explanatory diagram showing an example display of the alignment result and the attention area extraction result of the image processing apparatus according to the second embodiment.
  • FIG. 18 is a flowchart showing the flow of alignment processing of the image processing apparatus according to the second embodiment.
  • FIG. 19 is an explanatory diagram showing an example display of the current scanning area of the ultrasonic transducer and a recommended additional scanning area and direction according to the first embodiment.
  • The first embodiment is an embodiment of an ultrasonic imaging apparatus, and of its image processing apparatus and image processing method, comprising: an ultrasound probe that transmits ultrasonic waves to a subject and receives ultrasonic waves from the subject; an image generation unit that generates an ultrasound image from the reception signal of the ultrasound probe; and an image processing apparatus that processes the ultrasound image and second volume data of the subject. The image processing apparatus includes an attention area identification/extraction unit that identifies and extracts organ attention areas of the ultrasound image, and an alignment determination unit that uses the extracted organ attention areas to determine whether the scanning range and site of the ultrasound probe are appropriate for alignment between the second volume data and the ultrasound image or first volume data generated from the ultrasound image.
  • FIG. 1 illustrates an example of the entire configuration of an ultrasonic imaging apparatus according to a first embodiment.
  • the ultrasound imaging apparatus according to the present embodiment includes an ultrasound probe 7, an image generation unit 107, and an image processing device 108, and further includes a transmission unit 102, a transmission / reception switching unit 101, a reception unit 105, and a user interface ( UI) 121, and a control unit 106.
  • the external display 16 may be part of a user interface (UI) 121.
  • The transmission unit 102 generates a transmission signal under the control of the control unit 106 and delivers it to each of the plurality of ultrasonic elements constituting the ultrasound probe 7.
  • The plurality of ultrasonic elements of the ultrasound probe 7 each transmit ultrasonic waves toward the subject 120.
  • The ultrasonic waves reflected by the subject 120 return to the plurality of ultrasonic elements of the ultrasound probe 7, where they are received and converted into electric signals.
  • The signals received by the ultrasonic elements are delayed by the reception unit 105 by predetermined delay amounts according to the position of the receive focus, and are then phased and summed (delay-and-sum). This is repeated for each of a plurality of receive focuses.
  • the signal after the phasing addition is delivered to the image generation unit 107.
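  • The receive beamforming just described, delaying each element signal to a common receive focus and then summing, can be sketched as follows. This is a minimal, hypothetical delay-and-sum illustration in Python/NumPy; the geometry, speed of sound, and sampling rate are assumed values, not the apparatus's actual parameters.

```python
import numpy as np

def delay_and_sum(rf, elem_x, focus, c=1540.0, fs=40e6):
    """Minimal delay-and-sum receive beamforming sketch (assumed parameters).

    rf      : (n_elements, n_samples) received RF signals
    elem_x  : (n_elements,) lateral element positions [m]
    focus   : (x, z) receive focus position [m]
    c       : assumed speed of sound [m/s]
    fs      : assumed sampling frequency [Hz]
    """
    fx, fz = focus
    # Round-trip path: transmit depth plus element-to-focus return path.
    dist = fz + np.sqrt((elem_x - fx) ** 2 + fz ** 2)
    delays = np.rint(dist / c * fs).astype(int)   # per-element delay [samples]
    delays -= delays.min()                        # keep indices in range
    # Align each channel to the focus (wrap-around at the trace end is
    # ignored in this sketch) and sum across elements.
    aligned = np.array([np.roll(ch, -d) for ch, d in zip(rf, delays)])
    return aligned.sum(axis=0)

# Repeating this for each receive focus yields one beamformed scan line.
```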
  • the transmission / reception switching unit 101 selectively connects the transmission unit 102 or the reception unit 105 to the ultrasound probe 7.
  • the image generation unit 107 performs processing such as arranging the phasing addition signal received from the reception unit 105 at a position corresponding to the reception focus, generates a 2D ultrasound image, and outputs the 2D ultrasound image to the image processing apparatus 108.
  • the image processing device 108 generates first volume data of a three-dimensional ultrasound image using the received ultrasound image.
  • In addition to the 2D ultrasound images and the first volume data, the image processing device 108 receives, via the user interface (UI) 121, second volume data obtained for the subject 120 by another imaging apparatus; it identifies and extracts the organ attention areas of the ultrasound image in pixel units, identifies the scanning area and site of the ultrasound probe, and determines whether they are appropriate for alignment.
  • In other words, the image processing device 108 includes an attention area identification/extraction unit that identifies and extracts the organ attention areas of the ultrasound image generated from the signals received after transmitting ultrasound from the ultrasound probe to the subject, and an alignment determination unit that uses the extracted organ attention areas to determine whether the scanning range and site of the ultrasound probe are appropriate for alignment between the second volume data and the ultrasound image or first volume data generated from the ultrasound image.
  • In the following description, other imaging apparatuses such as MRI apparatuses, X-ray CT apparatuses, and other ultrasound diagnostic apparatuses are referred to as medical modalities.
  • In this embodiment, an X-ray CT apparatus is used as an example of a medical modality, and the volume data of the X-ray CT apparatus is referred to as CT volume data, which serves as the second volume data mentioned above.
  • FIG. 2 shows an example of the hardware configuration of the image processing apparatus 108 and the user interface 121.
  • the hardware configuration shown in FIG. 2 is commonly used in the other embodiments described later.
  • The image processing apparatus 108 comprises a CPU (processor) 1, a ROM (nonvolatile, read-only storage medium) 2, a RAM (volatile storage medium allowing data reads and writes) 3, a storage device 4, and a display control unit 15.
  • The user interface (UI) 121 comprises an image input unit 9, a medium input unit 11, an input control unit 13, and an input device 14. These units and the image generation unit 107 are interconnected by the bus 5. A display 16 is connected to the display control unit 15 of the image processing apparatus 108; this display 16 can be regarded as the output unit of the user interface.
  • At least one of the ROM 2 and the RAM 3 stores, in advance, programs and data required to realize the operation of the image processing apparatus 108 by the arithmetic processing of the CPU 1.
  • the CPU 1 executes a program stored in advance in at least one of the ROM 2 and the RAM 3 to implement various processes of the image processing apparatus 108 which will be described in detail later.
  • the program executed by the CPU 1 may be stored, for example, in a storage medium 12 such as an optical disk, and the medium input unit 11 such as an optical disk drive may read the program and store it in the RAM 3.
  • the program may be stored in the storage device 4 and loaded from the storage device 4 into the RAM 3.
  • the program may be stored in advance in the ROM 2.
  • The image input unit 9 is an interface for importing CT volume data captured by the imaging apparatus 10, which is a medical modality such as an X-ray CT apparatus.
  • the storage device 4 is a magnetic storage device that stores CT volume data and the like input through the image input unit 9.
  • the storage device 4 may include, for example, a non-volatile semiconductor storage medium such as a flash memory.
  • Alternatively, an external storage device connected via a network or the like may be used.
  • the input device 14 is a device that receives a user's operation, and includes, for example, a keyboard, a trackball, an operation panel, a foot switch, and the like.
  • The input control unit 13 is an interface that receives operation inputs from the user.
  • the operation input received by the input control unit 13 is processed by the CPU 1.
  • the display control unit 15 controls the display 16 to display, for example, image data obtained by the processing of the CPU 1.
  • the display 16 displays an image under the control of the display control unit 15.
  • FIG. 3 is a block diagram showing the functions of the image processing apparatus 108 of this embodiment.
  • The image processing apparatus 108 processes 2D ultrasound images, which are ultrasound scan images acquired by the ultrasound image acquisition unit 28 composed of the transmission unit 102, the reception unit 105, and the image generation unit 107.
  • In step S201, the CT volume data receiving unit 22 receives CT volume data from the imaging apparatus 10 via the image input unit 9.
  • In step S202, a prompt to move or scan the ultrasound probe 7 is displayed on the display 16.
  • When the user moves the ultrasound probe 7 over the area of the organ according to the display, the ultrasound image acquisition unit 28 generates and acquires 2D ultrasound images, which are ultrasound scan images.
  • The ultrasound volume data generation unit 23 receives the 2D ultrasound images continuously generated by the image generation unit 107 of the ultrasound image acquisition unit 28.
  • In step S203, the attention area identification/extraction unit 21 for ultrasound images identifies and extracts predetermined organ attention areas in pixel units from the continuously generated 2D ultrasound images, and identifies and estimates the scanned site and position of the ultrasound probe 7. As a result, masks of the organ attention areas annotated with the scanned site and position information are generated.
  • In step S204, the ultrasound volume data generation unit 23 generates ultrasound volume data as the first volume data based on the continuously generated 2D ultrasound images.
  • In step S205, the attention area identification/extraction unit 24 for the CT volume identifies and extracts predetermined organ attention areas in pixel units from the CT volume data, and generates, as a result, masks of the organ attention areas annotated with the site and position information of each area.
  • The attention area information of the CT volume and the attention area information of the ultrasound image generated by the attention area identification/extraction unit 21 are output to the alignment determination unit 32 as ultrasound/CT attention area information 25.
  • In step S206, it is determined whether the obtained ultrasound image is appropriate for alignment.
  • The alignment determination unit 32 calculates the volume of each predetermined organ attention area from the attention area information of the CT volume, and calculates the volume of each organ attention area within the ultrasound probe scanning range from the attention area information of the ultrasound image. It then calculates a weighted average of the ratios of the volumes of corresponding areas in the ultrasound probe scan and the CT volume, compares it with a preset threshold, and determines whether the ultrasound scan is appropriate for alignment. If the scan is determined to be inappropriate, a further threshold comparison determines whether to perform an additional scan.
  • If it is determined in step S207 that an additional scan is to be performed, the scanning direction and scanning area of the ultrasound probe 7 are presented to the user on the image display unit 31, such as the display 16. A specific display example will be described later.
  • If it is determined in step S208 that an additional scan is not to be performed, the determination result that the scanning of the ultrasound probe 7 is inappropriate for alignment is displayed on the image display unit 31. If the scan is determined in step S208 to be appropriate for alignment, the determination result and the attention area information of the ultrasound image are displayed to the user on the image display unit 31.
  • The attention area identification/extraction unit 21 trains a first learning model that identifies and extracts attention area candidates of the organ attention areas, and uses the generated parameters of the first learning model as the initial parameters of a second learning model that identifies and extracts areas annotated with organ area information, that is, anatomical area information indicating the anatomical position within the organ. Furthermore, the attention area identification/extraction unit 21 uses the generated parameters of the second learning model as the initial parameters of a third learning model that identifies and extracts areas annotated with both the site name information of the organ attention areas and the organ area information, trains the third learning model, and generates the result as the final learning model.
  • In step S301, the attention area identification/extraction unit 21 for ultrasound images receives a 2D ultrasound image, which is an ultrasound scan image, from the image generation unit 107.
  • In step S302, image preprocessing such as noise removal and contrast enhancement is performed.
  • In step S303, the learning model for identification and extraction is read.
  • In step S304, the organ attention areas, including site and position information, are identified and extracted based on the learning model.
  • In step S305, mask images of the identified and extracted organ attention areas are generated.
  • The FCN (Fully Convolutional Network) method performs pixel-level image estimation (semantic segmentation) by replacing the fully connected layers of a deep-learning CNN (Convolutional Neural Network) with convolution layers.
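  • As a concrete illustration of the FCN idea (no fully connected layers, so the network outputs a per-pixel class-score map), the following is a minimal sketch in PyTorch. The layer sizes and the 24-class output are illustrative assumptions tied to the class table described later, not the patent's actual network.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Minimal fully convolutional network for per-pixel segmentation.

    A real U-Net adds an encoder-decoder with skip connections; here only
    the defining FCN property is shown: no fully connected layers, so the
    output is a class-score map with the same spatial size as the input.
    """
    def __init__(self, n_classes=24):          # 24 classes, as in the table example
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        # A 1x1 convolution replaces the fully connected classifier.
        self.classifier = nn.Conv2d(32, n_classes, kernel_size=1)

    def forward(self, x):                       # x: (B, 1, H, W) ultrasound image
        return self.classifier(self.features(x))  # (B, n_classes, H, W) score map

# Per-pixel labels (the attention-area mask) from the score map:
mask = TinyFCN()(torch.randn(1, 1, 256, 256)).argmax(dim=1)
```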
  • FIG. 6 shows an example display of a 2D ultrasound image, which is an ultrasound scan image of the liver, together with the site names of the organ attention areas and the predetermined attention areas (identification/extraction targets) as organ area information.
  • FIG. 6A shows a 2D ultrasound image 122 of the liver.
  • FIG. 6B shows a mask image 123 with supervised labels used at training time, that is, the correct training data. In FIG. 6B, the site names and organ area information of four types of organ attention areas are shown: a vein area (right anterior superior area), an inferior right vein area (right inferior area), a portal vein area (anterior superior area of the right lobe), and a gallbladder area (inferior area of the right lobe).
  • The attention area identification/extraction units 21 and 24 of the image processing apparatus 108 of this embodiment execute a three-stage learning process.
  • The learning process of this embodiment will be described using the flowchart shown in FIG. 7 and the image examples shown in FIGS. 8 to 10.
  • Although the attention area identification/extraction unit 21 for ultrasound images is described as an example, the same processing can be applied to CT images.
  • In step S401, a learning image is input.
  • In step S402, a mask image of attention area candidates prepared in advance is input.
  • FIGS. 8A and 8B show an example of a learning image and the corresponding attention area candidate mask image.
  • In step S403, the first learning model is trained using the learning images and the attention area candidate mask images. The purpose of this training is to generate a learning model that can identify and extract attention area candidates of the organ attention areas.
  • In step S404, the parameters of the generated first learning model are used as the initial parameters of the second learning model.
  • In step S405, the organ area information correct mask images of the learning attention areas, prepared in advance, are input.
  • FIGS. 9A and 9B show an example of a learning image 124 and the corresponding organ area information correct mask image 125. As organ area information, which is the anatomical area information of an organ, the right anterior inferior area and the right anterior superior area of the liver are shown.
  • In step S406, the initialized second learning model is further trained using the learning images and the organ area information correct mask images. The purpose of this training is to identify and extract the areas annotated with the organ area information of the organ attention areas.
  • In step S407, the parameters of the generated second learning model are used as the initial parameters of the third learning model, that is, of the model that identifies and extracts the areas annotated with both the site name information and the organ area information of the organ attention areas.
  • In step S408, the learning images and the corresponding correct mask images of the organ attention areas annotated with site name information and organ area information, prepared in advance, are input.
  • In step S409, the third learning model initialized in step S407 is further trained using the image data input in step S408, and the result is generated as the final learning model.
  • FIG. 10 schematically shows the three-stage learning process described above.
  • Although a three-stage learning process has been described as an example, the invention is not limited to three stages, and the learning process can be executed in a different number of stages, such as two or four, as needed.
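  • One plausible way to organize the staged training described above is sketched below, reusing the TinyFCN class from the earlier sketch. The data loaders, the class counts per stage, and the state-dict hand-off are assumptions for illustration, not the patent's specified procedure.

```python
import torch
import torch.nn as nn

def train_stage(model, loader, epochs=10):
    """One training stage: per-pixel cross-entropy against that stage's masks."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for image, mask in loader:          # mask: per-pixel labels for this stage
            opt.zero_grad()
            loss_fn(model(image), mask).backward()
            opt.step()
    return model

def transfer_params(src, dst):
    """Initialize dst with every src parameter whose name and shape match;
    the classification head differs between stages and stays freshly initialized."""
    dst_sd = dst.state_dict()
    dst_sd.update({k: v for k, v in src.state_dict().items()
                   if k in dst_sd and dst_sd[k].shape == v.shape})
    dst.load_state_dict(dst_sd)
    return dst

# Stage 1: attention-area candidates (assumed: candidate vs. background).
model1 = train_stage(TinyFCN(n_classes=2), candidate_loader)
# Stage 2: organ area information, initialized from stage-1 parameters
# (assumed: 8 liver areas + background).
model2 = train_stage(transfer_params(model1, TinyFCN(n_classes=9)), area_loader)
# Stage 3: site name + organ area, initialized from stage 2 -> final model
# (assumed: 24 classes + background).
final_model = train_stage(transfer_params(model2, TinyFCN(n_classes=25)),
                          site_area_loader)
```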
  • In step S501, the organ attention area information of the ultrasound image and the CT volume is received.
  • In step S502, the scanned site is estimated from the organ area information in the ultrasound attention area information.
  • In step S503, the volume of each organ attention area is estimated from the ultrasound attention area information.
  • In step S504, the volume of each organ attention area corresponding to an ultrasound attention area is estimated from the CT volume.
  • In step S505, it is determined whether the scanning of the ultrasound probe 7 is appropriate for alignment using, for example, the ratio of the two volumes.
  • The U-Net method, one method for identifying and extracting the organ attention areas described above, is applicable to 2D images and can identify and extract organ attention areas from continuous 2D ultrasound images and estimate their volumes.
  • When the ultrasound probe 7 can directly acquire ultrasound volume data, the 3D U-Net or V-Net methods, which are three-dimensional extensions of the U-Net method, can be used for the ultrasound and CT volumes.
  • FIG. 12 shows a calculation table 126 giving an example of the organ area, site name, and volume weighting of each organ attention area. Here the liver is used as the example organ, with four areas in each of the right lobe and the left lobe as the anatomical areas and the portal vein, vein, and gallbladder as the site names, for a total of 24 identification/extraction targets (classes) (8 areas x 3 structures). Each of the 24 classes is assigned a volume weight w.
  • The alignment determination unit 32 calculates weighted sums (V_US, V_CT) of the volumes of the organ attention areas extracted from the ultrasound image and from the CT volume, respectively.
  • If V_US / V_CT > T_1, it is determined that the scan of the ultrasound probe 7 is appropriate for alignment. If T_1 > V_US / V_CT > T_2, it is determined that the scan of the ultrasound probe 7 is not yet appropriate for alignment but that an additional scan can make it appropriate, and the image display unit 31 presents the scanning direction and site to the user.
  • For example, the current scan range and the recommended additional scan range are presented to the user by a message 129 displayed in the left area of the display screen 128, and the user can be guided by displaying the current scanning position of the ultrasound probe and the recommended scanning direction on an image of the subject using arrows or the like. If V_US / V_CT ≤ T_2, it is determined that the scan of the ultrasound probe 7 is not appropriate for alignment, and the scan is redone.
  • That is, the alignment determination unit 32 performs weighted addition over the identified and extracted organ attention areas of the ultrasound image to calculate a first weighted volume, performs weighted addition over the corresponding organ attention areas of the second volume data to calculate a second weighted volume, and compares the ratio of the first weighted volume to the second weighted volume with predetermined thresholds to determine whether the scanning range and site of the ultrasound probe are appropriate for alignment with the second volume data. Furthermore, when the alignment determination unit 32 determines that the scanning range and site of the ultrasound probe are inappropriate for alignment with the second volume data, it outputs the recommended additional scanning range and site of the ultrasound probe to the image display unit 31 to guide the user.
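  • The determination logic reduces to a small computation, sketched below under assumed inputs: labeled masks from the segmentation step, a per-class weight table, per-voxel volumes, and example values for the thresholds T_1 and T_2.

```python
import numpy as np

def weighted_volume(mask, weights, voxel_ml):
    """Weighted sum of per-class volumes in a labeled mask (class 0 = background)."""
    return sum(w * np.count_nonzero(mask == c) * voxel_ml
               for c, w in weights.items())

def judge_scan(us_mask, ct_mask, weights, us_voxel_ml, ct_voxel_ml,
               t1=0.8, t2=0.4):            # T_1 > T_2; both values are assumed
    v_us = weighted_volume(us_mask, weights, us_voxel_ml)   # first weighted volume
    v_ct = weighted_volume(ct_mask, weights, ct_voxel_ml)   # second weighted volume
    ratio = v_us / v_ct
    if ratio > t1:
        return "appropriate for alignment"
    if ratio > t2:
        return "inappropriate; recommend additional scan"
    return "inappropriate; rescan"

# `weights` maps each of the 24 classes to its volume weight w from the
# calculation table, e.g. {1: 1.0, 2: 0.5, ...} (illustrative values).
```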
  • In addition to the displays described above, the image display unit 31 can display the segmentation results and the calculated volume values of the identified and extracted attention areas.
  • The alignment determination thresholds T_1 and T_2 described above can be adjusted through the user interface (UI) 121.
  • As described above, according to the configuration of the first embodiment, the organ attention areas of the ultrasound image are identified and extracted in pixel units, and the scanning area and site of the ultrasound probe can be identified to determine whether the scan is appropriate for alignment.
  • The second embodiment is an embodiment of an ultrasonic imaging apparatus, an image processing apparatus, and a method that align the ultrasound and CT images and then simultaneously display an intraoperative ultrasound image acquired by real-time scanning and the corresponding high-resolution CT image, thereby accurately guiding surgery.
  • the same components and processes as those of the first embodiment are denoted by the same reference numerals, and the description thereof is omitted.
  • FIG. 13 shows one configuration example of the ultrasonic imaging apparatus in the second embodiment.
  • FIG. 14 is a block diagram showing an example of the hardware configuration of the image processing apparatus 108 and the user interface (UI) 121 in the second embodiment.
  • In addition to the configuration of the first embodiment, a position sensor 8 and a position detection unit 6 are provided.
  • the position detection unit 6 detects the position of the ultrasonic probe 7 from the output of the position sensor 8.
  • a magnetic sensor unit can be used as the position detection unit 6.
  • The position detection unit 6 forms a magnetic field space, and the position sensor 8 detects the magnetic field, whereby coordinates relative to a reference point can be detected.
  • The image generation unit 107 receives the position information of the ultrasound probe 7 at that moment from the position detection unit 6 and attaches it to the generated ultrasound image.
  • As the user moves the ultrasound probe 7, the image generation unit 107 generates ultrasound images to which the position information of the ultrasound probe 7 at each moment is attached and outputs them to the image processing device 108, so that the image processing device 108 can generate the first volume data of the three-dimensional ultrasound image.
  • FIG. 15 is a block diagram showing an example of the function of the image processing apparatus 108 of this embodiment.
  • In addition to the configuration of the first embodiment, the image processing apparatus 108 includes an ultrasound probe position information acquisition unit 29, a CT volume coordinate conversion calculation (alignment) unit 26 that executes the alignment, a real-time 2D-CT image calculation unit 27, and a real-time 2D ultrasound image acquisition unit 30.
  • Steps S501, S502, S504, S506, S507, and S508 are the same as steps S201, S202, S203, S205, S206, and S207 illustrated in FIG. 4.
  • In step S503, the position detection unit 6 detects the position of the ultrasound probe 7 from the output of the position sensor 8, and the ultrasound volume data generation unit 23 receives the real-time position information of the ultrasound probe.
  • In step S505, the ultrasound volume data generation unit 23 generates ultrasound volume data as the first volume data based on the continuously generated 2D ultrasound images and the position information of the ultrasound probe attached to them.
  • The CT volume coordinate conversion calculation (alignment) unit 26 receives the ultrasound/CT attention area information 25 and calculates a registration transformation matrix for aligning the CT volume with the ultrasound volume. Details of the alignment transformation matrix calculation will be described later.
  • the real time 2D ultrasound image acquisition unit 30 receives a 2D ultrasound image which is an ultrasound scan image acquired in real time from the ultrasound image acquisition unit 28.
  • The real-time 2D-CT image calculation unit 27, which computes the CT cross sections, receives the real-time position information of the ultrasound probe corresponding to the 2D ultrasound image, as in step S505.
  • The real-time 2D-CT image calculation unit 27 uses the position information of the ultrasound probe 7 and the coordinate transformation matrix of the CT volume to calculate, in real time from the CT volume, the 2D-CT cross-sectional image corresponding to the 2D ultrasound image acquired in real time.
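  • One plausible formulation of this real-time computation is to compose the probe pose from the position sensor with the US-to-CT registration matrix, map each pixel of the current scan plane into CT coordinates, and sample the CT volume there. The homogeneous 4x4 matrices, millimeter spacing, and trilinear sampling below are assumptions for illustration, not the patent's stated implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def ct_slice_for_scan_plane(ct_vol, probe_pose, us2ct, h, w, pixel_mm):
    """Resample the CT cross-section matching the current ultrasound scan plane.

    ct_vol     : 3D CT volume (assumed 1 mm isotropic voxels for simplicity)
    probe_pose : 4x4 matrix, scan-plane pixel coords (mm) -> US volume coords
    us2ct      : 4x4 registration matrix, US volume coords -> CT coords
    h, w       : output image size in pixels; pixel_mm: pixel pitch [mm]
    """
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    plane = np.stack([xs * pixel_mm,          # lateral position on the scan plane
                      ys * pixel_mm,          # depth position on the scan plane
                      np.zeros_like(xs),      # the scan plane itself is z = 0
                      np.ones_like(xs)])      # homogeneous coordinate
    pts = (us2ct @ probe_pose) @ plane.reshape(4, -1)   # map into CT coordinates
    coords = pts[:3] / pts[3]                           # drop homogeneous scale
    # Trilinear interpolation of the CT volume at the mapped (z, y, x) positions.
    slab = map_coordinates(ct_vol, coords[::-1], order=1, mode="constant")
    return slab.reshape(h, w)
```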
  • The image display unit 31 receives the 2D ultrasound image, the 2D-CT cross-sectional image, and the ultrasound/CT attention area information 25.
  • The image display unit 31 displays the 2D-CT cross-sectional image (CT) and the 2D ultrasound image (US) in different screen areas of its display area 127, as in the illustrated example.
  • The site and area information of the attention areas and the estimated volume of the ultrasound scan are also displayed in the display area 127.
  • In step S601, the CT volume coordinate conversion calculation (alignment) unit 26 receives the ultrasound volume (first volume) and the CT volume (second volume).
  • In step S602, the ultrasound/CT attention area information 25 is received.
  • In step S603, the CT volume coordinate conversion calculation (alignment) unit 26 aligns the point clouds of the ultrasound and CT attention areas.
  • The well-known ICP (Iterative Closest Point) method can be used as an automatic registration method.
  • In the ICP method, the point group of the CT attention area is subjected to geometric transformation, that is, translation and rotation, and the distances to the corresponding points in the ultrasound attention area are calculated repeatedly so as to minimize them. Thereby, the two point clouds can be aligned.
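  • A minimal rigid ICP loop consistent with this description (transform the CT attention-area point cloud, find the nearest ultrasound points, minimize the distances, repeat) might look as follows; the SVD-based best-fit step and the KD-tree correspondence search are standard choices assumed here, not specified by the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(ct_pts, us_pts, n_iter=50):
    """Rigidly align CT attention-area points (Nx3) to ultrasound points (Mx3)."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(us_pts)
    src = ct_pts.copy()
    for _ in range(n_iter):
        _, idx = tree.query(src)              # nearest ultrasound point for each
        dst = us_pts[idx]
        # Best-fit rotation/translation for the current correspondences (Kabsch).
        sc, dc = src.mean(0), dst.mean(0)
        U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
        R_step = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t_step = dc - R_step @ sc
        src = src @ R_step.T + t_step           # rotate and translate the points
        R, t = R_step @ R, R_step @ t + t_step  # accumulate the transform
    return R, t                                 # maps ct_pts into US coordinates
```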
  • In step S604, the CT volume coordinate conversion calculation (alignment) unit 26 performs image-based alignment of the ultrasound volume and the CT volume.
  • The parameters of the image-based alignment are initialized using the result of the point cloud alignment of the attention areas.
  • The CT volume coordinate conversion calculation (alignment) unit 26 acquires sample point data for alignment from each of the ultrasound volume and the CT volume.
  • As the sample point data, all pixels in the image area may be extracted as sampling points, or pixel values sampled randomly or on a grid may be used. Furthermore, sampling may be restricted to the corresponding attention areas.
  • The CT volume coordinate conversion calculation (alignment) unit 26 geometrically transforms the coordinates of the sampling points extracted from the ultrasound volume into the coordinates of the corresponding points in the CT volume, and applies a predetermined evaluation function to the luminance data at these sampling points to calculate the image similarity between the ultrasound volume and the CT volume. The well-known mutual information measure can be used as the image similarity.
  • The CT volume coordinate conversion calculation (alignment) unit 26 obtains the geometric transformation information that maximizes the image similarity between the ultrasound volume and the CT volume, and updates the geometric transformation information accordingly. In the final step, S605, the CT volume coordinate conversion calculation (alignment) unit 26 outputs the alignment result.
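  • Mutual information can be computed from a joint histogram of the co-located luminance samples, as in the sketch below; the 32-bin histogram and the sampling interface are illustrative assumptions.

```python
import numpy as np

def mutual_information(us_samples, ct_samples, bins=32):
    """Mutual information between co-located luminance samples of the two volumes."""
    joint, _, _ = np.histogram2d(us_samples, ct_samples, bins=bins)
    pxy = joint / joint.sum()                  # joint probability p(u, c)
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)  # marginals p(u), p(c)
    nz = pxy > 0                               # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

# The optimizer varies the geometric transform to maximize this similarity,
# re-sampling ct_samples at the transformed sampling-point coordinates each step.
```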
  • As described above, according to the configuration of the second embodiment, the organ attention areas of the ultrasound image are identified and extracted in pixel units, the scanning area and site of the ultrasound probe are identified, and it is determined whether they are appropriate for alignment. The ultrasound volume and the CT volume can then be aligned, and the intraoperative ultrasound images acquired by real-time scanning and the corresponding high-resolution CT images can be displayed simultaneously to accurately guide the surgery.
  • the present invention is not limited to the embodiments described above, but includes various modifications.
  • The embodiments described above have been explained in detail for better understanding of the present invention, and the invention is not necessarily limited to configurations having all of the described components.
  • It goes without saying that the present invention is not limited to the ultrasonic imaging apparatus and its image processing apparatus and method, but can also be realized as an image processing apparatus connected to the ultrasonic imaging apparatus through a network, and as its image processing method.
  • part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment.

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Ultra Sonic Diagnosis Equipment (AREA)

Abstract

In order to determine whether a scan by an ultrasound probe is suited to alignment and accurately guide ultrasound probe scanning, this ultrasound imaging device is provided with: an ultrasound probe for transmitting an ultrasonic wave to a subject and receiving an ultrasonic wave from the subject; an ultrasound image acquisition unit 28 comprising an image generation unit for generating an ultrasound image from a signal received by the ultrasound probe and generating first volume data from the ultrasound image; and an image processing device for receiving and processing the ultrasound image, the first volume data, and second volume data relating to the subject. The image processing device is provided with a region-of-interest discernment & extraction unit 21 for discerning and extracting an organ region of interest in the ultrasound image on a pixel-by-pixel basis; and an alignment determination unit 32 for determining, using the extracted organ region of interest, whether or not a region and site scanned by the ultrasound probe are suited to an alignment with the second volume data.

Description

Ultrasonic imaging apparatus, image processing apparatus, and method
 The present invention relates to an ultrasonic imaging apparatus, and more particularly to an imaging technique for simultaneously displaying a characteristic site in a subject in a captured ultrasound image and in an image of the same cross section captured by another imaging apparatus.
 An ultrasonic imaging apparatus transmits ultrasonic waves into a subject and images the internal structure of the subject from the reflected signals, making it possible to observe a patient non-invasively and in real time. On the other hand, other medical imaging devices such as X-ray CT (Computed Tomography) and MRI (Magnetic Resonance Imaging) devices can image a wide range at high resolution, so the positional relationship of fine lesions and organs is easy to grasp. For example, tumors such as liver cancer can be found at an early stage in MRI and X-ray CT images.
 Diagnostic imaging systems are also becoming popular in which a position sensor is attached to the ultrasound probe to calculate the positional relationship of the scan plane, and a two-dimensional (2D) image corresponding to the ultrasound scan plane is constructed and displayed from three-dimensional diagnostic volume data captured by a medical image diagnostic apparatus.
 Patent Document 1 describes a method of aligning an ultrasonic three-dimensional image (volume) with an MRI three-dimensional image using blood vessel information, and constructing an MRI cross-sectional image corresponding to the ultrasound scan plane image. In this technique, an ultrasound image and an MRI image are acquired, a plurality of image areas are set around the branch points of blood vessel bifurcations, an index representing the image features is calculated for each image area, and the same blood vessel bifurcation is identified by repeatedly calculating the bifurcation similarity while changing the combination being compared; a geometric transformation matrix for alignment is then estimated. The alignment result is used to fit the MRI image to the ultrasound image and to generate and display the corresponding cross-sectional image.
JP 2017-012341 A
 In recent years, it has become desirable to confirm the area to be treated during surgery (by thermal treatment, surgical excision, or the like) for a tumor or similar lesion using intraoperative ultrasound images together with the corresponding high-resolution MRI or CT images; that is, alignment of the intraoperative ultrasound image with a high-resolution modality image is desired. However, since the imaging directions of intraoperative ultrasound and the other modality images differ greatly, it is difficult to estimate the initial values for alignment, and proper ultrasound probe scanning strongly affects the success of alignment. Moreover, the ultrasound probe scan used to acquire the ultrasound volume for alignment depends on the operator's technique and the patient's condition, and the scanned range and site vary widely from operator to operator and patient to patient. If the acquired intraoperative ultrasound volume is inappropriate for alignment, the alignment fails, and the ultrasound volume must be re-acquired and the alignment re-executed, which takes the operator time and effort and increases the burden on the patient. It is therefore desirable to have a function that determines in advance whether the acquired ultrasound volume, or the scanning of the ultrasound probe, is appropriate for alignment, and that accurately guides the ultrasound probe scanning.
 However, in the technique of Patent Document 1, the corresponding blood vessel bifurcation points are searched for exhaustively in the acquired ultrasound and MRI volumes, so the scanning area and site of the ultrasound probe cannot be identified. In addition, in sites without abundant blood vessels, or when the blood vessels cannot be imaged clearly, it is difficult to extract the vessel bifurcations and to perform the alignment.
 It is an object of the present invention to provide an ultrasonic imaging apparatus, an image processing apparatus, and a method capable of determining whether the scanning of an ultrasound probe is appropriate for alignment and of accurately guiding the ultrasound probe scanning.
 To achieve the above object, the present invention provides an ultrasonic imaging apparatus comprising: an ultrasound probe that transmits ultrasonic waves to a subject and receives ultrasonic waves from the subject; an image generation unit that generates an ultrasound image from the reception signal of the ultrasound probe; and an image processing apparatus that processes the ultrasound image and second volume data of the subject, the image processing apparatus including an attention area identification/extraction unit that identifies and extracts organ attention areas of the ultrasound image, and an alignment determination unit that uses the extracted organ attention areas to determine whether the scanning range and site of the ultrasound probe are appropriate for alignment between the second volume data and the ultrasound image or first volume data generated from the ultrasound image.
 To achieve the above object, the present invention also provides an image processing apparatus including: an attention area identification/extraction unit that identifies and extracts organ attention areas of an ultrasound image generated from the signals received after transmitting ultrasonic waves from an ultrasound probe to a subject; and an alignment determination unit that uses the extracted organ attention areas to determine whether the scanning range and site of the ultrasound probe are appropriate for alignment between second volume data of the subject and the ultrasound image or first volume data generated from the ultrasound image.
 Furthermore, to achieve the above object, the present invention provides an image processing method in an image processing apparatus, in which the image processing apparatus identifies and extracts organ attention areas of an ultrasound image generated from the signals received after transmitting ultrasonic waves from an ultrasound probe to a subject, and uses the extracted organ attention areas to determine whether the scanning range and site of the ultrasound probe are appropriate for alignment between second volume data of the subject and the ultrasound image or first volume data generated from the ultrasound image.
 According to the present invention, the scanning range and site of the ultrasound probe can be identified and extracted, and the position and location of the organ attention areas to be aligned can be specified, so it can be determined whether the scanning of the ultrasound probe is appropriate for alignment with volume data from another imaging apparatus.
FIG. 1 is a block diagram showing an example of the overall configuration of an ultrasonic imaging apparatus according to a first embodiment.
FIG. 2 is a block diagram showing an example of the hardware configuration of the ultrasonic imaging apparatus according to the first embodiment.
FIG. 3 is a functional block diagram of the image processing apparatus of the ultrasonic imaging apparatus according to the first embodiment.
FIG. 4 is a flowchart showing the processing flow of the image processing apparatus of the ultrasonic imaging apparatus according to the first embodiment.
FIG. 5 is a flowchart showing the flow of identification and extraction processing of an attention area by the image processing apparatus according to the first embodiment.
FIG. 6 is a diagram showing an example of site names and organ area information in an attention area according to the first embodiment.
FIG. 7 is a flowchart showing the flow of processing for training a learning model that identifies and extracts an attention area according to the first embodiment.
FIG. 8 is a diagram showing an example of area candidates of an attention area according to the first embodiment.
FIG. 9 is a diagram showing an example of organ area information of an attention area according to the first embodiment.
FIG. 10 is an explanatory diagram showing the flow of training the learning model that identifies and extracts an attention area according to the first embodiment.
FIG. 11 is a flowchart showing the flow of processing for determining whether ultrasound probe scanning is appropriate for alignment according to the first embodiment.
FIG. 12 is a table showing an example of the organ area, site name, and volume weighting of each attention area according to the first embodiment.
FIG. 13 is a block diagram showing an example of the overall configuration of an ultrasonic imaging apparatus according to a second embodiment.
FIG. 14 is a block diagram showing an example of the hardware configuration of the ultrasonic imaging apparatus according to the second embodiment.
FIG. 15 is a functional block diagram of the image processing apparatus of the ultrasonic imaging apparatus according to the second embodiment.
FIG. 16 is a flowchart showing the processing flow of the image processing apparatus of the ultrasonic imaging apparatus according to the second embodiment.
FIG. 17 is an explanatory diagram showing an example display of the alignment result and the attention area extraction result of the image processing apparatus according to the second embodiment.
FIG. 18 is a flowchart showing the flow of alignment processing of the image processing apparatus according to the second embodiment.
FIG. 19 is an explanatory diagram showing an example display of the current scanning area of the ultrasonic transducer and a recommended additional scanning area and direction according to the first embodiment.
 Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. In all the drawings illustrating the embodiments, the same parts are in principle denoted by the same reference numerals, and repeated description thereof is omitted.
 The first embodiment is an embodiment of an ultrasonic imaging apparatus comprising: an ultrasound probe that transmits ultrasonic waves to a subject and receives ultrasonic waves from the subject; an image generation unit that generates an ultrasound image from the reception signal of the ultrasound probe; and an image processing apparatus that processes the ultrasound image and second volume data of the subject, the image processing apparatus including an attention area identification/extraction unit that identifies and extracts organ attention areas of the ultrasound image, and an alignment determination unit that uses the extracted organ attention areas to determine whether the scanning range and site of the ultrasound probe are appropriate for alignment between the second volume data and the ultrasound image or first volume data generated from the ultrasound image; the embodiment also covers the corresponding image processing apparatus and image processing method.
 <Configuration and operation>
 Hereinafter, a specific configuration example of the ultrasonic imaging apparatus of the first embodiment will be described in detail. FIG. 1 shows an example of the overall configuration of the ultrasonic imaging apparatus according to the first embodiment. The ultrasonic imaging apparatus of this embodiment includes an ultrasound probe 7, an image generation unit 107, and an image processing device 108, and further includes a transmission unit 102, a transmission/reception switching unit 101, a reception unit 105, a user interface (UI) 121, and a control unit 106. The external display 16 may be part of the user interface (UI) 121.
 The transmission unit 102 generates a transmission signal under the control of the control unit 106 and delivers it to each of the plurality of ultrasonic elements constituting the ultrasound probe 7. The plurality of ultrasonic elements of the ultrasound probe 7 each transmit ultrasonic waves toward the subject 120. The ultrasonic waves reflected by the subject 120 return to the plurality of ultrasonic elements of the ultrasound probe 7, where they are received and converted into electric signals. The signals received by the ultrasonic elements are delayed by the reception unit 105 by predetermined delay amounts according to the position of the receive focus, and are then phased and summed; this is repeated for each of a plurality of receive focuses. The signal after the phasing addition is delivered to the image generation unit 107. The transmission/reception switching unit 101 selectively connects the transmission unit 102 or the reception unit 105 to the ultrasound probe 7.
 The image generation unit 107 performs processing such as arranging the phased and summed signals received from the reception unit 105 at positions corresponding to the reception foci, generates a 2D ultrasound image, and outputs it to the image processing apparatus 108. The image processing apparatus 108 generates first volume data of a three-dimensional ultrasound image from the received ultrasound images.
 In addition to the 2D ultrasound images and the first volume data, the image processing apparatus 108 receives, via the user interface (UI) 121, second volume data obtained for the subject 120 by another imaging apparatus. It identifies and extracts organ regions of interest in the ultrasound image on a per-pixel basis, identifies the scanning area and site of the ultrasound probe, and determines whether they are suitable for registration. In other words, the image processing apparatus 108 includes a region-of-interest identification/extraction unit that identifies and extracts organ regions of interest in an ultrasound image generated from the signals received after ultrasonic waves are transmitted from the probe into the subject, and a registration determination unit that uses the extracted organ regions of interest to determine whether the scanning range and site of the ultrasound probe are suitable for registration between the second volume data and the ultrasound image, or first volume data generated from the ultrasound image.
 In the following description, imaging apparatuses such as MRI apparatuses, X-ray CT apparatuses, and other ultrasound diagnostic apparatuses are referred to as medical modalities. In this embodiment, an X-ray CT apparatus is used as an example of a medical modality, and its volume data, serving as the second volume data above, is referred to as CT volume data.
 The configuration and operation of the image processing apparatus 108 and the user interface (UI) 121 are described in detail below with reference to FIGS. 2 to 6. FIG. 2 shows an example of the hardware configuration of the image processing apparatus 108 and the user interface 121. This hardware configuration is shared by the other embodiments described later.
 The image processing apparatus 108 comprises a CPU (processor) 1, a ROM (non-volatile, read-only storage medium) 2, a RAM (volatile, readable and writable storage medium) 3, a storage device 4, and a display control unit 15. The user interface (UI) 121 comprises an image input unit 9, a medium input unit 11, an input control unit 13, and an input device 14. These components and the image generation unit 107 are interconnected by the bus 5. A display 16 is connected to the display control unit 15 of the image processing apparatus 108; this display 16 can be regarded as the output part of the user interface.
 At least one of the ROM 2 and the RAM 3 stores in advance the programs and data required for the CPU 1 to realize the operation of the image processing apparatus 108 by its arithmetic processing. The various processes of the image processing apparatus 108, described in detail later, are realized by the CPU 1 executing programs stored in advance in at least one of the ROM 2 and the RAM 3. The programs executed by the CPU 1 may instead be stored on a storage medium 12 such as an optical disc and read into the RAM 3 by the medium input unit 11 such as an optical disc drive. Alternatively, the programs may be stored in the storage device 4 and loaded from there into the RAM 3, or stored in advance in the ROM 2.
 The image input unit 9 is an interface for capturing CT volume data imaged by the image capturing apparatus 10, a medical modality such as an X-ray CT apparatus. The storage device 4 is a magnetic storage device that stores the CT volume data and other data input via the image input unit 9. The storage device 4 may instead comprise a non-volatile semiconductor storage medium such as flash memory, and an external storage device connected via a network or the like may also be used.
 The input device 14 receives user operations and includes, for example, a keyboard, a trackball, an operation panel, and a foot switch. The input control unit 13 is an interface that receives the operation inputs entered by the user; these inputs are processed by the CPU 1. The display control unit 15 controls the display 16, for example to display image data produced by the processing of the CPU 1. The display 16 displays images under the control of the display control unit 15.
 FIG. 3 is a block diagram showing the functions of the image processing apparatus 108 of this embodiment. As shown in FIG. 3, the image processing apparatus 108 includes an ultrasound volume data generation unit 23 that generates the first volume data from the 2D ultrasound images acquired by the ultrasound image acquisition unit 28 (composed of the transmission unit 102, the reception unit 105, and the image generation unit 107); a region-of-interest identification/extraction unit 21 for the ultrasound image; a CT volume data reception unit 22 for the second volume data; a region-of-interest identification/extraction unit 24 for the CT volume; ultrasound/CT region-of-interest information 25; a registration determination unit 32; and an image display unit 31.
 Next, the operation of each functional block of the image processing apparatus 108 shown in FIG. 3 is described using the flowchart of FIG. 4. First, in step S201, the CT volume data reception unit 22 receives CT volume data from the image capturing apparatus 10 via the image input unit 9.
 In step S202, a prompt is shown on the display 16 instructing the user to apply the ultrasound probe 7 and move or scan it. When the user moves the probe 7 over the target organ area in accordance with the prompt, the ultrasound image acquisition unit 28 generates and acquires 2D ultrasound images. The ultrasound volume data generation unit 23 receives the continuously generated 2D ultrasound images from the image generation unit 107 of the ultrasound image acquisition unit 28.
 In step S203, the region-of-interest identification/extraction unit 21 for ultrasound images identifies and extracts predetermined organ regions of interest from the continuously generated 2D ultrasound images on a per-pixel basis, and identifies and estimates the scanning site and position of the ultrasound probe 7. As a result, it generates masks of the organ regions of interest annotated with that scanning site and position information.
 In step S204, the ultrasound volume data generation unit 23 generates ultrasound volume data as the first volume data from the continuously generated 2D ultrasound images.
 In step S205, the region-of-interest identification/extraction unit 24 for the CT volume identifies and extracts predetermined organ regions of interest from the CT volume data on a per-pixel basis and, as a result, generates masks of the organ regions of interest annotated with the site and position information of each region. The region-of-interest information of the CT volume and the region-of-interest information of the ultrasound image generated by the unit 21 are output to the registration determination unit 32 as the ultrasound/CT region-of-interest information 25.
 In step S206, it is determined whether the obtained ultrasound images are suitable for registration. First, the registration determination unit 32 calculates the volume of each predetermined organ region of interest from the region-of-interest information of the CT volume. It then calculates the volume of each organ region of interest within the probe scanning range from the region-of-interest information of the ultrasound image. The registration determination unit 32 computes a weighted sum of the volume ratios between corresponding regions of the ultrasound probe scan and the CT volume, compares it with predetermined thresholds, and judges whether the ultrasound scan is suitable for registration. If the scan is judged unsuitable, further thresholding determines whether an additional scan should be performed.
 If it is determined in step S207 that an additional scan should be performed, the scanning direction and scanning site for the ultrasound probe 7 are presented to the user on the image display unit 31, such as the display 16. A specific display example is described later.
 If it is determined in step S208 that no additional scan is to be performed, the determination result that the scanning of the ultrasound probe 7 is unsuitable for registration is displayed on the image display unit 31. If instead the scan is judged suitable for registration in step S208, the determination result and the region-of-interest information of the ultrasound image are displayed to the user on the image display unit 31.
 The region-of-interest identification/extraction processing and the registration determination processing are now described in more detail. The operation of the region-of-interest identification/extraction unit 21 for ultrasound images in FIG. 3 is explained using the flowchart of FIG. 5. The region-of-interest identification/extraction unit 24 for the CT volume performs similar processing, so its description is omitted.
 The region-of-interest identification/extraction unit 21 trains a first learning model that identifies and extracts region-of-interest candidates for the organ regions of interest. The parameters of the resulting first model are then used as the initial parameters of a second learning model that identifies and extracts regions annotated with the anatomical zone information of the organ regions of interest, i.e., organ zone information indicating the anatomical position within the organ, and the second model is trained. Further, the unit 21 uses the parameters of the resulting second model as the initial parameters of a third learning model that identifies and extracts regions annotated with both the site name information and the organ zone information of the organ regions of interest, trains the third model, and outputs the result as the final learning model.
 In step S301, the region-of-interest identification/extraction unit 21 receives a 2D ultrasound image, an ultrasound scan image, from the image generation unit 107. In step S302, image preprocessing such as noise removal and contrast enhancement is performed. In step S303, the trained model for identification/extraction is loaded. In step S304, organ regions of interest, including the site and position information, are identified and extracted based on the model. In step S305, mask images of the identified and extracted organ regions of interest are generated.
 The well-known FCN (Fully Convolutional Network) method, or its improved variant the U-Net method, is used here. The FCN method performs per-pixel estimation of an image (semantic segmentation) by replacing the fully connected layers of a deep-learning CNN (Convolutional Neural Network) with convolutional layers.
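 For illustration, the following is a minimal sketch in Python (PyTorch) of a small U-Net-style fully convolutional network for per-pixel classification. The channel widths, depth, input size, and class count are assumptions made for the sketch, not the configuration disclosed in this embodiment.

```python
# Minimal U-Net-style fully convolutional network (illustrative sketch).
# Assumptions: single-channel B-mode input, NUM_CLASSES output classes;
# all sizes are placeholders.
import torch
import torch.nn as nn

NUM_CLASSES = 5  # e.g. background + 4 organ regions of interest (assumed)

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, n_classes=NUM_CLASSES):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bott = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head = nn.Conv2d(16, n_classes, 1)  # per-pixel class scores

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bott(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)  # (N, n_classes, H, W) logits

net = TinyUNet()
frame = torch.randn(1, 1, 256, 256)   # a dummy 2D ultrasound frame
mask = net(frame).argmax(dim=1)       # (1, 256, 256) per-pixel label map
```

 The skip connections that concatenate encoder features into the decoder are what distinguish U-Net from a plain FCN decoder.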
 The identification/extraction targets (classes) include not only the organ regions of interest themselves but also organ zone information, i.e., the anatomical zone of the organ to which each region belongs. Taking the liver as an example, FIG. 6 shows a 2D ultrasound scan image, the site names of the organ regions of interest, and the predetermined regions of interest (identification/extraction targets) carrying organ zone information. FIG. 6(A) shows a 2D ultrasound image 122 of the liver. FIG. 6(B) shows a teacher-labeled mask image 123 used during training, i.e., the ground-truth data. In FIG. 6(B), four organ regions of interest are shown as identification/extraction targets, each given a site name and organ zone information: vein region - right lobe anterosuperior zone, vein region - right lobe anteroinferior zone, portal vein region - right lobe anterosuperior zone, and gallbladder region - right lobe anteroinferior zone. However, training a CNN that can handle such finely subdivided identification/extraction targets requires a large amount of teacher-labeled ultrasound image data, raising issues of training-data shortage, training efficiency, and identification/extraction accuracy. To solve these issues, the region-of-interest identification/extraction units 21 and 24 of the image processing apparatus 108 of this embodiment perform a three-stage training process.
 The training process of this embodiment is described using the flowchart of FIG. 7 and the image examples of FIGS. 8 to 10. The ultrasound images of the region-of-interest identification/extraction unit 21 are used as the example, but the same processing is applicable to CT images.
 In step S401, training images are input. In step S402, mask images of region-of-interest candidates prepared in advance are input. FIGS. 8(A) and 8(B) show an example of a training image and the corresponding region-of-interest candidate mask image. In step S403, the first learning model is trained using the training images and the region-of-interest candidate mask images. The purpose of this stage is to produce a model that can identify and extract region-of-interest candidates for the organ regions of interest. In step S404, the parameters of the resulting first model are used as the initial parameters of the second learning model. In step S405, ground-truth mask images with organ zone information, prepared in advance for the training regions of interest, are input.
 FIGS. 9(A) and 9(B) show an example of a training image 124 and the corresponding organ-zone ground-truth mask image 125. Here, the anteroinferior and anterosuperior zones of the right lobe of the liver are shown as examples of organ zone information, i.e., anatomical zone information of the organ. In step S406, the initialized second learning model is further trained using the training images and the organ-zone ground-truth mask images. The purpose of this stage is to identify and extract regions annotated with the organ zone information of the organ regions of interest.
 Further, in step S407, the parameters of the resulting second model are used as the initial parameters of the third learning model; that is, they initialize a third model that identifies and extracts regions annotated with both the site name information and the organ zone information of the organ regions of interest. In step S408, the training images and the corresponding ground-truth mask images, prepared in advance and annotated with site name information and organ zone information, are input. In step S409, the third model initialized in step S407 is further trained using the image data input in step S408, and the result is produced as the final learning model. FIG. 10 schematically illustrates this three-stage training process.
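 A minimal Python (PyTorch) sketch of this staged initialization follows. The stand-in network, synthetic data loaders, class counts, and the choice to replace only the output head between stages are assumptions of the sketch; the point illustrated is simply that each stage starts from the previous stage's parameters, as in steps S404 and S407.

```python
# Three-stage training sketch: each stage's model inherits the previous
# stage's parameters; only the output head is replaced to widen the classes.
import copy
import torch
import torch.nn as nn

def make_model(n_classes):
    # Tiny stand-in segmentation network (assumption, not the real one).
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, n_classes, 1),
    )

def dummy_loader(n_classes, n=4):
    # Synthetic stand-in for the teacher-labeled training data (assumption).
    return [(torch.randn(1, 1, 64, 64),
             torch.randint(0, n_classes, (1, 64, 64))) for _ in range(n)]

def train_stage(model, loader, epochs=1, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for image, mask in loader:
            opt.zero_grad()
            loss = loss_fn(model(image), mask)  # per-pixel cross-entropy
            loss.backward()
            opt.step()
    return model

def next_stage(prev, n_classes):
    model = copy.deepcopy(prev)               # steps S404/S407: inherit params
    model[-1] = nn.Conv2d(16, n_classes, 1)   # new output head for new classes
    return model

# Stage 1: region-of-interest candidates (foreground vs background).
m1 = train_stage(make_model(2), dummy_loader(2))
# Stage 2: initialized from stage 1; classes now carry organ zone information.
m2 = train_stage(next_stage(m1, 9), dummy_loader(9))
# Stage 3: initialized from stage 2; site name x zone classes (final model).
m3 = train_stage(next_stage(m2, 25), dummy_loader(25))
```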
 Although a three-stage training process has been described above as an example, the number of stages is not limited to three; the training may be performed in two stages, four stages, or any other number of stages as needed.
 Next, the registration determination processing of the registration determination unit 32 of the image processing apparatus 108 is described using the flowchart of FIG. 11. In step S501, the organ region-of-interest information of the ultrasound and CT data is received. In step S502, the scanned site is estimated from the organ zone information in the ultrasound region-of-interest information. In step S503, the volume of each organ region of interest is estimated from the ultrasound region-of-interest information. In step S504, the volume of each organ region of interest corresponding to the ultrasound regions of interest is estimated from the CT volume. In step S505, it is determined whether the scanning of the ultrasound probe 7 is suitable for registration, using the ratio of the two volumes or the like.
 The U-Net method used above for identifying and extracting organ regions of interest is applicable to 2D images, so organ regions of interest can be identified and extracted from consecutive 2D ultrasound images and their volumes estimated. When the ultrasound probe 7 can acquire ultrasound volume data directly, the 3D-Net and V-Net methods, three-dimensional extensions of the U-Net method, can be applied to the ultrasound and CT volumes.
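 For instance, once per-pixel (or per-voxel) label maps are available, the volume of each organ region of interest can be estimated by counting labeled voxels and multiplying by the voxel size. A minimal numpy sketch, assuming consecutive 2D masks have been stacked into a label volume with known spacing (all values below are placeholders):

```python
# Estimating per-class volumes from a stack of per-pixel label maps.
# Assumption: consecutive 2D frames are stacked into a label volume whose
# voxel spacing (mm) is known from the scan geometry.
import numpy as np

def region_volumes(label_volume, spacing_mm, n_classes):
    """Return the volume (in mm^3) of each class in a labeled volume."""
    voxel_mm3 = float(np.prod(spacing_mm))
    counts = np.bincount(label_volume.ravel(), minlength=n_classes)
    return counts * voxel_mm3

labels = np.random.randint(0, 25, size=(40, 256, 256))  # dummy label volume
vols = region_volumes(labels, spacing_mm=(0.5, 0.3, 0.3), n_classes=25)
```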
 The determination of suitability for registration is now described using the calculation table 126 of FIG. 12, which shows an example of the organ zone, site name, and volume weighting of each organ region of interest. Here the liver is used as the example organ, with four anatomical zones in each of the right and left lobes and with the portal vein, veins, and gallbladder as example site names, giving 24 identification/extraction targets (classes) in total (three site names across eight zones). For each target region, a predetermined weight (w), an example of which is shown in the calculation table 126, is set for each of the 24 classes in consideration of its anatomical volume, its importance as a registration target, and so on. Similar definitions of identification/extraction targets and weight settings are possible for other organs and regions of the human body.
 The registration determination unit 32 calculates weighted sums (V_US, V_CT) of the volumes of the organ regions of interest extracted from the ultrasound image and from the CT volume, respectively. As the predetermined decision thresholds for the thresholding described above, for example T1 = 0.5 and T2 = 0.2 can be set.
If V_US / V_CT > T1, the scanning of the ultrasound probe 7 is judged suitable for registration.
If T1 > V_US / V_CT > T2, the scanning of the probe 7 is judged not suitable for registration as it stands, but an additional scan is judged able to make it suitable, and the image display unit 31 presents the scanning direction and site to the user. For example, as on the display screen 128 of the image display unit 31 shown in FIG. 19, the current scanning range and the recommended additional scanning range are presented to the user in a message 129, and the scanning position of the probe and the recommended scanning direction are further indicated with arrows or the like on the image of the subject shown in the left area of the screen 128, guiding the user.
If V_US / V_CT < T2, the scanning of the probe 7 is judged unsuitable for registration, and the scan must be redone.
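 A minimal Python sketch of this decision rule, assuming the per-class volumes and the weights w of the calculation table are already available (the weights and dummy volumes below are placeholders; T1 and T2 follow the example values above):

```python
# Weighted-volume suitability decision (sketch of the rule above).
# v_us / v_ct: per-class volumes from the ultrasound scan and the CT volume;
# w: per-class weights, as in a table like FIG. 12 (values are assumptions).
import numpy as np

T1, T2 = 0.5, 0.2  # example thresholds from the text

def judge_scan(v_us, v_ct, w):
    V_US = float(np.dot(w, v_us))   # first weighted volume
    V_CT = float(np.dot(w, v_ct))   # second weighted volume
    r = V_US / V_CT
    if r > T1:
        return "suitable for registration"
    if r > T2:
        return "additional scan recommended"
    return "unsuitable: redo the scan"

w = np.ones(24) / 24                  # uniform weights (assumption)
v_ct = np.random.rand(24) * 100.0     # dummy per-class CT volumes
v_us = v_ct * 0.6                     # dummy ultrasound volumes (60% covered)
print(judge_scan(v_us, v_ct, w))      # -> "suitable for registration"
```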
 In this way, the registration determination unit 32 performs weighted summation over the identified and extracted organ regions of interest of the ultrasound image to calculate a first weighted volume, performs weighted summation over the organ regions of interest of the second volume data corresponding to those of the ultrasound image to calculate a second weighted volume, and compares the ratio of the first weighted volume to the second weighted volume with predetermined thresholds, thereby determining whether the scanning range and site of the ultrasound probe are suitable for registration with the second volume data. Further, when the registration determination unit 32 judges the scanning range and site of the probe unsuitable for registration with the second volume data, it causes the image display unit 31 to present a message or the like guiding the user to the recommended additional scanning range and site of the ultrasound probe.
 In addition to the various displays described above, the image display unit 31 can display the segmentation results and the computed volume values of each identified and extracted region of interest. The registration decision thresholds T1 and T2 described above can be adjusted via the user interface (UI) 121.
 As described above, the ultrasonic imaging apparatus, image processing apparatus, and method of this embodiment make it possible to identify and extract organ regions of interest in ultrasound images on a per-pixel basis, identify the scanning area and site of the ultrasound probe, and determine whether they are suitable for registration.
 Embodiment 2 is an embodiment of an ultrasonic imaging apparatus, image processing apparatus, and method that, in addition to the configuration of Embodiment 1, performs ultrasound-CT registration when the ultrasound probe scan is judged suitable for registration, and then simultaneously displays intraoperative ultrasound images acquired by real-time scanning and the corresponding high-resolution CT images, so that surgery can be guided accurately. In the description of Embodiment 2, configurations and processes identical to those of Embodiment 1 are given the same reference numerals and their description is omitted.
<Configuration and operation>
FIG. 13 shows a configuration example of the ultrasonic imaging apparatus in Embodiment 2. FIG. 14 is a block diagram showing an example hardware configuration of the image processing apparatus 108 and the user interface (UI) 121 in Embodiment 2. As is clear from FIGS. 13 and 14, a position sensor 8 and a position detection unit 6 are provided in addition to the configuration of Embodiment 1. The position detection unit 6 detects the position of the ultrasound probe 7 from the output of the position sensor 8. For example, a magnetic sensor unit can be used as the position detection unit 6: it forms a magnetic field space, and the position sensor 8 detects the magnetic field, so that coordinates relative to a reference point can be detected.
 The image generation unit 107 receives the current position information of the ultrasound probe 7 from the position detection unit 6 and attaches it to the generated ultrasound images. As the user moves the probe 7, the image generation unit 107 generates ultrasound images tagged with the probe position at the time of acquisition and outputs them to the image processing apparatus 108, which can thereby generate the first volume data of a three-dimensional ultrasound image.
 FIG. 15 is a block diagram showing an example of the functions of the image processing apparatus 108 of this embodiment. As shown in FIG. 15, in addition to the configuration of Embodiment 1, the image processing apparatus 108 includes an ultrasound probe position information acquisition unit 29, a CT volume coordinate transformation calculation (registration) unit 26 that performs the registration, a real-time 2D-CT image calculation unit 27, and a real-time 2D ultrasound image acquisition unit 30.
 Next, the operation of the image processing apparatus 108 of Embodiment 2 shown in FIG. 15 is described using the flowchart of FIG. 16. Only the parts that differ from the operation in Embodiment 1 are described. Steps S501, S502, S504, S506, S507, and S508 are the same as steps S201, S202, S203, S205, S206, and S207 shown in FIG. 4.
 In step S503, the position detection unit 6 detects the position of the ultrasound probe 7 from the output of the position sensor 8, and the ultrasound volume data generation unit 23 receives the real-time position information of the probe. In step S505, the ultrasound volume data generation unit 23 generates ultrasound volume data as the first volume data from the continuously generated 2D ultrasound images and the probe position information attached to them.
 In step S509, the CT volume coordinate transformation calculation (registration) unit 26 receives the ultrasound/CT region-of-interest information 25 and calculates a registration transformation matrix for aligning the CT volume with the ultrasound volume. The details of this calculation are described later. In step S510, the real-time 2D ultrasound image acquisition unit 30 receives the 2D ultrasound images acquired in real time from the ultrasound image acquisition unit 28. In step S511, the real-time 2D-CT image calculation unit 27 receives, as in step S505, the real-time position information of the probe corresponding to the 2D ultrasound image.
 Next, in step S512, the real-time 2D-CT image calculation unit 27 uses the position information of the ultrasound probe 7 and the coordinate transformation matrix of the CT volume to compute, in real time from the CT volume, the 2D-CT cross-sectional image corresponding to the 2D ultrasound image acquired in real time. In step S513, the image display unit 31 receives the 2D ultrasound image, the 2D-CT cross-sectional image, and the ultrasound/CT region-of-interest information 25. As in the example of FIG. 17, the image display unit 31 displays the 2D-CT and 2D ultrasound cross-sectional images (CT, US) in different screen areas of its display region 127, together with the site and zone information of the regions of interest and the estimated volume of the ultrasound scan.
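 Conceptually, the corresponding CT cross-section is obtained by mapping each pixel of the ultrasound scan plane through the probe pose and the registration transformation into CT voxel coordinates and interpolating the CT volume there. A simplified numpy/scipy sketch under assumed geometry (a single combined 4x4 plane-to-CT-voxel transform and the plane size and spacing are placeholders):

```python
# Resampling the CT cross-section that corresponds to the current ultrasound
# scan plane. The 4x4 transform (probe pose combined with the registration
# result) and the plane geometry are placeholder assumptions.
import numpy as np
from scipy.ndimage import map_coordinates

def ct_slice(ct_volume, plane_to_ct, h=256, w=256, spacing=0.5):
    # Pixel grid of the ultrasound scan plane, in plane coordinates.
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs * spacing, ys * spacing,
                    np.zeros_like(xs), np.ones_like(xs)], axis=0)
    pts = pts.reshape(4, -1).astype(float)
    ct_pts = plane_to_ct @ pts          # homogeneous transform to CT voxels
    coords = ct_pts[:3][::-1]           # (z, y, x) order for ndimage
    vals = map_coordinates(ct_volume, coords, order=1, mode="nearest")
    return vals.reshape(h, w)

ct = np.random.rand(64, 128, 128)       # dummy CT volume
pose = np.eye(4)
pose[:3, 3] = (10, 20, 5)               # dummy plane-to-CT transform
img = ct_slice(ct, pose)                # 256x256 CT cross-sectional image
```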
 The calculation of the registration transformation matrix in the CT volume coordinate transformation calculation (registration) unit 26 is now described using the flowchart of FIG. 18. In step S601, the unit 26 receives the ultrasound volume (first volume) and the CT volume (second volume). In step S602, it receives the ultrasound/CT region-of-interest information 25.
 In step S603, the CT volume coordinate transformation calculation (registration) unit 26 registers the point clouds of the ultrasound and CT regions of interest to each other. The well-known ICP (Iterative Closest Point) method can be used as the automatic registration technique. In the ICP method, the point cloud of the CT region of interest is geometrically transformed, i.e., translated and rotated, the distances between corresponding points of the ultrasound region-of-interest point cloud are computed, and the calculation is iterated so as to minimize those distances. The two point clouds can thereby be aligned.
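 A minimal rigid ICP sketch in Python (numpy/scipy) follows, iterating nearest-neighbor correspondence and a closed-form SVD (Kabsch) update. The synthetic point clouds are placeholders, and practical use would add outlier rejection and convergence checks omitted here.

```python
# Minimal rigid ICP sketch: alternate nearest-point matching with the
# best-fit rotation/translation solved in closed form via SVD (Kabsch).
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30):
    """Align point cloud src (N,3) to dst (M,3); returns accumulated R, t."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)            # nearest-neighbor correspondences
        d = dst[idx]
        mu_s, mu_d = cur.mean(0), d.mean(0)
        H = (cur - mu_s).T @ (d - mu_d)     # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_i = Vt.T @ S @ U.T                # optimal rotation (Kabsch)
        t_i = mu_d - R_i @ mu_s
        cur = cur @ R_i.T + t_i             # apply incremental transform
        R, t = R_i @ R, R_i @ t + t_i       # accumulate total transform
    return R, t

dst = np.random.rand(500, 3)                # e.g. CT region-of-interest points
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0], [0, 0, 1]])
src = dst @ Rz.T + np.array([0.05, -0.02, 0.01])  # perturbed point cloud
R, t = icp(src, dst)                        # recovers the inverse motion
```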
 Next, in step S604, the CT volume coordinate transformation calculation (registration) unit 26 performs image-based registration of the ultrasound volume and the CT volume. The parameters of the image-based registration are initialized using the result of the point-cloud registration of the regions of interest. The unit 26 acquires sample point data to be registered from each of the ultrasound volume and the CT volume. As sample point data, all pixels of the image region may be taken as sampling points, pixel values sampled randomly or on a grid may be used, or the samples may be drawn from the corresponding regions of interest.
 The CT volume coordinate transformation calculation (registration) unit 26 geometrically transforms the coordinates of the sampling points extracted from the ultrasound volume into the coordinates of the corresponding points in the CT volume, and applies a predetermined evaluation function to the intensity data at these sampling points to compute the image similarity between the ultrasound volume and the CT volume. The well-known mutual information measure can be used as the image similarity. The unit 26 seeks the geometric transformation that maximizes, globally or locally, the image similarity between the two volumes and updates the geometric transformation information accordingly. In the final step S605, the unit 26 outputs the registration result.
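 The mutual-information similarity can be computed from a joint intensity histogram of the sampled values; a minimal numpy sketch (the bin count and the correlated synthetic data are assumptions):

```python
# Mutual information between two sampled intensity arrays via a joint
# histogram. In practice this is evaluated at the transformed sampling
# points and maximized over the geometric transformation.
import numpy as np

def mutual_information(a, b, bins=32):
    """MI (in nats) between two equally shaped intensity arrays."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()                 # joint probability
    px = p.sum(axis=1, keepdims=True)       # marginal of a
    py = p.sum(axis=0, keepdims=True)       # marginal of b
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

us = np.random.rand(64, 64, 16)                    # dummy ultrasound samples
ct = 0.7 * us + 0.3 * np.random.rand(64, 64, 16)   # correlated dummy CT
print(mutual_information(us, ct))   # higher when the volumes align better
```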
 As described above, according to this embodiment, after identifying and extracting the organ regions of interest of the ultrasound image on a per-pixel basis, identifying the scanning area and site of the ultrasound probe, and determining suitability for registration, the ultrasound volume and the CT volume are registered, and the intraoperative ultrasound images acquired by real-time scanning and the corresponding high-resolution CT images are displayed simultaneously, so that surgery can be guided accurately.
 The present invention is not limited to the embodiments described above and includes various modifications. For example, the embodiments have been described in detail for better understanding of the invention and are not necessarily limited to configurations having all the described elements. As noted above, the invention is not limited to an ultrasonic imaging apparatus with its image processing apparatus and method; it can, of course, also be realized as an image processing apparatus connected to an ultrasonic imaging apparatus via a network, together with its image processing method. Part of the configuration of one embodiment can be replaced with that of another, the configuration of one embodiment can be added to that of another, and part of the configuration of each embodiment can have other configurations added, deleted, or substituted.
 Furthermore, although the above description gives examples in which the components, functions, and processing units are realized by creating programs that implement some or all of them, some or all of them may instead be realized in hardware, for example by designing them as integrated circuits.
1 CPU, 2 ROM, 3 RAM, 4 storage device, 5 bus, 6 position detection unit, 7 ultrasound probe, 8 position sensor, 9 image input unit, 10 image capturing apparatus, 11 medium input unit, 12 storage medium, 13 input control unit, 14 input device, 15 display control unit, 16 display, 21 region-of-interest identification/extraction unit for ultrasound images, 22 CT volume data reception unit, 23 ultrasound volume data generation unit, 24 region-of-interest identification/extraction unit for CT volumes, 25 ultrasound/CT region-of-interest information, 26 CT volume coordinate transformation calculation unit, 27 real-time 2D-CT image calculation unit, 28 ultrasound image acquisition unit, 29 ultrasound probe position information acquisition unit, 30 real-time 2D ultrasound image acquisition unit, 31 image display unit, 32 registration determination unit, 100 ultrasonic imaging apparatus, 101 transmission/reception switching unit, 102 transmission unit, 105 reception unit, 106 control unit, 107 image generation unit, 108 image processing apparatus, 120 subject, 121 user interface (UI), 122 ultrasound image, 123 mask image, 124 training image, 125 ground-truth mask image, 126 calculation table, 127 display area

Claims (15)

  1. An ultrasonic imaging apparatus comprising:
    an ultrasound probe that transmits ultrasonic waves to a subject and receives ultrasonic waves from the subject;
    an image generation unit that generates an ultrasound image from received signals of the ultrasound probe; and
    an image processing apparatus that processes the ultrasound image and second volume data of the subject,
    wherein the image processing apparatus includes:
    a region-of-interest identification/extraction unit that identifies and extracts organ regions of interest in the ultrasound image; and
    a registration determination unit that uses the extracted organ regions of interest to determine whether a scanning range and site of the ultrasound probe are suitable for registration between the second volume data and the ultrasound image or first volume data generated from the ultrasound image.
  2. The ultrasonic imaging apparatus according to claim 1,
    wherein the region-of-interest identification/extraction unit identifies and extracts, on a per-pixel basis, regions annotated with site names of the organ regions of interest and with organ zone information within the organ to which they belong.
  3. The ultrasonic imaging apparatus according to claim 2,
    wherein the region-of-interest identification/extraction unit trains a first learning model that identifies and extracts region-of-interest candidates for the organ regions of interest, and
    trains a second learning model that identifies and extracts regions annotated with the organ zone information of the organ regions of interest, using parameters of the generated first learning model as initial parameters of the second learning model.
  4. The ultrasonic imaging apparatus according to claim 3,
    wherein the region-of-interest identification/extraction unit trains a third learning model that identifies and extracts regions annotated with the site name information and the organ zone information of the organ regions of interest, using parameters of the generated second learning model as initial parameters of the third learning model.
  5. The ultrasonic imaging apparatus according to claim 1,
    wherein the registration determination unit performs weighted summation over the identified and extracted organ regions of interest of the ultrasound image to calculate a first weighted volume, performs weighted summation over the organ regions of interest of the second volume data corresponding to the organ regions of interest of the ultrasound image to calculate a second weighted volume, and
    determines whether the scanning range and site of the ultrasound probe are suitable for registration with the second volume data by comparing the ratio of the first weighted volume to the second weighted volume with a predetermined threshold.
  6. The ultrasonic imaging apparatus according to claim 5,
    wherein the registration determination unit outputs a recommended additional scanning range and site of the ultrasound probe when it determines that the scanning range and site of the ultrasound probe are unsuitable for registration with the second volume data.
  7. An image processing apparatus comprising:
    a region-of-interest identification/extraction unit that identifies and extracts organ regions of interest in an ultrasound image generated from signals received after ultrasonic waves are transmitted from an ultrasound probe into a subject; and
    a registration determination unit that uses the extracted organ regions of interest to determine whether a scanning range and site of the ultrasound probe are suitable for registration between second volume data of the subject and the ultrasound image or first volume data generated from the ultrasound image.
  8. The image processing apparatus according to claim 7,
    wherein the region-of-interest identification/extraction unit identifies and extracts, on a per-pixel basis, regions annotated with site names of the organ regions of interest and with organ zone information within the organ to which they belong.
  9. The image processing apparatus according to claim 8,
    wherein the region-of-interest identification/extraction unit trains a first learning model that identifies and extracts region-of-interest candidates for the organ regions of interest,
    trains a second learning model that identifies and extracts regions annotated with the organ zone information of the organ regions of interest, using parameters of the generated first learning model as initial parameters of the second learning model, and
    trains a third learning model that identifies and extracts regions annotated with the site name information and the organ zone information of the organ regions of interest, using parameters of the generated second learning model as initial parameters of the third learning model.
  10. The image processing apparatus according to claim 7,
    wherein the registration determination unit performs weighted summation over the identified and extracted organ regions of interest of the ultrasound image to calculate a first weighted volume, performs weighted summation over the organ regions of interest of the second volume data corresponding to the organ regions of interest of the ultrasound image to calculate a second weighted volume, and
    determines whether the scanning range and site of the ultrasound probe are suitable for registration with the second volume data by comparing the ratio of the first weighted volume to the second weighted volume with a predetermined threshold.
  11. The image processing apparatus according to claim 10,
    wherein the registration determination unit outputs a recommended additional scanning range and site of the ultrasound probe when it determines that the scanning range and site of the ultrasound probe are unsuitable for registration with the second volume data.
  12. An image processing method in an image processing apparatus, the method comprising, by the image processing apparatus:
    identifying and extracting organ regions of interest in an ultrasound image generated from signals received after ultrasonic waves are transmitted from an ultrasound probe into a subject; and
    using the extracted organ regions of interest to determine whether a scanning range and site of the ultrasound probe are suitable for registration between second volume data of the subject and the ultrasound image or first volume data generated from the ultrasound image.
  13. The image processing method according to claim 12,
    wherein the image processing apparatus identifies and extracts, on a per-pixel basis, regions annotated with site names of the organ regions of interest and with organ zone information within the organ to which they belong.
  14. The image processing method according to claim 13,
    wherein the image processing apparatus trains a first learning model that identifies and extracts region-of-interest candidates for the organ regions of interest,
    trains a second learning model that identifies and extracts regions annotated with the organ zone information of the organ regions of interest, using parameters of the generated first learning model as initial parameters of the second learning model, and
    trains a third learning model that identifies and extracts regions annotated with the site name information and the organ zone information of the organ regions of interest, using parameters of the generated second learning model as initial parameters of the third learning model.
  15. The image processing method according to claim 12,
    wherein the image processing apparatus performs weighted summation over the identified and extracted organ regions of interest of the ultrasound image to calculate a first weighted volume, performs weighted summation over the organ regions of interest of the second volume data corresponding to the organ regions of interest of the ultrasound image to calculate a second weighted volume, and
    determines whether the scanning range and site of the ultrasound probe are suitable for registration with the second volume data by comparing the ratio of the first weighted volume to the second weighted volume with a predetermined threshold.
PCT/JP2018/028698 2017-12-27 2018-07-31 Ultrasound imaging device, image processing device, and method WO2019130636A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017-251107 2017-12-27
JP2017251107A JP6887942B2 (en) 2017-12-27 2017-12-27 Ultrasound imaging equipment, image processing equipment, and methods

Publications (1)

Publication Number Publication Date
WO2019130636A1 true WO2019130636A1 (en) 2019-07-04

Family

ID=67063449

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/028698 WO2019130636A1 (en) 2017-12-27 2018-07-31 Ultrasound imaging device, image processing device, and method

Country Status (2)

Country Link
JP (1) JP6887942B2 (en)
WO (1) WO2019130636A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6991519B2 (en) * 2020-04-08 2022-01-12 Arithmer株式会社 Vehicle damage estimation device, its estimation program and its estimation method
JP2021178175A (en) * 2020-05-13 2021-11-18 キヤノンメディカルシステムズ株式会社 Ultrasonic diagnostic device, medical image processing device, medical image processing method, and program

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013098768A2 (en) * 2011-12-27 2013-07-04 Koninklijke Philips Electronics N.V. Intra-operative quality monitoring of tracking systems
WO2014156973A1 (en) * 2013-03-29 2014-10-02 日立アロカメディカル株式会社 Image alignment display method and ultrasonic diagnostic device
WO2017038300A1 (en) * 2015-09-02 2017-03-09 株式会社日立製作所 Ultrasonic imaging device, and image processing device and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HASHIMOTO, HIROSHI ET AL.: "Development of Volume Data Matching software", JOURNAL OF MEDICAL ULTRASONICS, vol. 39, 15 April 2012 (2012-04-15), pages 337 *

Also Published As

Publication number Publication date
JP2019115487A (en) 2019-07-18
JP6887942B2 (en) 2021-06-16

Similar Documents

Publication Publication Date Title
US10242450B2 (en) Coupled segmentation in 3D conventional ultrasound and contrast-enhanced ultrasound images
US10709425B2 (en) 3D ultrasound imaging system
KR101121396B1 System and method for providing 2-dimensional CT image corresponding to 2-dimensional ultrasound image
US11653897B2 (en) Ultrasonic diagnostic apparatus, scan support method, and medical image processing apparatus
JP6490820B2 (en) Ultrasonic imaging apparatus, image processing apparatus, and method
JP6383483B2 (en) Ultrasonic imaging apparatus and image processing apparatus
JP2005296436A (en) Ultrasonic diagnostic apparatus
US11712224B2 (en) Method and systems for context awareness enabled ultrasound scanning
CN108697410B (en) Ultrasonic imaging apparatus, image processing apparatus, and method thereof
WO2019130636A1 (en) Ultrasound imaging device, image processing device, and method
US20120123249A1 (en) Providing an optimal ultrasound image for interventional treatment in a medical system
JP2020039646A (en) Ultrasonic diagnostic device and volume data taking-in method
US10521069B2 (en) Ultrasonic apparatus and method for controlling the same
KR101627319B1 Medical image processor and method thereof for medical diagnosis
KR20150026354A (en) Method and Appartus for registering medical images
CN113662579A (en) Ultrasonic diagnostic apparatus, medical image processing apparatus and method, and storage medium
JP7299100B2 (en) ULTRASOUND DIAGNOSTIC DEVICE AND ULTRASOUND IMAGE PROCESSING METHOD
CN112672696A (en) System and method for tracking tools in ultrasound images
WO2021230230A1 (en) Ultrasonic diagnosis device, medical image processing device, and medical image processing method
CN115886876A (en) Fetal posture evaluation method, ultrasonic imaging method and ultrasonic imaging system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18894608

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18894608

Country of ref document: EP

Kind code of ref document: A1