WO2017013990A1 - Ultrasonic diagnostic device, image processing method and device - Google Patents

Ultrasonic diagnostic device, image processing method and device

Info

Publication number
WO2017013990A1
Authority
WO
WIPO (PCT)
Prior art keywords: region, dimensional, image, unit, mask
Application number
PCT/JP2016/068660
Other languages
English (en)
Japanese (ja)
Inventor
崇 豊村
昌宏 荻野
喜実 野口
村下 賢
Original Assignee
株式会社日立製作所
Application filed by 株式会社日立製作所 filed Critical 株式会社日立製作所
Priority to JP2017529514A priority Critical patent/JP6490814B2/ja
Publication of WO2017013990A1 publication Critical patent/WO2017013990A1/fr

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08 Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/13 Tomography
    • A61B 8/14 Echo-tomography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing

Definitions

  • The present invention relates to an ultrasonic diagnostic apparatus, and more particularly to a technique for speeding up image processing of a target object to be diagnosed.
  • An ultrasonic diagnostic apparatus is equipped with a three-dimensional display function that uses volume rendering as a technique for visualizing an object such as a fetus.
  • By setting a range called a region of interest, only an arbitrary region of the volume data can be rendered.
  • Maternal tissue and floating matter in the amniotic fluid exist around the fetus.
  • The setting items of recent ultrasonic diagnostic apparatuses have been subdivided so that the region of interest can be set flexibly.
  • However, as an adverse effect of this subdivision, the setting operation of the region of interest is complicated and time-consuming.
  • Patent Document 1 discloses a technique for automatically extracting the surface of an object and obtaining a clear three-dimensional ultrasound image of the object based on the extracted surface.
  • In Patent Document 1, however, the surface of the object is extracted for each of the many two-dimensional images constituting the volume data, so that, for example, 100 extraction processes are required, which takes time.
  • To solve the above problem, an object of the present invention is to provide an ultrasonic diagnostic apparatus, an image processing method, and an image processing apparatus that perform region extraction directly on the volume data and can therefore extract an arbitrary region of an object at high speed.
  • To this end, an ultrasonic diagnostic apparatus is configured that includes an image generation unit that generates a three-dimensional acquired image of tissue in a target body based on a signal acquired from a probe that transmits and receives ultrasonic waves, an image extraction unit that extracts an arbitrary region included in the generated three-dimensional acquired image, and an output unit that outputs an image of the region extracted by the image extraction unit.
  • Likewise, an image processing method in an ultrasonic diagnostic apparatus is configured that generates a three-dimensional acquired image of tissue in a target body based on a signal acquired from a probe that transmits and receives ultrasonic waves, extracts an arbitrary region included in the generated three-dimensional acquired image, and outputs an image of the extracted region.
  • Further, an image processing apparatus is configured that comprises an image generation unit that generates a three-dimensional acquired image of tissue in a target body based on a signal acquired from a probe that transmits and receives ultrasonic waves, an image extraction unit that extracts an arbitrary region included in the generated three-dimensional acquired image, and an output unit that outputs an image of the region extracted by the image extraction unit.
  • According to the present invention, an arbitrary region of an object included in the acquired image can be extracted and output at high speed.
  • FIG. 1 is a block diagram showing an example of the configuration of the ultrasonic diagnostic apparatus according to Embodiment 1.
  • FIG. 2 is a block diagram illustrating an example of the configuration of the image extraction unit according to the first embodiment.
  • FIG. 3 is a schematic diagram illustrating an example of a three-dimensional ultrasound image presented according to the first embodiment.
  • FIG. 4 is a diagram illustrating the relationship between the user's viewpoint and the volume data, that is, the three-dimensional acquired image, according to the first embodiment.
  • FIG. 5 is a diagram showing an example of the arrangement of regions on an arbitrary two-dimensional plane of the volume data according to the first embodiment.
  • FIG. 6 is a flowchart for explaining the procedure performed by the region recognition unit according to the first embodiment.
  • FIG. 7 is a diagram illustrating an example of a result of region recognition on a specific two-dimensional plane in the volume data according to the first embodiment.
  • FIG. 8 is a diagram illustrating an example of a result of sequentially expanding the extraction target region according to the first embodiment.
  • FIG. 9 is a diagram showing an example of the relationship between the start point and the fetal surface according to the first embodiment.
  • FIG. 10 is a diagram illustrating an example of volume data after image extraction according to the first embodiment.
  • FIG. 11 is a diagram illustrating an example of an image extraction result according to the first embodiment.
  • FIG. 12 is a block diagram illustrating an example of the configuration of the image extraction unit according to the second embodiment.
  • FIG. 13 is a flowchart for explaining the procedure performed by the removal region recognition unit according to the second embodiment.
  • FIG. 14 is a diagram illustrating an example of a result of region recognition performed on a specific two-dimensional plane in the volume data according to the second embodiment.
  • FIG. 15 is a diagram illustrating an example of volume data after image extraction according to the second embodiment.
  • FIG. 16 is a diagram illustrating an example of an image extraction result according to the second embodiment.
  • FIG. 17 is a block diagram illustrating an example of the configuration of the image extraction unit according to the third embodiment.
  • FIG. 18 is a diagram for explaining volume data after image extraction according to the third embodiment.
  • The present embodiment is an example of an ultrasonic diagnostic apparatus, and of an image processing method and an image processing apparatus, that include an image generation unit that generates a three-dimensional acquired image of tissue in a target body based on a signal acquired from a probe that transmits and receives ultrasonic waves, an image extraction unit that extracts an arbitrary region included in the generated three-dimensional acquired image, and an output unit that outputs an image of the region extracted by the image extraction unit.
  • FIG. 1 is a block diagram of an example of the configuration of the ultrasonic diagnostic apparatus according to the first embodiment.
  • The ultrasonic diagnostic apparatus in FIG. 1 includes a probe 1001 using ultrasonic transducers for acquiring echo data; a transmission/reception unit 1002 that controls transmission pulses and amplifies received echo signals; an analog/digital conversion unit 1003; a beamforming processing unit 1004 that bundles the received echoes from the many transducers and performs phasing addition; and an image processing unit 1005 that applies dynamic range compression, filtering, and scan conversion to the RF signal from the beamforming processing unit 1004 to generate a cross-sectional image.
  • Volume data, that is, a three-dimensional acquired image, is generated by the image processing unit 1005 and a three-dimensional coordinate conversion unit 1008; these are collectively referred to as an image generation unit in this specification.
  • The control unit 1007, the image generation unit composed of the image processing unit 1005 and the three-dimensional coordinate conversion unit 1008, the image extraction unit 1009, and the volume rendering processing unit 1010 can be realized by program execution on a central processing unit (CPU) 1012, that is, the processing unit of an ordinary computer.
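  • As a rough illustration of the phasing addition performed by the beamforming processing unit 1004, the following is a minimal delay-and-sum sketch. It assumes the echo data are available as a NumPy array of digitized channel traces and that per-channel focusing delays have already been computed; the function and parameter names are illustrative, not the apparatus's actual implementation.

```python
import numpy as np

def delay_and_sum(rf, delays_s, fs):
    """Minimal phasing addition (delay-and-sum): shift each channel's
    digitized echo trace by its focusing delay, then sum across channels
    to form one beamformed scan line.

    rf:       (n_channels, n_samples) array from the A/D converter
    delays_s: non-negative per-channel focusing delays in seconds
    fs:       sampling frequency in Hz
    """
    n_ch, n_samp = rf.shape
    shifts = np.round(np.asarray(delays_s) * fs).astype(int)
    line = np.zeros(n_samp)
    for ch in range(n_ch):
        s = shifts[ch]
        line[:n_samp - s] += rf[ch, s:]  # align sample s of this channel with t = 0
    return line
```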
  • FIG. 2 shows an example of the configuration of the image extraction unit 1009 of the ultrasonic diagnostic apparatus of this embodiment.
  • The image extraction unit 1009 includes a start point determination unit 2001 that determines the start point of the image extraction process for the three-dimensional acquired image; a region contraction unit 2002 that contracts the regions included in the three-dimensional acquired image; a region recognition unit 2003 that recognizes, in the image generated by the region contraction unit, the region including the start point; a mask generation unit 2004 that generates three-dimensional mask data in which the boundary of the region recognized by the region recognition unit is expanded; and a mask superimposing unit 2005 that extracts the specific region including the start point by superimposing the three-dimensional mask data generated by the mask generation unit on the volume data, that is, the original three-dimensional acquired image.
  • The function of each functional block of the image extraction unit 1009 will now be described.
  • First, the start point determination unit 2001 determines the three-dimensional coordinates at which extraction of the fetal region, the target object, starts.
  • FIG. 3 shows an example of a three-dimensional ultrasound image, that is, the two-dimensional projection data that the output unit 1011 presents to the user at the start of image extraction according to the present embodiment.
  • In the image, a fetus 3001, a mother body 3002, floating matter 3003, and amniotic fluid 3004 are depicted.
  • The user can instruct the apparatus to select a specific region depicted in the image.
  • Here, the user operates the marker 3005 to select the fetus, and the start point determination unit 2001 acquires the two-dimensional coordinates (x0, y0) indicated by the marker 3005.
  • Alternatively, the two-dimensional coordinates of the start point may be fixed coordinates set in advance, such as the center of the three-dimensional ultrasound image.
  • FIG. 4 shows the relationship between the user's viewpoint and the volume data.
  • The 3D ultrasound image as in FIG. 3 that the user views is the result of projecting the volume data 4000 onto a 2D plane using, for example, the ray casting method. If this two-dimensional plane is taken as the xy plane, the user's line of sight is in the z-axis direction.
  • Therefore, the three-dimensional coordinates (x0, y0, z0) selected by the user can be considered to lie in the voxel column extending in the z-axis direction at the two-dimensional coordinates (x0, y0).
  • (b) in the lower part of FIG. 4 shows the voxel column in the z-axis direction extracted at the two-dimensional coordinates (x0, y0).
  • Black voxels indicate non-luminance regions such as amniotic fluid, gray voxels indicate low-luminance regions such as floating matter, and white voxels indicate high-luminance regions such as the fetus.
  • The voxel column in the z-axis direction at (x0, y0) is searched from the user-viewpoint side, and the start of the fetal region is set as the start point (x0, y0, z0). Since the fetus generally reflects ultrasound strongly and is acquired as a high-luminance region, the start point coordinates 4001 (x0, y0, z0) are taken as the coordinates of the first voxel whose luminance exceeds a first specific threshold.
  • This specific threshold may be a fixed value or a value that varies with the volume data.
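  • As a minimal sketch of this search, the code below assumes the volume data are stored as a NumPy array indexed (x, y, z) with the user viewpoint at z = 0; the names and the array layout are assumptions for illustration.

```python
import numpy as np

def find_start_point(volume, x0, y0, threshold1):
    """Scan the voxel column at (x0, y0) from the viewpoint side and
    return the first voxel whose luminance exceeds the first threshold."""
    column = volume[x0, y0, :]                 # voxel column along the z axis
    hits = np.nonzero(column > threshold1)[0]
    if hits.size == 0:
        return None                            # no high-luminance voxel on this ray
    return (x0, y0, int(hits[0]))              # start point (x0, y0, z0)
```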
  • Next, the region contraction unit 2002 of the image extraction unit 1009 contracts the boundary of each region included in the volume data.
  • The fetus and the mother body appear as high-luminance regions, but their surfaces have slightly lower luminance, and it is difficult to distinguish whether such a surface voxel belongs to the fetal region or to a region other than the fetus.
  • In this contraction processing, in order to reliably extract the fetal region that is the target object, such hard-to-distinguish regions are treated as regions other than the fetus.
  • Specifically, the luminance value of every voxel that does not satisfy a second specific threshold is set to 0, so that the vicinity of the boundary of each high-luminance region is removed and the region contracts.
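  • Under the same array representation as above, this contraction step reduces to a simple thresholding, sketched below; `threshold2` stands in for the second specific threshold.

```python
def contract_regions(volume, threshold2):
    """Zero out every voxel below the second threshold; the dimmer border
    voxels of each high-luminance region fall below it, so regions shrink."""
    contracted = volume.copy()
    contracted[contracted < threshold2] = 0
    return contracted
```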
  • In addition, two high-luminance regions may be connected through a low-luminance region.
  • FIG. 5 shows an example of such an arrangement of regions.
  • (a) in the upper part of FIG. 5 shows an example in which the fetus 3001 and the mother body 3002 are connected via a gray low-luminance region in an arbitrary two-dimensional plane of the volume data. In such a case, the contraction processing creates a black non-luminance region between the two regions, and the two high-luminance regions are clearly separated.
  • (b) in the lower part of FIG. 5 shows an example in which the fetus 3001 and the mother body 3002 have been separated by applying the processing of the region contraction unit 2002 to the two-dimensional plane of (a) in the upper part of FIG. 5.
  • Next, the region recognition unit 2003 of the image extraction unit 1009 recognizes the region including the start point.
  • In this region recognition, a region growing method is applied: when a voxel adjacent to the start point has a luminance value within a predetermined range, it is determined to belong to the same region and is connected, and the region is expanded sequentially.
  • First, the luminance value of the start point is acquired from the volume data, and the luminance range judged to belong to the same region is determined based on that value.
  • Next, the region growing processing of the region recognition unit 2003 will be described using a flowchart.
  • FIG. 6 is a flowchart illustrating a procedure performed by the region recognition unit 2003.
  • Step 6001 is a process of extracting the voxels adjacent to the recognition region. At the start of this processing, only the start point belongs to the recognition region, and its 26-neighborhood, that is, the voxels within one voxel of it in the x-, y-, and z-axis directions, is extracted.
  • FIG. 7 shows an example of a region recognition result on a specific two-dimensional plane in the volume data. Voxels marked with numerals belong to the recognition region, and the voxel marked with a circled 0 indicates the start point.
  • Step 6002 is a process of selecting one of the adjacent voxels extracted in step 6001.
  • Step 6003 is a process of determining whether or not the luminance value of the adjacent voxel selected in step 6002 is within the predetermined luminance range. If it is within the range, the process proceeds to step 6004; if it is out of the range, the process proceeds to step 6005.
  • Step 6004 is a process of connecting the adjacent voxel found to be within the range, making it part of the new recognition region.
  • Step 6005 is a process of determining whether or not the processing of steps 6002 to 6004 has been completed for all adjacent voxels extracted in step 6001. If not, the process returns to step 6002 and steps 6002 to 6004 are performed on the remaining adjacent voxels. If completed, the process proceeds to step 6006.
  • (b) of FIG. 7 shows the state of the recognition region when the process first reaches step 6006 after the start of this processing.
  • Voxels marked with a circled 1 are those connected to the recognition region in the first round of steps 6002 to 6004.
  • Step 6006 determines whether or not the current recognition region has been updated compared with the recognition region at the previous step 6001. For example, (b) of FIG. 7 is judged to have been updated (Yes) because voxels with a circled 1 have been newly connected compared with (a). If updated, the process returns to step 6001 to extract the voxels adjacent to the current recognition region. If not updated (No), the processing of the region recognition unit 2003 ends. In the example of FIG. 7, voxels with circled 2, 3, and 4 are connected in the second, third, and fourth rounds, as shown in (c), (d), and (e) of the figure. The black voxels among the adjacent voxels are out of range and are not connected, giving the state shown in (f) at the end of the processing.
  • By this processing, a region whose luminance is close to that of the start point can be recognized dynamically, without depending on the luminance distribution of the entire volume data.
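  • The queue-based sketch below is one way to implement steps 6001 to 6006; visiting voxels through a queue produces the same result as the round-by-round expansion of the flowchart. The 26-neighborhood and the NumPy layout follow the assumptions above.

```python
import numpy as np
from collections import deque

def grow_region(volume, seed, low, high):
    """Region growing: starting from the seed voxel, repeatedly connect
    26-neighborhood voxels whose luminance lies in [low, high] until the
    recognition region stops changing."""
    offsets = [(dx, dy, dz)
               for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
               if (dx, dy, dz) != (0, 0, 0)]            # 26-neighborhood
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in offsets:
            n = (x + dx, y + dy, z + dz)
            if (all(0 <= n[i] < volume.shape[i] for i in range(3))
                    and not mask[n] and low <= volume[n] <= high):
                mask[n] = True                          # step 6004: connect
                queue.append(n)
    return mask
```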
  • Next, the mask generation unit 2004 of the image extraction unit 1009 generates three-dimensional mask data based on the region including the start point recognized by the region recognition unit 2003.
  • First, three-dimensional mask data of the same size as the original volume data is defined; voxels at coordinates corresponding to the recognition region recognized by the region recognition unit 2003 are set to "1", and voxels at coordinates not corresponding to it are set to "0".
  • A "1" voxel belongs to the extraction target region to be extracted as the output of the image extraction unit 1009, and a "0" voxel belongs to the non-target region that is not extracted.
  • Next, the mask generation unit 2004 uniformly expands the boundary of the extraction target region.
  • This restores the region contracted by the region contraction unit 2002, whose surface was intentionally scraped off so that only the fetus would be extracted reliably.
  • Specifically, the voxels of the three-dimensional mask data are scanned, and when a voxel is adjacent to the extraction target region, that voxel is added to the extraction target region by setting it to "1".
  • FIG. 8 shows an example of the result of sequentially expanding the extraction target region by the mask generation unit 2004.
  • (a) of FIG. 8 shows the initial state of the three-dimensional mask data.
  • Voxels marked "1" belong to the extraction target region, and voxels marked "0" belong to the non-target region.
  • (b) of FIG. 8 shows the result after the first round of scanning the voxels of the 3D mask data.
  • (c) of FIG. 8 shows the result after the second round of scanning. Voxels newly added to the extraction target region in each scan are shown as bold, underlined "1"s.
  • The mask generation unit 2004 repeats this expansion a predetermined number of times to generate three-dimensional mask data in which the extraction target region has been expanded by a predetermined width.
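  • This repeated scan-and-add is a morphological dilation of the mask; as a sketch, SciPy's binary_dilation can stand in for it. The exact neighborhood used by the apparatus is not specified, so the default connectivity here is an assumption.

```python
from scipy.ndimage import binary_dilation

def expand_mask(mask, rounds):
    """Expand the extraction target region by `rounds` voxels, one
    neighbor-adding pass per iteration."""
    return binary_dilation(mask, iterations=rounds)
```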
  • The width by which the extraction target region is expanded may be a predetermined fixed thickness, or it may be variable, determined from the amount contracted at each position of the fetal surface by the region contraction unit, or from the fetal body surface thickness.
  • In the fetus, a high-luminance region generally corresponds to bone, and skin with relatively low luminance lies outside the bone.
  • The conceptual diagram in FIG. 9 shows the voxel column in the z-axis direction at the two-dimensional coordinates (x0, y0) selected by the user, as in FIG. 4.
  • The start point 4001 corresponds to fetal bone, and the end of the low-luminance region nearer the user viewpoint can be regarded as the fetal surface 9001.
  • The image extraction unit 1009 can estimate the fetal body surface thickness 9002 from the positional relationship between these two points, and expanding the extraction target region by this width makes it possible to restore the fetal surface more accurately.
  • In this way, an optimal expansion width can be set.
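  • A sketch of this estimate under the same column layout as before: `noise_floor` is a hypothetical luminance level separating amniotic fluid from tissue, not a value given in the specification.

```python
import numpy as np

def estimate_surface_thickness(volume, x0, y0, z0, noise_floor):
    """Estimate the fetal body surface thickness 9002 as the distance
    between the first non-dark voxel on the ray (fetal surface 9001)
    and the high-luminance start point z0 (bone)."""
    column = volume[x0, y0, :z0]
    lit = np.nonzero(column > noise_floor)[0]  # voxels brighter than amniotic fluid
    z_surface = int(lit[0]) if lit.size else z0
    return z0 - z_surface                      # expansion width in voxels
```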
  • Finally, the mask superimposing unit 2005 of the image extraction unit 1009 extracts the specific region from the volume data by applying the 3D mask data generated by the mask generation unit 2004 to the volume data, that is, the original 3D acquired image. Specifically, the output volume data is obtained by multiplying the luminance value of each voxel of the volume data by the value of the voxel of the three-dimensional mask data at the corresponding coordinates. As shown in FIG. 10, the region corresponding to the extraction target region of the three-dimensional mask data 10002 is extracted from the original volume data 10001, and the volume data 10003 after image extraction is generated.
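  • The superimposition itself reduces to a voxel-wise product, as in this sketch; since the mask values are 0 or 1, everything outside the extraction target region is zeroed.

```python
def apply_mask(volume, mask):
    """Superimpose the 3D mask: voxel-wise multiplication keeps the
    extraction target region and suppresses everything else."""
    return volume * mask.astype(volume.dtype)
```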
  • As a result, the fetal region selected by the user with the marker 3005 in (a) of FIG. 11 is extracted at high speed, and a clear fetal image as shown in (b) of FIG. 11 can be output.
  • While Embodiment 1 shows an embodiment in which the region including the start point is selectively extracted, Embodiment 2 shows an embodiment of an ultrasonic diagnostic apparatus that selectively removes a region by inverting the mask.
  • That is, the image extraction unit includes a removal region recognition unit that recognizes the region including the start point in the 3D acquired image, and a removal mask generation unit that generates 3D mask data based on the region including the start point recognized by the removal region recognition unit.
  • FIG. 12 shows an example of the configuration of the image extraction unit 1009 in the second embodiment.
  • In the second embodiment, the image extraction unit 1009 includes a start point determination unit 2001 that determines the start point of the image extraction process; a removal region recognition unit 12001 that recognizes the region including an arbitrary start point in the three-dimensional acquired image; a removal mask generation unit 12002 that generates three-dimensional mask data for removal based on the region including the start point recognized by the removal region recognition unit 12001; and a mask superimposing unit 2005 that superimposes the generated three-dimensional mask data on the original volume data, that is, the three-dimensional acquired image, and removes the region including the start point.
  • The start point determination unit 2001 and the mask superimposing unit 2005 are equivalent to the corresponding components of the image extraction unit 1009 in the first embodiment.
  • Here, the removal region recognition unit 12001 and the removal mask generation unit 12002, which differ from the configuration of the image extraction unit 1009 in the first embodiment, are described.
  • The removal region recognition unit 12001 recognizes the region including the start point determined by the start point determination unit 2001. As with the region recognition unit 2003 of the first embodiment, when a voxel adjacent to the start point has a luminance value within a predetermined range, it is judged to belong to the same region and is connected, expanding the region sequentially. However, when removing suspended matter, it is desirable not to over-recognize the region and remove too much. Floating matter generally has relatively low luminance, and the luminance range must be set wide on the low-luminance side to remove it reliably.
  • If this is done without restriction, most voxels of the entire volume data, that is, the three-dimensional acquired image, may end up connected as the recognition region.
  • Therefore, an upper-limit distance from the start point to the recognition region boundary is determined in advance as the extent of the recognition region.
  • FIG. 13 is a flowchart for explaining the procedure performed by the removal region recognition unit 12001. Steps 6001 to 6006 are the same as in the processing of the first embodiment shown in FIG. 6. Here, step 13001, which differs from the first embodiment, is described.
  • Step 13001 is a process of determining whether or not the selected adjacent voxel has reached the upper-limit distance of the recognition region boundary.
  • The distance defined here may be the straight-line distance between the start point and the adjacent voxel, or the path length obtained by tracing voxels one by one from the start point to the adjacent voxel.
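  • As a sketch, step 13001 amounts to one extra check in the region growing loop shown earlier; the straight-line variant of the distance is used here. The removal mask generation unit 12002 described next would then invert the returned region (for example `removal_mask = ~mask` for a boolean array).

```python
import numpy as np
from collections import deque

def grow_removal_region(volume, seed, low, high, max_dist):
    """grow_region with the step 13001 check added: an adjacent voxel is
    connected only while its straight-line distance from the seed stays
    within max_dist, which keeps the removal region from over-expanding."""
    offsets = [(dx, dy, dz)
               for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
               if (dx, dy, dz) != (0, 0, 0)]
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        voxel = queue.popleft()
        for off in offsets:
            n = tuple(voxel[i] + off[i] for i in range(3))
            if not all(0 <= n[i] < volume.shape[i] for i in range(3)):
                continue
            if mask[n] or not (low <= volume[n] <= high):
                continue
            if np.linalg.norm(np.subtract(n, seed)) > max_dist:
                continue                       # step 13001: distance cap reached
            mask[n] = True
            queue.append(n)
    return mask
```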
  • FIG. 14 shows an example of the result of removal region recognition on a specific two-dimensional plane in the volume data.
  • (a) of FIG. 14 shows the initial state before the removal region recognition unit 12001 starts processing; the voxel marked with a circled 0 represents the start point.
  • (b) of FIG. 14 shows the state of the recognition region while the removal region recognition unit 12001 is part-way through its processing, and (c) of FIG. 14 shows the state of the recognition region when its processing has finished.
  • The removal mask generation unit 12002 generates three-dimensional mask data based on the region including the start point recognized by the removal region recognition unit 12001. First, three-dimensional mask data of the same size as the original volume data is defined; voxels at coordinates corresponding to the recognition region are set to "0", and voxels at coordinates not corresponding to it are set to "1", the inverse of the mask of the first embodiment.
  • (d) of FIG. 14 shows the three-dimensional mask data generated from the recognition region of (c).
  • Then, the 3D mask data generated by the removal mask generation unit 12002 is superimposed on the original volume data, that is, the 3D acquired image, and the specific region is extracted from the volume data.
  • As shown in FIG. 15, the region corresponding to the extraction target region of the three-dimensional mask data 15002 is extracted from the original volume data 15001, and the volume data 15003 after removal is generated.
  • As a result, obstacles such as floating matter selected by the user with the marker 3005 in (a) of FIG. 16 can be removed at high speed, and a clear fetal image as shown in (b) of FIG. 16 can be output.
  • Embodiment 3 describes a configuration in which the image extraction unit includes a region contraction unit that contracts each region included in the three-dimensional acquired image; a region recognition unit that recognizes, in the image generated by the region contraction unit, a region including each of a plurality of start points; a mask generation unit that generates a plurality of three-dimensional mask data in which the boundaries of the plurality of regions recognized by the region recognition unit are expanded; a mask combining unit that combines the plurality of three-dimensional mask data to generate three-dimensional combined mask data; and a mask superimposing unit that applies the three-dimensional combined mask data to the three-dimensional acquired image and extracts specific regions from it.
  • FIG. 17 is a diagram illustrating an example of the configuration of the image extraction unit 1009 according to the third embodiment.
  • In the third embodiment, the image extraction unit 1009 includes a start point determination unit 2001 that determines the start points of the image extraction process; a region contraction unit 2002 that contracts each region included in the image; a region recognition unit 2003 that recognizes the regions including the start points; a mask generation unit 2004; a mask combining unit 17001; and a mask superimposing unit 2005 that extracts the regions including the start points by superimposing the three-dimensional combined mask data generated by the mask combining unit 17001 on the original volume data, that is, the three-dimensional acquired image.
  • The start point determination unit 2001, region contraction unit 2002, region recognition unit 2003, mask generation unit 2004, and mask superimposing unit 2005 are equivalent to the corresponding components of the image extraction unit 1009 in the first embodiment.
  • Here, the mask combining unit 17001, which differs from the configuration of the image extraction unit 1009 in the first embodiment, is described.
  • The mask combining unit 17001 has the function of combining the three-dimensional mask data generated for two different start points.
  • The two start points may be specified at the same time, or the first start point may be specified and region extraction performed, with the second start point specified after that result is output.
  • The mask combining unit 17001 receives the three-dimensional mask data 1 generated for the first start point and the three-dimensional mask data 2 generated for the second start point.
  • The mask combining unit 17001 obtains the three-dimensional combined mask data by taking, at each coordinate, the logical OR of the voxel value of the three-dimensional mask data 1 and the voxel value of the three-dimensional mask data 2.
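  • The combination is a voxel-wise logical OR, as sketched below; combined with the earlier apply_mask sketch, both selected regions survive the superimposition.

```python
import numpy as np

def combine_masks(mask1, mask2):
    """Voxel-wise logical OR: a voxel belongs to the combined extraction
    target region if it belongs to either input mask."""
    return np.logical_or(mask1, mask2)
```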
  • FIG. 18 illustrates the volume data 18005 after image extraction, showing the state in which the mask superimposing unit 2005 of this configuration applies the three-dimensional combined mask data 18004 generated by the mask combining unit 17001 to the original volume data 18001, that is, the three-dimensional acquired image, and extracts the specific regions from it.
  • The present invention is not limited to the above-described embodiments, and includes various modifications.
  • The above embodiments have been described in detail for better understanding of the present invention, and the invention is not necessarily limited to configurations having all of the described elements.
  • For example, an ultrasonic diagnostic apparatus provided with a probe and the like has been described as an example, but the present invention can also be applied to an image processing method and an image processing apparatus that execute the processing described above.
  • A part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment.
  • The start point determination unit has been described using the example in which coordinates specified by the user are employed, but a configuration in which the determined coordinates are presented to the user for confirmation is also possible.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The present invention performs region extraction directly on volume data and extracts a region of a subject at high speed. An ultrasonic diagnostic device is provided with an image processing unit (1005) for generating a cross-sectional image of tissue in a subject on the basis of a signal acquired from a probe (1001), a three-dimensional coordinate conversion unit (1008) for performing coordinate conversion of the cross-sectional image and generating volume data constituting a three-dimensional acquired image, an image extraction unit (1009) for extracting a specific region included in the volume data, and an output unit (1011) for outputting an image of the region extracted by the image extraction unit. The image extraction unit (1009) recognizes a region including an arbitrary start point in an image generated by contracting each region included in the volume data, generates three-dimensional mask data in which a boundary is expanded on the basis of the recognized region, and applies the three-dimensional mask data to the three-dimensional acquired image to extract a specific region.
PCT/JP2016/068660 2015-07-23 2016-06-23 Ultrasonic diagnostic device, image processing method and device WO2017013990A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2017529514A JP6490814B2 (ja) 2015-07-23 2016-06-23 Ultrasonic diagnostic apparatus, image processing method, and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015145729 2015-07-23
JP2015-145729 2015-07-23

Publications (1)

Publication Number Publication Date
WO2017013990A1 true WO2017013990A1 (fr) 2017-01-26

Family

ID=57833892

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/068660 WO2017013990A1 (fr) 2015-07-23 2016-06-23 Ultrasonic diagnostic device, image processing method and device

Country Status (2)

Country Link
JP (1) JP6490814B2 (fr)
WO (1) WO2017013990A1 (fr)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006218210A (ja) * 2005-02-14 2006-08-24 Toshiba Corp 超音波診断装置、超音波画像生成プログラム及び超音波画像生成方法
JP2006223712A (ja) * 2005-02-21 2006-08-31 Hitachi Medical Corp 超音波診断装置
WO2012042808A1 (fr) * 2010-09-30 2012-04-05 パナソニック株式会社 Équipement de diagnostic par ultrasons
JP2013017669A (ja) * 2011-07-12 2013-01-31 Hitachi Aloka Medical Ltd 超音波診断装置

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108294780A (zh) * 2018-01-31 2018-07-20 深圳开立生物医疗科技股份有限公司 Ultrasonic three-dimensional imaging method, system, and apparatus
CN110235172A (zh) * 2018-06-07 2019-09-13 深圳迈瑞生物医疗电子股份有限公司 Image analysis method based on an ultrasound imaging device, and ultrasound imaging device
CN111429588A (zh) * 2020-03-11 2020-07-17 上海嘉奥信息科技发展有限公司 Backplane removal method and system based on three-dimensional volume data and two-dimensional surface data
CN111429588B (zh) 2020-03-11 2024-02-20 上海嘉奥信息科技发展有限公司 Backplane removal method and system based on three-dimensional volume data and two-dimensional surface data

Also Published As

Publication number Publication date
JPWO2017013990A1 (ja) 2018-04-19
JP6490814B2 (ja) 2019-03-27


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16827554

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2017529514

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16827554

Country of ref document: EP

Kind code of ref document: A1