WO2014050020A1 - Photoacoustic image generation device, and photoacoustic image generation method - Google Patents

Photoacoustic image generation device, and photoacoustic image generation method

Info

Publication number
WO2014050020A1
WO2014050020A1 (PCT/JP2013/005497)
Authority
WO
WIPO (PCT)
Prior art keywords
photoacoustic
detection
acoustic
coordinates
region
Prior art date
Application number
PCT/JP2013/005497
Other languages
French (fr)
Japanese (ja)
Inventor
覚 入澤
剛也 阿部
Original Assignee
FUJIFILM Corporation (富士フイルム株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FUJIFILM Corporation (富士フイルム株式会社)
Publication of WO2014050020A1 publication Critical patent/WO2014050020A1/en

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0093: Detecting, measuring or recording by applying one single type of energy and measuring its conversion into another type of energy
    • A61B 5/0095: Detecting, measuring or recording by applying one single type of energy and measuring its conversion into another type of energy by applying light and detecting acoustic waves, i.e. photoacoustic measurements
    • A61B 2576/00: Medical imaging apparatus involving image processing or analysis
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00: ICT specially adapted for the handling or processing of medical images
    • G16H 30/40: ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Definitions

  • the present invention relates to a photoacoustic image generation apparatus and a photoacoustic image generation method for generating a photoacoustic image based on a photoacoustic wave generated due to light absorption.
  • Photoacoustic spectroscopy irradiates a subject with light having a predetermined wavelength (for example, in the visible, near-infrared, or mid-infrared wavelength band), detects a photoacoustic wave, which is an elastic wave generated when a specific substance in the subject absorbs the energy of this light, and measures the concentration or distribution of that specific substance (for example, Patent Document 1).
  • the specific substance in the subject is, for example, glucose or hemoglobin contained in blood when the subject is a human body.
  • A technique for detecting a photoacoustic wave and generating a photoacoustic image based on the detection signal is called photoacoustic imaging (PAI: Photoacoustic Imaging) or photoacoustic tomography (PAT: Photoacoustic Tomography).
  • Patent Document 1 discloses a method in which, when generating a photoacoustic image of an imaging target range divided into a plurality of partial regions, the photoacoustic waves (or photoacoustic signals) used to generate one frame of the photoacoustic image are detected in multiple passes using the acoustic detection elements corresponding to the respective partial regions, all of these photoacoustic signals are temporarily stored in a memory, and a number of data larger than the number that can be sampled in parallel is read from the memory and subjected to phase-matched addition. According to the method of Patent Document 1, a photoacoustic image with higher resolution can be generated even when the number of data that can be sampled in parallel is limited.
  • The present invention has been made in view of the above problem, and an object of the present invention is to provide a photoacoustic image generation apparatus and a photoacoustic image generation method that can express the structure inside the subject more accurately even when volume data is generated based on photoacoustic signals detected in multiple passes.
  • A photoacoustic image generation apparatus according to the present invention is a photoacoustic image generation apparatus that detects a photoacoustic wave generated in a subject and generates a photoacoustic image based on a photoacoustic signal of the photoacoustic wave, and includes: an acoustic detection unit having a plurality of acoustic detection elements, which divides an imaging region corresponding to the plurality of acoustic detection elements into a plurality of detection regions and detects the photoacoustic wave for each detection region while sequentially selecting acoustic detection element groups that detect photoacoustic waves in parallel; a coordinate acquisition unit that acquires the coordinates of the acoustic detection unit in space; a coordinate setting unit that sets, for each detection region, representative coordinates representing the coordinates of that detection region, based on the coordinates of the acoustic detection unit acquired by the coordinate acquisition unit when the photoacoustic wave is detected; and an acoustic signal processing unit that generates volume data of the photoacoustic image by associating the partial image data for displaying each detection region, out of the photoacoustic image data generated based on the photoacoustic signal, with the representative coordinates set for each detection region.
  • In the photoacoustic image generation apparatus according to the present invention, a configuration may be adopted in which the acoustic signal processing unit generates the partial image data for displaying a certain detection region based on the photoacoustic signal obtained in that detection region and the photoacoustic signals obtained in the other detection regions.
  • Alternatively, a configuration may be adopted in which the acoustic signal processing unit generates the partial image data for displaying a certain detection region based only on the photoacoustic signal obtained in that detection region.
  • Further, a configuration may be adopted in which the coordinate setting unit sets, as the representative coordinates of a certain detection region, calculated coordinates computed from a plurality of coordinates acquired while the photoacoustic wave is detected in that detection region. In this case, a configuration may be adopted in which the coordinate setting unit also uses the coordinates acquired immediately before and after the period in which the photoacoustic wave is detected in that detection region to obtain the calculated coordinates.
  • Alternatively, a configuration may be adopted in which the coordinate setting unit sets, as the representative coordinates of a certain detection region, one of the coordinates acquired while the photoacoustic wave is detected in that detection region, as it is.
  • Further, a configuration in which the plurality of acoustic detection elements constituting each acoustic detection element group are contiguous can be employed.
  • In this case, a configuration may be adopted in which the coordinate acquisition unit has a plurality of reading points for reading the coordinates, and the coordinate setting unit sets the representative coordinates based on the coordinates read at the plurality of reading points.
  • a configuration in which a plurality of reading points are provided corresponding to each acoustic detection element group can be employed.
  • Alternatively, when the acoustic detection elements are divided into N acoustic detection element groups, the nth acoustic detection element group may be composed of the nth, (N+n)th, (2N+n)th, ..., ((Q-2)N+n)th, and ((Q-1)N+n)th acoustic detection elements. Here, Q represents the quotient obtained by dividing the total number of acoustic detection elements included in the acoustic detection unit by N.
  • Further, it is preferable that the acoustic detection unit also detects reflected ultrasonic waves corresponding to ultrasonic waves transmitted to the subject, and that the acoustic signal processing unit generates an ultrasonic image based on the ultrasonic signals of the reflected ultrasonic waves.
  • A photoacoustic image generation method according to the present invention is a photoacoustic image generation method for detecting a photoacoustic wave generated in a subject and generating a photoacoustic image based on a photoacoustic signal of the photoacoustic wave, in which: using an acoustic detection unit having a plurality of acoustic detection elements, an imaging region corresponding to the plurality of acoustic detection elements is divided into a plurality of detection regions and the photoacoustic wave is detected for each detection region while acoustic detection element groups that detect photoacoustic waves in parallel are sequentially selected; representative coordinates representing the coordinates of each detection region are set for each detection region based on the coordinates of the acoustic detection unit in space when the photoacoustic wave is detected; and volume data of the photoacoustic image is generated by associating the partial image data for displaying each detection region, out of the photoacoustic image data generated based on the photoacoustic signal, with the representative coordinates set for each detection region.
  • In the photoacoustic image generation method according to the present invention, a configuration may be adopted in which the partial image data for displaying a certain detection region are generated based on the photoacoustic signal obtained in that detection region and the photoacoustic signals obtained in the other detection regions. In this case, a configuration may be adopted in which the partial image data are generated based on the photoacoustic signals obtained in a set of detection regions corresponding to the imaging region.
  • Alternatively, in the photoacoustic image generation method, it is possible to adopt a configuration in which the partial image data for displaying a certain detection region are generated based only on the photoacoustic signal obtained in that detection region.
  • calculated coordinates calculated using a plurality of coordinates acquired while a photoacoustic wave is detected in a certain detection area are set as representative coordinates in the detection area.
  • Alternatively, a configuration may be adopted in which one of the coordinates acquired while the photoacoustic wave is detected in a certain detection region is set, as it is, as the representative coordinates of that detection region.
  • Further, a configuration in which the plurality of acoustic detection elements constituting each acoustic detection element group are contiguous can be employed.
  • In the photoacoustic image generation apparatus and the photoacoustic image generation method according to the present invention, the photoacoustic wave is detected for each detection region, the representative coordinates are set, and the volume data is generated by associating the partial image data for displaying each detection region with the representative coordinates set for that detection region. Therefore, the accuracy of the position of the photoacoustic image data within the volume data can be increased compared with simply arranging the frames of photoacoustic image data generated from the photoacoustic signals obtained for each detection region (that is, at different times). As a result, even when the volume data is generated based on photoacoustic signals detected in multiple passes, the structure inside the subject can be expressed more accurately.
  • FIG. 1 is a schematic block diagram showing the configuration of the photoacoustic image generation apparatus of the present embodiment.
  • FIG. 2 is a schematic diagram illustrating a configuration of an acoustic detection unit in the probe.
  • The photoacoustic image generation apparatus 10 of the present embodiment includes a probe 11, an ultrasonic unit 12, a laser unit 13, a display unit 14, a coordinate acquisition unit (15, 41, and 42), and an input unit 16.
  • The photoacoustic image generation method of this embodiment uses the acoustic detection unit 20 having 128 acoustic detection elements 20c: while sequentially selecting acoustic detection element groups that detect photoacoustic waves in parallel, the imaging region corresponding to the plurality of acoustic detection elements 20c is divided into a plurality of detection regions and the photoacoustic wave is detected for each detection region; representative coordinates representing the coordinates of each detection region are set for each detection region based on the coordinates of the acoustic detection unit 20 in space when the photoacoustic wave is detected; and volume data of the photoacoustic image is generated by associating the partial image data for displaying each detection region, out of the photoacoustic image data generated based on the photoacoustic signal, with the representative coordinates set for each detection region.
  • The probe 11 includes, for example, an optical fiber 40 that guides the laser light L output from the laser unit 13 to the subject M, and an acoustic detection unit 20 that detects the acoustic wave U from the subject M and generates an electrical signal (acoustic signal) corresponding to the intensity of the detected acoustic wave U.
  • In this specification, "acoustic wave" means an ultrasonic wave or a photoacoustic wave, "ultrasonic wave" means an elastic wave generated in the subject by the vibration of an acoustic wave generator such as a piezoelectric element and its reflected wave, and "photoacoustic wave" means an elastic wave generated in the subject by light irradiation.
  • the probe 11 is, for example, a handheld probe, and is configured so that a user can manually scan.
  • the scanning is not limited to manual scanning, and may be performed by a mechanical mechanism.
  • the probe 11 is appropriately selected from a sector scan type, a linear scan type, a convex scan type, and the like according to the subject M to be diagnosed.
  • a magnetic sensor 42 that constitutes a part of the coordinate acquisition unit is built in the probe 11.
  • the acoustic detection unit 20 includes, for example, a backing material, a detection element array 20a, a control circuit for the detection element array 20a, a multiplexer 20b, an acoustic matching layer, and an acoustic lens.
  • the detection element array 20a is a one-dimensional array of 128 acoustic detection elements 20c, and converts actually detected acoustic waves into electrical signals.
  • the number and arrangement of the acoustic detection elements 20c are not limited to this.
  • the number of acoustic detection elements 20c may be 192, or the acoustic detection elements 20c may be two-dimensionally arranged.
  • the acoustic detection element 20c is a piezoelectric element composed of a polymer film such as piezoelectric ceramics or polyvinylidene fluoride (PVDF).
  • the multiplexer 20b selectively connects the acoustic detection element 20c and the ultrasonic unit 12 for each of the acoustic detection elements 20c that detect acoustic waves in parallel.
  • Through the selective connection by the multiplexer 20b, the acoustic detection unit 20 divides the imaging region into a plurality of detection regions corresponding to the acoustic detection element groups that detect photoacoustic waves in parallel, and detects the photoacoustic waves used for generating the photoacoustic image data for each detection region.
  • the acoustic detection element group is a set of acoustic detection elements that detect photoacoustic waves in parallel among the plurality of acoustic detection elements 20c.
  • the imaging region is a region of a subject to be displayed in a one-frame photoacoustic image defined by the detection element array 20a.
  • each detection region is a region corresponding to a part (less than 128) of acoustic detection elements 20c in a state in which photoacoustic signals can be transmitted to the ultrasound unit 12 in parallel in the imaging region.
  • In the present embodiment, as shown in FIG. 2A, the detection element array 20a is divided into two element regions (element region A and element region B) serving as acoustic detection element groups, each composed of, for example, 64 acoustic detection elements 20c. An element region may partly overlap another element region. The numbers of acoustic detection elements constituting the element regions are preferably equal, but need not be strictly equal.
  • the detection of the photoacoustic wave is performed in order for each element region by sequentially connecting each element region to the ultrasonic unit 12 by selective connection by the multiplexer 20b.
  • the number of channels (ch) of the multiplexer 20b is 64, and data for 64 channels can be sampled in parallel.
  • a photoacoustic wave is detected in the element region A, and after the multiplexer 20b switches the connection to the element region B, the photoacoustic wave is detected in the element region B.
  • a photoacoustic wave signal (photoacoustic signal) detected in each element region is sequentially transmitted to the receiving circuit 21 of the ultrasonic unit 12.
  • By detecting the photoacoustic waves in this way, the number of reception channels (ch) can be reduced, which makes cost reduction possible. When the probe 11 is scanned, the detection of the photoacoustic wave for each element region is therefore performed at slightly different probe positions.
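  • As a rough illustration of this acquisition sequence (not part of the patent; the 128-element array, the 64-channel limit, and the two element regions follow the numbers given in this embodiment, while the function names and hardware interfaces are hypothetical), the photoacoustic data for one imaging region could be gathered in two passes as follows:

```python
import numpy as np

NUM_ELEMENTS = 128      # acoustic detection elements in the detection element array 20a
NUM_PARALLEL_CH = 64    # channels the multiplexer 20b can connect in parallel

# Element regions A and B: two blocks of 64 elements each (FIG. 2A style division).
ELEMENT_REGIONS = {
    "A": np.arange(0, NUM_PARALLEL_CH),
    "B": np.arange(NUM_PARALLEL_CH, NUM_ELEMENTS),
}

def acquire_one_imaging_region(fire_laser, sample_channels, get_probe_coords):
    """Detect the photoacoustic waves for one imaging region in two passes.

    fire_laser()             -- emit one laser pulse (light trigger / Qsw trigger)
    sample_channels(indices) -- return RF data of shape (64, n_samples) for the
                                currently connected element group
    get_probe_coords()       -- return the probe coordinates from the coordinate unit
    All three callables stand in for the real hardware interfaces.
    """
    detections = {}
    for region_name, element_indices in ELEMENT_REGIONS.items():
        fire_laser()                              # laser output for this detection region
        coords = get_probe_coords()               # coordinates at the time of detection
        rf = sample_channels(element_indices)     # 64 channels sampled in parallel
        detections[region_name] = {"rf": rf, "coords": coords}
    return detections
```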
  • The element regions may be formed by dividing the whole set of acoustic detection elements into two as shown in FIG. 2A, or into three, four, or more as shown in FIG. 2B.
  • photoacoustic waves are detected in the order of, for example, the element region A, the element region B, and the element region C by selective connection by the multiplexer 20b.
  • In addition to the aspect in which the acoustic detection elements constituting each element region are contiguous in the arrangement direction as shown in FIGS. 2A and 2B, the element regions may be divided as shown in FIG. 2C, that is, a mode in which the acoustic detection elements of one acoustic detection element group are separated from one another by acoustic detection elements of the other groups may be employed.
  • That is, when the detection element array 20a is divided into N element regions (acoustic detection element groups), the nth element region may be composed of the nth, (N+n)th, (2N+n)th, ..., ((Q-2)N+n)th, and ((Q-1)N+n)th acoustic detection elements. Here, Q represents the quotient obtained by dividing the total number of acoustic detection elements included in the acoustic detection unit by N; if there is a remainder, the remaining elements are incorporated into the element regions as evenly as possible. In this case, for example, an acoustic signal is first detected with the element region consisting of the 1st, (N+1)th, (2N+1)th, ... acoustic detection elements, and then with the element region consisting of the 2nd, (N+2)th, (2N+2)th, ... acoustic detection elements, and so on. For example, when the 128 acoustic detection elements are divided into N = 3 element regions, the first element region is composed of the 1st, 4th, 7th, ..., 121st, and 124th acoustic detection elements from the left plus the remaining 127th element; the second element region is composed of the 2nd, 5th, 8th, ..., 122nd, and 125th acoustic detection elements plus the remaining 128th element; and the third element region is composed of the 3rd, 6th, 9th, ..., 123rd, and 126th acoustic detection elements.
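  • The interleaved grouping just described can be generated programmatically; the following sketch (a hypothetical helper, using 0-based indices) reproduces the N = 3 example for 128 elements:

```python
def interleaved_element_regions(total_elements: int, n_regions: int):
    """Region n (0-based) gets elements n, n+N, n+2N, ..., so successive firings
    interleave across the array; any remainder elements fall into the first regions."""
    return [list(range(n, total_elements, n_regions)) for n in range(n_regions)]

regions = interleaved_element_regions(128, 3)
# regions[0] -> elements 0, 3, ..., 126  (1st, 4th, ..., 124th plus the 127th, 1-based)
# regions[1] -> elements 1, 4, ..., 127  (2nd, 5th, ..., 125th plus the 128th, 1-based)
# regions[2] -> elements 2, 5, ..., 125  (3rd, 6th, ..., 126th, 1-based)
print([len(r) for r in regions])  # [43, 43, 42]
```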
  • the optical fiber 40 guides the laser light L output from the laser unit 13 to the vicinity of the detection element array 20a.
  • the optical fiber 40 is not particularly limited, and a known fiber such as a quartz fiber can be used.
  • the laser beam L is guided to the vicinity of the detection element array 20a, and then irradiated to a range including a detection region facing at least a selectively connected element region.
  • A light guide plate or a diffusion plate may also be used in addition to the optical fiber so that the laser light L is irradiated uniformly onto the subject.
  • the laser unit 13 includes a light source that emits laser light L, for example, and outputs the laser light L as light to be irradiated on the subject M.
  • the laser unit 13 is configured to output a laser beam L in response to, for example, a trigger signal from the control unit 29 of the ultrasonic unit 12.
  • the laser light L output from the laser unit 13 is guided to the vicinity of the detection element array 20a of the probe 11 using a light guide unit such as an optical fiber 40, for example.
  • the laser unit 13 preferably outputs pulsed light having a pulse width of 1 to 100 nsec as laser light.
  • the laser unit 13 is a Q-switch alexandrite laser.
  • the pulse width of the laser light L is controlled by, for example, a Q switch.
  • the wavelength of the laser light is appropriately determined according to the light absorption characteristics of the substance in the subject to be measured.
  • the wavelength is preferably a wavelength belonging to the near-infrared wavelength region.
  • the near-infrared wavelength region means a wavelength region of about 700 to 850 nm.
  • the wavelength of the laser beam is not limited to this.
  • the laser beam L may be a single wavelength or may include a plurality of wavelengths (for example, 750 nm and 800 nm). Furthermore, when the laser light L includes a plurality of wavelengths, the light of these wavelengths may be irradiated to the subject M at the same time, or may be irradiated while being switched alternately.
  • the laser unit 13 may be a YAG-SHG-OPO laser or a Ti-Sapphire laser that can output laser light in the near-infrared wavelength region in addition to the alexandrite laser.
  • The coordinate acquisition unit sequentially acquires coordinates (hereinafter also simply referred to as coordinates) that define the position and posture of the probe 11 (that is, of the acoustic detection unit 20) in real space while the probe 11 is being scanned.
  • the coordinate acquisition unit is a magnetic sensor unit
  • this magnetic sensor unit includes a coordinate acquisition control unit 15, a magnetic field generation unit 41 such as a transmitter, and a magnetic sensor 42.
  • The magnetic sensor unit can obtain the position (x, y, z) and the posture (angles) of the magnetic sensor relative to the space of the magnetic field generation unit system (the space defined by the pulsed magnetic field formed by the magnetic field generation unit).
  • the position and orientation of the magnetic sensor are associated with the position and orientation of the probe.
  • the “magnetic sensor position” means the position of the reference point of the magnetic sensor determined based on the magnetic field information acquired by the magnetic sensor.
  • the attitude of the magnetic sensor means, for example, the inclination of the space (the space of the magnetic sensor system) with the reference point relating to the magnetic sensor as the origin. Note that when the scanning of the probe 11 is only parallel movement, the acquired information may be only the relative position.
  • the coordinate acquisition control unit 15 sets the coordinates of the probe 11 at that time as the origin in the space of the magnetic field generation unit system, for example.
  • This space is, for example, a three-axis (x, y, z) space when only parallel movement is considered, and a six-axis space consisting of (x, y, z) plus the three posture angles when rotational movement is also considered.
  • the origin is set so that the axis of the space is along the array direction of the detection element array 20a (direction in which the acoustic detection elements 20c are arranged) or the elevation direction (direction perpendicular to the array direction and parallel to the detection surface of the detection element array 20a). It is preferable to do.
  • the coordinate acquisition unit may be configured to acquire coordinates using an acceleration sensor, an infrared sensor, or the like in addition to the magnetic sensor unit.
  • The coordinate acquisition unit acquires the coordinates of the probe 11 at a predetermined cycle (coordinate acquisition cycle); in the present embodiment, for example, the coordinate acquisition cycle of the magnetic sensor unit is 5 ms.
  • the acquired coordinates are transmitted to the control means 29. These coordinates are used when generating three-dimensional volume data based on the acoustic signal, generating tomographic data from the volume data, or arranging two-dimensional acoustic images in order according to the position. .
  • In the present embodiment, two magnetic sensors 42 serving as reading points for reading coordinates for the coordinate acquisition unit are provided in the probe 11 (FIG. 1); for example, one is arranged in the vicinity of the element region A and the other in the vicinity of the element region B.
  • The control means 29 sets the representative coordinates based on the coordinates read by the plurality of magnetic sensors at each reading timing; for example, for each element region it adopts the coordinates read by the closer magnetic sensor 42, or takes a weighted average of the coordinates read by the two magnetic sensors 42.
  • One magnetic sensor 42 may be provided.
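  • A minimal sketch of these two choices, assuming that one coordinate reading per sensor and the distances from each sensor to the active element region are available (all names are illustrative, not taken from the patent):

```python
import numpy as np

def representative_coords(sensor_a, sensor_b, dist_a, dist_b, mode="weighted"):
    """Combine the readings of the two magnetic sensors 42 into representative
    coordinates for the currently active element region.

    sensor_a, sensor_b -- coordinate vectors read by the two sensors, e.g. (x, y, z)
    dist_a, dist_b     -- distances from each sensor to the active element region
    mode               -- "nearest": adopt the closer sensor's reading;
                          "weighted": inverse-distance weighted average
    """
    a = np.asarray(sensor_a, dtype=float)
    b = np.asarray(sensor_b, dtype=float)
    if mode == "nearest":
        return a if dist_a <= dist_b else b
    w_a = 1.0 / max(dist_a, 1e-9)
    w_b = 1.0 / max(dist_b, 1e-9)
    return (w_a * a + w_b * b) / (w_a + w_b)
```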
  • the ultrasonic unit 12 includes a reception circuit 21, an AD conversion unit 22, a reception memory 23, a photoacoustic image reconstruction unit 24, a detection / logarithm conversion unit 27, a photoacoustic image construction unit 28, a control unit 29, an image synthesis unit 38, and Observation method selection means 39 is provided.
  • the ultrasonic unit 12 corresponds to an acoustic signal processing unit in the present invention.
  • the control means 29 controls each part of the photoacoustic image generation apparatus 10, and includes a trigger control circuit 30 in the present embodiment, for example.
  • the trigger control circuit 30 sends a light trigger signal to the laser unit 13 when the photoacoustic image generation apparatus is activated, for example.
  • Upon receiving the light trigger signal, the laser unit 13 turns on its flash lamp and starts exciting the laser rod. The excited state of the laser rod is then maintained, and the laser unit 13 becomes ready to output laser light.
  • the control means 29 then transmits a Qsw trigger signal from the trigger control circuit 30 to the laser unit 13. That is, the control means 29 controls the output timing of the laser light from the laser unit 13 by this Qsw trigger signal.
  • the transmission of the Qsw trigger signal may be transmitted at regular time intervals, or may be transmitted at regular coordinate intervals based on the coordinates obtained from the coordinate acquisition unit.
  • the control unit 29 transmits the sampling trigger signal to the AD conversion unit 22 simultaneously with the transmission of the Qsw trigger signal.
  • the sampling trigger signal serves as a cue for the start timing of the photoacoustic signal sampling in the AD conversion means 22. As described above, by using the sampling trigger signal, it is possible to sample the photoacoustic signal in synchronization with the output of the laser beam.
  • control means 29 acquires the coordinates of the probe 11 (more precisely, the coordinates of the element region in consideration of the distance between the magnetic sensor and the detection element array 20a) from the coordinate acquisition control unit 15 simultaneously with the transmission of the Qsw trigger signal. Then, based on the coordinates, the representative coordinates are set in the detection area where the photoacoustic wave is detected at that time. That is, the control means 29 corresponds to a coordinate setting unit in the present invention. This makes it possible to synchronize the three timings of laser light output, photoacoustic wave detection for each detection region, and coordinate setting.
  • the control unit 29 issues a command to the coordinate acquisition unit that the coordinate should be acquired before the transmission of the Qsw trigger signal.
  • the representative coordinates are coordinates representative of the spatial position of the detection area, and are set for each detection area as described above. Information on the set representative coordinates is transmitted to the reception memory 23 and stored in association with the photoacoustic signal obtained in the detection area. The representative coordinates are determined based on the coordinates of the acoustic detection unit 20 acquired by the coordinate acquisition unit when detecting the photoacoustic wave.
  • the representative coordinates are calculated coordinates calculated using a plurality of coordinates acquired while photoacoustic waves are detected in a certain detection area (for example, average value, weighted average value, median value, mode value, etc. ).
  • the representative coordinate may be one of the coordinates acquired while the photoacoustic wave is detected in a certain detection area. In the present embodiment, the latter is adopted, and the former representative coordinates will be described in detail in the third embodiment.
  • control means 29 can be configured to start transmission of a Qsw trigger signal when a predetermined switch provided on the probe 11 is pressed. If comprised in this way, the position of the probe 11 when a switch is pushed can be handled as a starting point of probe scanning. Further, if the transmission of the Qsw trigger signal is terminated when the switch is pressed next time, the position of the probe 11 at that time can be handled as the end point of the probe scan.
  • the receiving circuit 21 receives the photoacoustic signal detected by the probe 11.
  • the photoacoustic signal received by the receiving circuit 21 is transmitted to the AD conversion means 22.
  • the AD conversion means 22 is a sampling means, which samples the photoacoustic signal received by the receiving circuit 21 and converts it into a digital signal.
  • the AD conversion unit 22 includes a sampling control unit and an AD converter.
  • the reception signal received by the reception circuit 21 is converted into a sampling signal digitized by an AD converter.
  • the AD converter is controlled by a sampling control unit, and is configured to perform sampling when the sampling control unit receives a sampling trigger signal.
  • the AD converter 22 samples the received signal at a predetermined sampling period based on, for example, an AD clock signal having a predetermined frequency input from the outside.
  • the reception memory 23 stores the photoacoustic signal (that is, the sampling signal) sampled by the AD conversion means 22 and the representative coordinate information transmitted from the control means 29 in association with each other. Then, the reception memory 23 outputs the photoacoustic signal detected by the probe 11 to the photoacoustic image reconstruction unit 24.
  • The photoacoustic image reconstruction means 24 sequentially reads out the photoacoustic signals obtained for each detection region from the reception memory 23, and generates, for each detection region, the signal data of each line of the partial image data for displaying that detection region based on these photoacoustic signals. Specifically, the photoacoustic image reconstruction means 24 adds the 64-ch data obtained for each detection region with delay times corresponding to the positions of the acoustic detection elements, and generates signal data for one line (delay addition method). The photoacoustic image reconstruction means 24 may perform reconstruction by the CBP method (Circular Back Projection) instead of the delay addition method, or may perform reconstruction using the Hough transform method or the Fourier transform method.
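  • A compact sketch of the delay addition (delay-and-sum) step described here, assuming a one-dimensional array with uniform element pitch and a constant speed of sound; the parameter values and names are illustrative and not specified by the patent:

```python
import numpy as np

def delay_and_sum_line(rf, pitch_m, fs_hz, c_m_s, line_x_m, depths_m):
    """Reconstruct one image line from one detection region's 64-ch RF data.

    rf       -- array of shape (n_elements, n_samples)
    pitch_m  -- element pitch of the detection element array
    fs_hz    -- sampling frequency of the AD conversion means
    c_m_s    -- assumed speed of sound in the subject
    line_x_m -- lateral position of the reconstructed line
    depths_m -- depths at which to reconstruct (for the one-way photoacoustic
                propagation, time maps to depth as t = r / c)
    """
    n_elem, n_samp = rf.shape
    elem_x = (np.arange(n_elem) - (n_elem - 1) / 2.0) * pitch_m
    line = np.zeros(len(depths_m))
    for i, z in enumerate(depths_m):
        # one-way travel time from a source at (line_x_m, z) to each element
        t = np.sqrt((elem_x - line_x_m) ** 2 + z ** 2) / c_m_s
        idx = np.clip(np.round(t * fs_hz).astype(int), 0, n_samp - 1)
        line[i] = rf[np.arange(n_elem), idx].sum()   # delay and add (phase matching)
    return line
```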
  • the detection / logarithm conversion means 27 obtains an envelope of the signal data of each line, and logarithmically converts the obtained envelope.
  • the photoacoustic image construction means 28 constructs photoacoustic image data based on the signal data of each line subjected to logarithmic transformation. That is, the photoacoustic image construction unit 28 converts the signal data of each line into image data, and generates partial image data for each detection region.
  • the reconstruction of the signal data of each line that is the basis of the partial image data is performed based only on the photoacoustic signal obtained in the detection region related to the partial image data. That is, the generation of partial image data for displaying a certain detection area is performed based only on the photoacoustic signal obtained in the detection area.
  • FIG. 3 shows the case where the partial image data IMa for displaying a given detection region are generated from the 64-ch photoacoustic signal Sa obtained in that detection region by the element region A, and the partial image data for the other detection region are likewise generated from the 64-ch photoacoustic signal obtained by the element region B. Illustration of the reconstruction process for generating the partial image data of FIG. 3B from the photoacoustic signals of FIG. 3A is omitted.
  • FIG. 4 is a conceptual diagram showing partial image data stored in volume data.
  • the partial image data is arranged in order for each set of detection regions R corresponding to the imaging region.
  • As shown in FIG. 4, even within one set of detection regions, the partial image data IMb for the detection region facing the element region B are stored in the volume data while being shifted in the scanning direction of the probe 11 relative to the partial image data IMa for the detection region facing the element region A.
  • Here, a set of detection regions corresponding to the imaging region means a combination of detection regions whose associated element regions differ from one another.
  • the set of detection regions R is a combination of a detection region corresponding to the element region A and a detection region corresponding to the element region B.
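  • How the partial image data and their representative coordinates might be paired up into volume data, as illustrated in FIG. 4, can be sketched as follows (a translation-only simplification with hypothetical names; rotation of the probe and interpolation between slices are ignored):

```python
import numpy as np

def insert_partial_image(volume, partial_image, rep_coords, voxel_size_m,
                         lateral_offset_vox=0, scan_axis_coord=0):
    """Place one detection region's partial image data into the volume at the
    slice determined by its representative coordinates.

    volume             -- 3-D array (scan direction, depth, lateral)
    partial_image      -- 2-D partial image data (depth, lateral)
    rep_coords         -- representative coordinates of the detection region, e.g. (x, y, z)
    voxel_size_m       -- voxel size along the scanning direction
    lateral_offset_vox -- lateral placement of this detection region within the frame
    """
    slice_idx = int(round(rep_coords[scan_axis_coord] / voxel_size_m))
    slice_idx = int(np.clip(slice_idx, 0, volume.shape[0] - 1))
    d, w = partial_image.shape
    volume[slice_idx, :d, lateral_offset_vox:lateral_offset_vox + w] = partial_image
    return volume
```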
  • the photoacoustic image construction means 28 constructs a photoacoustic image by converting, for example, a position in the time axis direction of the photoacoustic signal (peak portion) into a position in the depth direction in the photoacoustic image.
  • the observation method selection means 39 is for selecting the display mode of the photoacoustic image.
  • Examples of the volume data display mode for the photoacoustic signal include a mode as a three-dimensional image, a mode as a cross-sectional image, and a mode as a graph on a predetermined axis.
  • the display mode is selected according to the initial setting or the input from the input unit 16 by the user.
  • the image composition unit 38 performs necessary processing (for example, scale correction and coloring according to the voxel value) on the generated volume data.
  • the photoacoustic image data generated according to the selected observation method is the final image (display image) to be displayed on the display means 14.
  • With the photoacoustic image data generation method described above, it is naturally possible for the user to rotate or move the image as necessary after the photoacoustic image data has once been generated.
  • FIG. 5 is a flowchart showing the steps of the photoacoustic image generation method of the present embodiment.
  • FIG. 6 is a timing chart for laser beam emission, photoacoustic signal detection, and coordinate setting in this embodiment.
  • In FIG. 6, LT indicates the laser light emission timing (repetition frequency 15 Hz), AT1, AT2, ..., ATn indicate the photoacoustic wave detection timings and detection periods in the element region A, BT1, BT2, ..., BTn indicate the photoacoustic wave detection timings and detection periods in the element region B, and PT indicates the coordinate acquisition timing of the coordinate acquisition unit.
  • First, the acoustic detection element group is switched so that the acoustic detection elements belonging to the element region A of the detection element array 20a are selected (STEPs 10 and 11).
  • the above procedure is repeated until the scanning of the probe 11 is completed, and when the scanning of the probe 11 is completed, the detection of the photoacoustic wave is completed.
  • The end of scanning of the probe 11 can be determined, for example, by detecting that the predetermined switch of the probe 11 has been pressed again, or by automatically detecting that the scanning speed of the probe 11 has become zero.
  • As described above, in the present embodiment, the photoacoustic wave is detected for each detection region, the representative coordinates are set, and the volume data is generated by associating the partial image data for displaying each detection region with the representative coordinates set for that detection region. Therefore, the accuracy of the position of the photoacoustic image data within the volume data can be improved compared with simply arranging the frames of photoacoustic image data generated from the photoacoustic signals obtained for each detection region (that is, at different times). This is because, according to the present invention, accurate volume data can be generated by reflecting the shift of each detection region caused by the scanning of the probe 11.
  • The present embodiment is different from the first embodiment in that the ultrasonic unit (acoustic signal processing unit) generates the partial image data for displaying a certain detection region based on the photoacoustic signal obtained in that detection region and the photoacoustic signal obtained in another detection region. Therefore, a detailed description of the same components as those in the first embodiment is omitted unless particularly necessary.
  • FIG. 7 is a conceptual diagram illustrating a partial image data generation process in the present embodiment.
  • the photoacoustic image generation apparatus 10 of this embodiment also includes a probe 11, an ultrasonic unit 12, a laser unit 13, a display unit 14, a coordinate acquisition unit (15, 41 and 42), and an input unit 16. Is provided.
  • the ultrasonic unit 12 includes a reception circuit 21, an AD conversion unit 22, a reception memory 23, a photoacoustic image reconstruction unit 24, a detection / logarithm conversion unit 27, a photoacoustic image construction unit 28, a control unit 29, an image synthesis unit 38, and Observation method selection means 39 is provided. Then, the ultrasonic unit 12 generates partial image data for displaying a certain detection area based on the photoacoustic signal obtained in the detection area and the photoacoustic signal obtained in another detection area.
  • In the present embodiment, the photoacoustic image reconstruction means 24 does not reconstruct the photoacoustic signals until the detection of the photoacoustic wave has been completed for all the detection regions.
  • Then, the photoacoustic image reconstruction means 24 collects the photoacoustic signals Sa and Sb obtained in a whole set of detection regions (the detection region facing the element region A and the detection region facing the element region B) (FIG. 7A), and combines these photoacoustic signals into one by arranging them side by side according to their element regions (FIG. 7B).
  • the photoacoustic image reconstruction means 24 reconstructs the photoacoustic signal by using the entire collected photoacoustic signal (128ch) so as to generate signal data that is the basis of the photoacoustic image for one frame.
  • That is, when the photoacoustic signal Sa occupies channels 1 to 64 and the photoacoustic signal Sb occupies channels 65 to 128 of the combined photoacoustic signal (128 ch), the reconstruction is performed with channel combinations such as 1 to 64 ch, 2 to 65 ch, 3 to 66 ch, ..., 65 to 128 ch.
  • In FIG. 7, illustration of the reconstruction process for generating the photoacoustic image data of FIG. 7C from the photoacoustic signal of FIG. 7B is omitted.
  • the photoacoustic image reconstruction unit 24 transmits the signal data of each line obtained by these reconstructions to the detection / logarithm conversion unit 27.
  • the photoacoustic image construction means 28 generates photoacoustic image data IM for one frame based on the signal data received from the detection / logarithm conversion means 27 (C in FIG. 7). However, since the photoacoustic image construction means 28 processes and manages the partial image data IMa and IMb for each detection region, when storing the image data in the volume data, the photoacoustic image data IM for one frame is stored in the partial image. The data is divided into data IMa and IMb (D in FIG. 7). Thereby, similarly to the first embodiment, it is possible to generate the volume data by associating the partial image data for displaying each detection area with the representative coordinates set in each detection area.
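  • The flow of FIG. 7 can be sketched as follows (hypothetical names; `reconstruct_line` stands for a per-line reconstruction such as the delay addition method, applied to sliding 64-channel sub-apertures like 1-64 ch, 2-65 ch, ..., 65-128 ch as described above):

```python
import numpy as np

def reconstruct_and_split(sig_a, sig_b, reconstruct_line, aperture=64):
    """Second-embodiment style reconstruction: combine the two element regions'
    64-ch signals into one 128-ch signal, reconstruct each line from a sliding
    sub-aperture, then split the one-frame image back into the partial image
    data IMa and IMb for storage in the volume data.

    sig_a, sig_b     -- RF data from element regions A and B, shape (64, n_samples) each
    reconstruct_line -- callable taking an (aperture, n_samples) array and returning
                        one image column (e.g. a delay-and-sum routine)
    """
    combined = np.concatenate([sig_a, sig_b], axis=0)         # 128-ch signal (FIG. 7B)
    n_lines = combined.shape[0] - aperture + 1                # 1-64 ch, 2-65 ch, ..., 65-128 ch
    columns = [reconstruct_line(combined[i:i + aperture]) for i in range(n_lines)]
    frame = np.stack(columns, axis=1)                         # one-frame image IM (FIG. 7C)
    half = frame.shape[1] // 2
    im_a, im_b = frame[:, :half], frame[:, half:]             # partial image data IMa, IMb (FIG. 7D)
    return im_a, im_b
```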
  • FIG. 8 is a flowchart showing the steps of the photoacoustic image generation method of the present embodiment. Note that the timing chart regarding the emission of laser light, the detection of photoacoustic signals, and the setting of coordinates in this embodiment is the same as that in FIG.
  • the photoacoustic signal detected in the first detection region and the representative coordinate are associated and stored in the memory (STEP 25).
  • Detection of photoacoustic waves used to generate photoacoustic image data for a frame is started (STEPs 31, 32, 22 and 23). The above procedure is repeated until the scanning of the probe 11 is completed, and when the scanning of the probe 11 is completed, the detection of the photoacoustic wave is completed.
  • Also in the present embodiment, the photoacoustic wave is detected for each detection region, the representative coordinates are set, and the volume data is generated by associating the partial image data for displaying each detection region with the representative coordinates set for that detection region, so that the same effect as in the first embodiment can be obtained.
  • The present embodiment is different from the first embodiment in that the control means 29 sets, as the representative coordinates, calculated coordinates computed from a plurality of coordinates acquired while the photoacoustic wave is detected in a certain detection region. Therefore, a detailed description of the same components as those in the first embodiment is omitted unless particularly necessary.
  • FIG. 9 is a timing chart for laser beam emission, photoacoustic signal detection, and coordinate setting in the present embodiment.
  • the photoacoustic image generation apparatus 10 of this embodiment also includes a probe 11, an ultrasonic unit 12, a laser unit 13, a display unit 14, a coordinate acquisition unit (15, 41 and 42), and an input unit 16. Is provided.
  • For example, based on the coordinates acquired at timings p1 to p4 within the detection period AT1 in which the photoacoustic wave is detected by the element region A, the control means 29 calculates, for example, the average value, weighted average value, median value, or mode value of these coordinates, and sets the calculated value (calculated coordinates) as the representative coordinates.
  • The control means 29 may also obtain the calculated coordinates using the coordinates acquired immediately before and after the period in which the photoacoustic wave is detected in a certain detection region (for example, the timing p0 in FIG. 9). By doing so, noise in the coordinate information is further reduced, and the accuracy of matching between the representative coordinates and the actual position of the detection region is further improved.
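  • A sketch of this calculation, assuming the coordinate acquisition unit delivers timestamped coordinate samples (at its 5 ms cycle) and the detection period of the region is known; the function and parameter names are illustrative:

```python
import numpy as np

def calculated_representative_coords(timestamps_s, coords, t_start_s, t_end_s,
                                     include_neighbors=True, method="mean"):
    """Compute the calculated coordinates (representative coordinates) of one
    detection region from the coordinates acquired during its detection period.

    timestamps_s       -- acquisition times of the coordinate samples
    coords             -- array of shape (n_samples, n_dims), e.g. (x, y, z) per sample
    t_start_s, t_end_s -- detection period of the region (e.g. AT1 in FIG. 9)
    include_neighbors  -- also use the samples immediately before and after the
                          period (e.g. timing p0 in FIG. 9)
    method             -- "mean" or "median" (a weighted average or mode could be used)
    """
    timestamps_s = np.asarray(timestamps_s)
    coords = np.asarray(coords, dtype=float)
    idx = np.flatnonzero((timestamps_s >= t_start_s) & (timestamps_s <= t_end_s))
    if idx.size == 0:
        raise ValueError("no coordinate samples fall inside the detection period")
    if include_neighbors:
        if idx[0] > 0:
            idx = np.r_[idx[0] - 1, idx]
        if idx[-1] < len(timestamps_s) - 1:
            idx = np.r_[idx, idx[-1] + 1]
    selected = coords[idx]
    return np.median(selected, axis=0) if method == "median" else selected.mean(axis=0)
```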
  • In the present embodiment, the ultrasonic unit (acoustic signal processing unit) generates the partial image data for displaying a certain detection region based only on the photoacoustic signal obtained in that detection region; however, the present invention can also be applied to the case where the partial image data for displaying a certain detection region are generated based on the photoacoustic signal obtained in that detection region and the photoacoustic signals obtained in the other detection regions.
  • FIG. 10 is a block diagram illustrating a configuration of the photoacoustic image generation apparatus 10 of the present embodiment.
  • This embodiment is different from the first embodiment in that an ultrasonic image is generated in addition to the photoacoustic image. Therefore, a detailed description of the same components as those in the first embodiment will be omitted unless particularly necessary.
  • the photoacoustic image generation apparatus 10 of this embodiment also includes a probe 11, an ultrasonic unit 12, a laser unit 13, a display unit 14, a coordinate acquisition unit (15, 41 and 42), and an input unit 16. Is provided.
  • The ultrasonic unit 12 of the present embodiment further includes a transmission control circuit 33, a data separation means 34, an ultrasonic image reconstruction means 35, a detection/logarithm conversion means 36, and an ultrasonic image construction means 37.
  • the probe 11 performs output (transmission) of ultrasonic waves to the subject and detection (reception) of reflected ultrasonic waves from the subject with respect to the transmitted ultrasonic waves.
  • As the acoustic detection elements for transmitting and receiving ultrasonic waves, the acoustic detection element array described above may be used, or a separate acoustic detection element array newly provided in the probe 11 for ultrasonic transmission and reception may be used.
  • transmission and reception of ultrasonic waves may be separated. For example, ultrasonic waves may be transmitted from a position different from the probe 11, and reflected ultrasonic waves with respect to the transmitted ultrasonic waves may be received by the probe 11.
  • the trigger control circuit 30 sends an ultrasonic transmission trigger signal for instructing ultrasonic transmission to the transmission control circuit 33 when generating an ultrasonic image.
  • Upon receiving this trigger signal, the transmission control circuit 33 causes the probe 11 to transmit an ultrasonic wave.
  • the probe 11 detects the reflected ultrasonic wave from the subject after transmitting the ultrasonic wave.
  • the reflected ultrasonic waves detected by the probe 11 are input to the AD conversion means 22 via the receiving circuit 21.
  • the trigger control circuit 30 sends a sampling trigger signal to the AD conversion means 22 in synchronization with the timing of ultrasonic transmission, and starts sampling of reflected ultrasonic waves.
  • The reflected ultrasonic waves travel back and forth between the probe 11 and the ultrasonic reflection position, whereas the photoacoustic wave travels one way from its generation position to the probe 11. Since the detection of a reflected ultrasonic wave therefore takes twice as long as the detection of a photoacoustic signal generated at the same depth, the sampling clock of the AD conversion means 22 may be half that used when sampling the photoacoustic signal, for example 20 MHz.
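  • The factor of two follows from the one-way versus round-trip travel paths; a small worked sketch (the speed of sound and depth are illustrative values, and the 40 MHz photoacoustic sampling clock is an assumption implied by the 20 MHz example, not stated explicitly):

```python
C_TISSUE_M_S = 1540.0  # assumed speed of sound in soft tissue

def photoacoustic_depth(t_s, c=C_TISSUE_M_S):
    """Photoacoustic wave travels one way: source -> probe, so depth = c * t."""
    return c * t_s

def reflected_ultrasound_depth(t_s, c=C_TISSUE_M_S):
    """Reflected ultrasound travels round trip: probe -> reflector -> probe."""
    return c * t_s / 2.0

# A photoacoustic source and an ultrasonic reflector at the same 30 mm depth:
t_pa = 0.030 / C_TISSUE_M_S        # ~19.5 microseconds, one way
t_us = 2 * 0.030 / C_TISSUE_M_S    # ~39.0 microseconds, round trip
# Because the echo takes twice as long for the same depth, the same depth range can
# be covered with half the sampling clock (e.g. 20 MHz instead of 40 MHz) while
# keeping the same number of samples per line.
```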
  • the AD conversion means 22 stores the reflected ultrasonic sampling signal in the reception memory 23. Either sampling of the photoacoustic signal or sampling of the reflected ultrasonic wave may be performed first.
  • the data separating means 34 separates the photoacoustic signal sampling signal and the reflected ultrasonic sampling signal stored in the reception memory 23.
  • the data separation unit 34 inputs a sampling signal of the separated photoacoustic signal to the photoacoustic image reconstruction unit 24.
  • the generation of the photoacoustic image is the same as that in the first embodiment.
  • the data separation unit 34 inputs the separated reflected ultrasound sampling signal to the ultrasound image reconstruction unit 35.
  • the ultrasonic image reconstruction means 35 generates data of each line of the ultrasonic image based on the reflected ultrasonic waves (its sampling signals) detected by the plurality of acoustic detection elements of the probe 11. For the generation of the data of each line, a delay addition method or the like can be used as in the generation of the data of each line in the photoacoustic image reconstruction means 24.
  • the detection / logarithm conversion means 36 obtains the envelope of the data of each line output from the ultrasonic image reconstruction means 35 and logarithmically transforms the obtained envelope.
  • the ultrasonic image construction means 37 generates an ultrasonic image based on the data of each line subjected to logarithmic transformation.
  • the image synthesis means 38 synthesizes, for example, a photoacoustic image and an ultrasonic image.
  • the image composition unit 38 performs image composition by superimposing a photoacoustic image and an ultrasonic image, for example.
  • the synthesized image is displayed on the display means 14. It is also possible to display the photoacoustic image and the ultrasonic image side by side on the display unit 14 without performing image synthesis, or to switch between the photoacoustic image and the ultrasonic image.
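  • One common way to realize the superposition described here is alpha blending of the photoacoustic image over the ultrasound image; the following is a minimal sketch (the blending rule and threshold are illustrative choices, not specified by the patent):

```python
import numpy as np

def overlay_photoacoustic_on_ultrasound(us_img, pa_img, alpha=0.5, threshold=0.1):
    """Superimpose a normalized photoacoustic image on an ultrasound image of the
    same shape; only photoacoustic pixels above the threshold are blended so the
    ultrasound background remains visible elsewhere."""
    us = np.asarray(us_img, dtype=float)
    pa = np.asarray(pa_img, dtype=float)
    out = us.copy()
    mask = pa > threshold
    out[mask] = (1.0 - alpha) * us[mask] + alpha * pa[mask]
    return out
```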
  • Also in the present embodiment, the photoacoustic wave is detected for each detection region, the representative coordinates are set, and the volume data is generated by associating the partial image data for displaying each detection region with the representative coordinates set for that detection region, so that the same effect as in the first embodiment can be obtained.
  • the photoacoustic measurement device of the present embodiment generates an ultrasonic image in addition to the photoacoustic image. Therefore, by referring to the ultrasonic image, a portion that cannot be imaged in the photoacoustic image can be observed.


Abstract

[Problem] To enable the internal structure of a subject to be depicted with greater accuracy even when volume data is generated on the basis of a photoacoustic signal detection carried out in multiple stages. [Solution] This photoacoustic image generation device (10) is provided with: an acoustic detection unit (20) which comprises a plurality of acoustic detection elements (20c) and which divides an imaging region into multiple detection regions (AT1 and BT2), and detects the photoacoustic waves for each detection region (AT1 and BT2); a coordinate acquisition unit which acquires coordinates; a control means (29) which sets, for each detection region, representative coordinates that represent the coordinates of the detection region, on the basis of the coordinates from the acoustic detection unit (20) during the detection of the photoacoustic waves; and an acoustic signal processing unit (12) which associates the respective partial image data (IMa and IMb) representing each detection region with the representative coordinates set for each detection region, and generates volume data.

Description

Photoacoustic image generation apparatus and photoacoustic image generation method
 The present invention relates to a photoacoustic image generation apparatus and a photoacoustic image generation method for generating a photoacoustic image based on a photoacoustic wave generated due to light absorption.
 Photoacoustic spectroscopy irradiates a subject with light having a predetermined wavelength (for example, in the visible, near-infrared, or mid-infrared wavelength band), detects a photoacoustic wave, which is an elastic wave generated when a specific substance in the subject absorbs the energy of this light, and measures the concentration or distribution of that specific substance (for example, Patent Document 1). The specific substance in the subject is, for example, glucose or hemoglobin contained in blood when the subject is a human body. A technique for detecting a photoacoustic wave and generating a photoacoustic image based on the detection signal is called photoacoustic imaging (PAI: Photoacoustic Imaging) or photoacoustic tomography (PAT: Photoacoustic Tomography).
 For example, Patent Document 1 discloses a method in which, when generating a photoacoustic image of an imaging target range divided into a plurality of partial regions, the photoacoustic waves (or photoacoustic signals) used to generate one frame of the photoacoustic image are detected in multiple passes using the acoustic detection elements corresponding to the respective partial regions, all of these photoacoustic signals are temporarily stored in a memory, and a number of data larger than the number that can be sampled in parallel is read from the memory and subjected to phase-matched addition. According to the method of Patent Document 1, a photoacoustic image with higher resolution can be generated even when the number of data that can be sampled in parallel is limited.
JP 2012-005623 A (特開2012-005623号公報)
 Meanwhile, in the case of generating three-dimensional volume data in PAI, there has been no detailed discussion of how to process the photoacoustic signals detected in multiple passes as in Patent Document 1. For example, a method of generating volume data by simply arranging the generated one-frame photoacoustic images is conceivable. However, since a one-frame photoacoustic image generated as in Patent Document 1 is itself an image generated based on photoacoustic signals detected in multiple passes, such a method may fail to represent the structure inside the subject accurately in the volume data.
 The present invention has been made in view of the above problem, and an object of the present invention is to provide a photoacoustic image generation apparatus and a photoacoustic image generation method that can express the structure inside the subject more accurately even when volume data is generated based on photoacoustic signals detected in multiple passes.
 In order to solve the above problem, a photoacoustic image generation apparatus according to the present invention is a photoacoustic image generation apparatus that detects a photoacoustic wave generated in a subject and generates a photoacoustic image based on a photoacoustic signal of the photoacoustic wave, and comprises: an acoustic detection unit having a plurality of acoustic detection elements, which divides an imaging region corresponding to the plurality of acoustic detection elements into a plurality of detection regions and detects the photoacoustic wave for each detection region while sequentially selecting acoustic detection element groups that detect photoacoustic waves in parallel; a coordinate acquisition unit that acquires the coordinates of the acoustic detection unit in space; a coordinate setting unit that sets, for each detection region, representative coordinates representing the coordinates of that detection region, based on the coordinates of the acoustic detection unit acquired by the coordinate acquisition unit when the photoacoustic wave is detected; and an acoustic signal processing unit that generates volume data of the photoacoustic image by associating the partial image data for displaying each detection region, out of the photoacoustic image data generated based on the photoacoustic signal, with the representative coordinates set for each detection region.
 In the photoacoustic image generation apparatus according to the present invention, a configuration may be adopted in which the acoustic signal processing unit generates the partial image data representing a given detection region based on the photoacoustic signals obtained in that detection region and the photoacoustic signals obtained in other detection regions. In this case, a configuration may be adopted in which the acoustic signal processing unit generates the partial image data based on the photoacoustic signals obtained in a set of detection regions corresponding to the imaging region.
 Alternatively, in the photoacoustic image generation apparatus according to the present invention, a configuration may be adopted in which the acoustic signal processing unit generates the partial image data representing a given detection region based only on the photoacoustic signals obtained in that detection region.
 In the photoacoustic image generation apparatus according to the present invention, a configuration may also be adopted in which the coordinate setting unit sets, as the representative coordinate of a given detection region, a calculated coordinate computed from a plurality of coordinates acquired while the photoacoustic waves are being detected in that detection region. In this case, a configuration may be adopted in which the coordinate setting unit obtains the calculated coordinate by also using the coordinates acquired immediately before and immediately after the period during which the photoacoustic waves are detected in that detection region.
 Alternatively, in the photoacoustic image generation apparatus according to the present invention, a configuration may be adopted in which the coordinate setting unit sets, as the representative coordinate of a given detection region, one of the coordinates acquired while the photoacoustic waves are being detected in that detection region, used as it is.
 In the photoacoustic image generation apparatus according to the present invention, a configuration may also be adopted in which the plurality of acoustic detection elements constituting each acoustic detection element group are contiguous. In this case, a configuration may be adopted in which the coordinate acquisition unit has a plurality of reading points at which coordinates are read, and the coordinate setting unit sets the representative coordinate based on the coordinates read at the plurality of reading points. Furthermore, a configuration may be adopted in which the plurality of reading points are provided so as to correspond to the respective acoustic detection element groups.
 Alternatively, in the photoacoustic image generation apparatus according to the present invention, there may be N acoustic detection element groups, and the n-th acoustic detection element group may be composed of the n-th, (N+n)-th, (2N+n)-th, ..., ((Q-2)N+n)-th and ((Q-1)N+n)-th acoustic detection elements, where Q is the quotient obtained by dividing the total number of acoustic detection elements of the acoustic detection unit by N.
 In the photoacoustic image generation apparatus according to the present invention, it is preferable that the acoustic detection unit detects reflected ultrasonic waves resulting from ultrasonic waves transmitted toward the subject, and
 that the acoustic signal processing unit generates an ultrasound image based on the ultrasonic signals of the reflected ultrasonic waves.
 A photoacoustic image generation method according to the present invention is
 a photoacoustic image generation method for detecting photoacoustic waves generated within a subject and generating a photoacoustic image based on photoacoustic signals of the photoacoustic waves, the method comprising:
 detecting the photoacoustic waves for each of a plurality of detection regions into which an imaging region corresponding to a plurality of acoustic detection elements is divided, using an acoustic detection unit having the plurality of acoustic detection elements, while sequentially selecting partial groups of the acoustic detection elements that detect the photoacoustic waves in parallel;
 setting, for each detection region, a representative coordinate representing the coordinates of that detection region, based on coordinates of the acoustic detection unit in space at the time the photoacoustic waves are detected; and
 generating volume data of the photoacoustic image by associating partial image data representing each detection region, among photoacoustic image data generated based on the photoacoustic signals, with the representative coordinate set for that detection region.
 In the photoacoustic image generation method according to the present invention, a configuration may be adopted in which the partial image data representing a given detection region is generated based on the photoacoustic signals obtained in that detection region and the photoacoustic signals obtained in other detection regions. In this case, a configuration may be adopted in which the partial image data is generated based on the photoacoustic signals obtained in a set of detection regions corresponding to the imaging region.
 Alternatively, in the photoacoustic image generation method according to the present invention, a configuration may be adopted in which the partial image data representing a given detection region is generated based only on the photoacoustic signals obtained in that detection region.
 In the photoacoustic image generation method according to the present invention, a configuration may also be adopted in which a calculated coordinate computed from a plurality of coordinates acquired while the photoacoustic waves are being detected in a given detection region is set as the representative coordinate of that detection region. In this case, a configuration may be adopted in which the calculated coordinate is obtained by also using the coordinates acquired immediately before and immediately after the period during which the photoacoustic waves are detected in that detection region.
 Alternatively, in the photoacoustic image generation method according to the present invention, a configuration may be adopted in which one of the coordinates acquired while the photoacoustic waves are being detected in a given detection region is set, as it is, as the representative coordinate of that detection region.
 In the photoacoustic image generation method according to the present invention, a configuration may also be adopted in which the plurality of acoustic detection elements constituting each acoustic detection element group are contiguous.
 In the photoacoustic image generation apparatus and the photoacoustic image generation method according to the present invention, the photoacoustic waves are detected and a representative coordinate is set for each detection region, and the volume data is generated by associating the partial image data representing each detection region with the representative coordinate set for that detection region. The position of the photoacoustic image data within the volume data can therefore be determined more accurately than by simply arranging one-frame photoacoustic image data generated from photoacoustic signals obtained for each detection region (that is, at different times). As a result, the structure inside the subject can be represented more accurately even when the volume data is generated based on photoacoustic signals detected in a plurality of separate acquisitions.
 FIG. 1 is a schematic diagram showing the configuration of a photoacoustic image generation apparatus according to a first embodiment.
 FIG. 2 is a schematic diagram showing the configuration of an acoustic detection unit.
 FIG. 3 is a conceptual diagram showing the process of generating partial image data in the first embodiment.
 FIG. 4 is a conceptual diagram showing partial image data stored in volume data.
 FIG. 5 is a flowchart showing the steps of a photoacoustic image generation method according to the first embodiment.
 FIG. 6 is a timing chart of laser light emission, photoacoustic signal detection and coordinate setting in the first embodiment.
 FIG. 7 is a conceptual diagram showing the process of generating partial image data in a second embodiment.
 FIG. 8 is a flowchart showing the steps of a photoacoustic image generation method according to the second embodiment.
 FIG. 9 is a timing chart of laser light emission, photoacoustic signal detection and coordinate setting in a third embodiment.
 FIG. 10 is a schematic diagram showing the configuration of a photoacoustic image generation apparatus according to a fourth embodiment.
 Embodiments of the present invention will be described below with reference to the drawings, but the present invention is not limited to these embodiments. Note that, for ease of viewing, the scale of each component in the drawings has been changed as appropriate from the actual scale.
 "First Embodiment"
 First, a first embodiment of the present invention will be described in detail. FIG. 1 is a schematic block diagram showing the configuration of the photoacoustic image generation apparatus of this embodiment. FIG. 2 is a schematic diagram showing the configuration of the acoustic detection unit in the probe.
 As shown in FIG. 1, a photoacoustic image generation apparatus 10 of this embodiment includes a probe 11, an ultrasound unit 12, a laser unit 13, display means 14, a coordinate acquisition unit (15, 41 and 42), and input means 16.
 The photoacoustic image generation method of this embodiment uses an acoustic detection unit 20 having 128 acoustic detection elements 20c to detect photoacoustic waves for each of a plurality of detection regions into which the imaging region corresponding to the plurality of acoustic detection elements 20c is divided, while sequentially selecting partial groups of the acoustic detection elements that detect the photoacoustic waves in parallel; sets, for each detection region, a representative coordinate representing the coordinates of that detection region, based on the coordinates of the acoustic detection unit 20 in space at the time the photoacoustic waves are detected; and generates volume data of the photoacoustic image by associating the partial image data representing each detection region, among the photoacoustic image data generated based on the photoacoustic signals, with the representative coordinate set for that detection region.
 <Probe>
 The probe 11 has, for example, an optical fiber 40 that guides laser light L output from the laser unit 13 to a subject M, and the acoustic detection unit 20, which detects acoustic waves U from the subject M and generates electrical signals (acoustic signals) corresponding to the intensity of the detected acoustic waves U. In this specification, "acoustic wave" is meant to include both ultrasonic waves and photoacoustic waves. Here, "ultrasonic wave" means an elastic wave generated within the subject by the vibration of an acoustic wave generator such as a piezoelectric element, together with its reflected wave, and "photoacoustic wave" means an elastic wave generated within the subject by the photoacoustic effect upon irradiation with light. The probe 11 is, for example, a handheld probe configured so that the user can scan it manually. Scanning is not limited to manual scanning and may also be performed by a mechanical mechanism. The probe 11 is selected as appropriate from a sector-scan type, a linear-scan type, a convex-scan type and the like according to the subject M to be examined. In this embodiment, a magnetic sensor 42 that constitutes part of the coordinate acquisition unit is built into the probe 11.
 The acoustic detection unit 20 is composed of, for example, a backing material, a detection element array 20a, a control circuit for the detection element array 20a, a multiplexer 20b, an acoustic matching layer and an acoustic lens. In this embodiment, the detection element array 20a is a one-dimensional array of 128 acoustic detection elements 20c and converts the detected acoustic waves into electrical signals. The number and arrangement of the acoustic detection elements 20c are not limited to this; for example, the number of acoustic detection elements 20c may be 192, and the acoustic detection elements 20c may be arranged two-dimensionally. Each acoustic detection element 20c is a piezoelectric element made of, for example, a piezoelectric ceramic or a polymer film such as polyvinylidene fluoride (PVDF). The multiplexer 20b selectively connects the acoustic detection elements 20c to the ultrasound unit 12 for each partial group of acoustic detection elements 20c that detect acoustic waves in parallel.
 Through the selective connection by the multiplexer 20b, the acoustic detection unit 20 divides the imaging region into a plurality of detection regions corresponding to the acoustic detection element groups that detect photoacoustic waves in parallel, and detects the photoacoustic waves used to generate the photoacoustic image data for each detection region. Here, an acoustic detection element group is a set of acoustic detection elements, among the plurality of acoustic detection elements 20c, that detect photoacoustic waves in parallel. The imaging region is the region of the subject, defined by the detection element array 20a, that is to be displayed in one frame of the photoacoustic image.
 More specifically, the driving of the 128 acoustic detection elements 20c is controlled for each acoustic detection element group by the selective connection of the multiplexer 20b. That is, each detection region is the part of the imaging region corresponding to the partial set (fewer than 128) of acoustic detection elements 20c that are in a state in which they can transmit photoacoustic signals to the ultrasound unit 12 in parallel. In this embodiment, the detection element array 20a is divided into two element regions (element region A and element region B) as acoustic detection element groups, and each element region is composed of 64 acoustic detection elements 20c, as shown for example in FIG. 2A. Each element region may overlap other element regions, but in this embodiment they do not overlap. The numbers of acoustic detection elements constituting the element regions are preferably equal, but do not necessarily have to be exactly equal.
 The photoacoustic waves are detected element region by element region, with the element regions connected to the ultrasound unit 12 in sequence by the selective connection of the multiplexer 20b. In this embodiment, for example, the multiplexer 20b has 64 channels (ch), so that data for 64 channels can be sampled in parallel. Photoacoustic waves are first detected in element region A; after the multiplexer 20b switches the connection to element region B, photoacoustic waves are detected in element region B. The photoacoustic wave signals (photoacoustic signals) detected in each element region are transmitted in sequence to a receiving circuit 21 of the ultrasound unit 12. Reducing the number of acoustic detection elements that detect photoacoustic waves in parallel, that is, the number of receiving channels, in this way makes it possible to reduce the number of channels of AD conversion means 22 and thus to reduce cost. When the probe 11 is scanned, the above detection of photoacoustic waves for each element region is repeated in sequence, so that, because the probe 11 is moving, photoacoustic waves are detected in a plurality of different detection regions even for the same element region.
 The element regions may be obtained by dividing the whole set of acoustic detection elements into two as in FIG. 2A, or into three or more as in FIG. 2B. In the latter case, the photoacoustic waves are detected, for example, in the order of element region A, element region B and element region C by the selective connection of the multiplexer 20b. In addition to arrangements in which the acoustic detection elements constituting each element region are contiguous in the array direction, as in FIGS. 2A and 2B, an arrangement in which one acoustic detection element group is interleaved with the other acoustic detection element groups, as in FIG. 2C, may also be used. That is, the detection element array 20a may be divided into N element regions (acoustic detection element groups), with the n-th element region composed of the n-th, (N+n)-th, (2N+n)-th, ..., ((Q-2)N+n)-th and ((Q-1)N+n)-th acoustic detection elements, where Q is the quotient obtained by dividing the total number of acoustic detection elements of the acoustic detection unit by N. If there is a remainder, the remaining elements may be distributed among the element regions as evenly as possible. In this case, for example, the acoustic signals are detected by the 1st, (N+1)-th, (2N+1)-th, ... elements at one timing and by the 2nd, (N+2)-th, (2N+2)-th, ... elements at the next timing, and so on. FIG. 2C, for example, shows an arrangement with two element regions in which element region A (n = 1) is composed of the odd-numbered channels (the 1st, 3rd, 5th, ..., 125th and 127th elements from the left) and element region B (n = 2) is composed of the even-numbered channels (the 2nd, 4th, 6th, ..., 126th and 128th elements from the left). When there are three element regions, for example, the first element region is composed of the 1st, 4th, 7th, ..., 121st and 124th elements from the left plus the remaining 127th element, the second element region is composed of the 2nd, 5th, 8th, ..., 122nd and 125th elements plus the remaining 128th element, and the third element region is composed of the 3rd, 6th, 9th, ..., 123rd and 126th elements.
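 As a rough illustration of the two division schemes just described, the following Python sketch (hypothetical code, not part of the embodiment; the function names are illustrative) assigns 1-based element indices to element regions, both for the contiguous split of FIGS. 2A and 2B and for the interleaved split of FIG. 2C with the remainder distributed as evenly as possible.

```python
def contiguous_regions(total_elements: int, n_regions: int):
    """Contiguous split as in FIG. 2A/2B; the last block absorbs any remainder (illustrative)."""
    size = total_elements // n_regions
    regions = []
    for n in range(n_regions):
        start = n * size + 1
        end = total_elements if n == n_regions - 1 else (n + 1) * size
        regions.append(list(range(start, end + 1)))
    return regions

def interleaved_regions(total_elements: int, n_regions: int):
    """Interleaved split as in FIG. 2C: region n holds elements n, N+n, 2N+n, ...
    Remainder elements automatically go to the first (total % N) regions."""
    return [list(range(n + 1, total_elements + 1, n_regions)) for n in range(n_regions)]

# Example: 128 elements, N = 2 -> region A = odd channels, region B = even channels
print(interleaved_regions(128, 2)[0][:5])   # [1, 3, 5, 7, 9]
print(interleaved_regions(128, 2)[1][:5])   # [2, 4, 6, 8, 10]
print(contiguous_regions(128, 2)[1][:3])    # [65, 66, 67]
```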
 The optical fiber 40 guides the laser light L output from the laser unit 13 to the vicinity of the detection element array 20a. The optical fiber 40 is not particularly limited, and a known fiber such as a quartz fiber can be used. After being guided to the vicinity of the detection element array 20a, the laser light L is irradiated onto a range that includes at least the detection region facing the currently connected element region. As optical elements for guiding the laser light L to the vicinity of the detection element array 20a, a light guide plate or a diffusion plate may also be used in addition to the optical fiber so that the laser light L is emitted uniformly toward the subject.
 <Laser unit>
 The laser unit 13 has, for example, a light source that emits laser light L, and outputs the laser light L as the light with which the subject M is irradiated. The laser unit 13 is configured to output the laser light L in response to, for example, a trigger signal from control means 29 of the ultrasound unit 12. The laser light L output from the laser unit 13 is guided to the vicinity of the detection element array 20a of the probe 11 using a light guide such as the optical fiber 40. The laser unit 13 preferably outputs pulsed light with a pulse width of 1 to 100 ns as the laser light.
 In this embodiment, for example, the laser unit 13 is a Q-switched alexandrite laser. In this case, the pulse width of the laser light L is controlled by, for example, the Q-switch. The wavelength of the laser light is determined as appropriate according to the light absorption characteristics of the substance in the subject to be measured. For example, when the measurement target is hemoglobin in a living body (that is, when blood vessels are imaged), the wavelength is generally preferably in the near-infrared wavelength range, which here means a wavelength range of approximately 700 to 850 nm; the wavelength of the laser light is, of course, not limited to this. The laser light L may have a single wavelength or may include a plurality of wavelengths (for example, 750 nm and 800 nm). When the laser light L includes a plurality of wavelengths, the light of these wavelengths may be irradiated onto the subject M simultaneously or while being switched alternately. Besides an alexandrite laser, the laser unit 13 may also be a YAG-SHG-OPO laser or a Ti:sapphire laser, which can likewise output laser light in the near-infrared wavelength range.
 <Coordinate acquisition unit>
 The coordinate acquisition unit sequentially acquires coordinates that define the position and orientation of the probe 11 (that is, of the acoustic detection unit 20) in real space (hereinafter also referred to simply as coordinates), either continuously or while the probe 11 is being scanned.
 In this embodiment, for example, the coordinate acquisition unit is a magnetic sensor unit composed of a coordinate acquisition control unit 15, a magnetic field generation unit 41 such as a transmitter, and the magnetic sensor 42. The magnetic sensor unit can acquire the position (x, y, z) and orientation (angles) (α, β, γ) of the magnetic sensor relative to the space of the magnetic field generation unit system (the space over the pulsed magnetic field formed by the magnetic field generation unit), and this position and orientation of the magnetic sensor are associated with the position and orientation of the probe. The "position of the magnetic sensor" means the position of a reference point of the magnetic sensor determined based on the magnetic field information acquired by the magnetic sensor, and the "orientation of the magnetic sensor" means, for example, the inclination of the space whose origin is that reference point (the space of the magnetic sensor system). When the scanning of the probe 11 involves only translation, the acquired information may be the relative position only.
 When an origin-reset operation is performed before the probe 11 is scanned, the coordinate acquisition control unit 15 sets, for example, the coordinates of the probe 11 at that time as the origin of the space of the magnetic field generation unit system. This space is, for example, a three-axis (x, y, z) space when only translation is considered, and a six-axis (x, y, z, α, β, γ) space when rotation is also considered. The origin is preferably set so that an axis of the space lies along the array direction of the detection element array 20a (the direction in which the acoustic detection elements 20c are arranged) or along the elevation direction (the direction perpendicular to the array direction and parallel to the detection surface of the detection element array 20a). The coordinate acquisition unit may also be configured to acquire the coordinates using an acceleration sensor, an infrared sensor or the like instead of the magnetic sensor unit.
 The coordinate acquisition unit acquires the coordinates of the probe 11, for example, at a predetermined period (coordinate acquisition period). The smaller this coordinate acquisition period, the more accurately the position of the probe 11 can be determined; for example, the coordinate acquisition period of the magnetic sensor unit is 5 ms. The acquired coordinates are transmitted to the control means 29 and are used when generating three-dimensional volume data based on the acoustic signals, when generating tomographic data from that volume data, or when arranging two-dimensional acoustic images in order according to their positions.
 In this embodiment, two magnetic sensors 42 serving as reading points at which the coordinate acquisition unit reads coordinates are provided inside the probe 11 (FIG. 1); for example, one is arranged near element region A and the other near element region B. At each reading timing, the control means 29 sets the representative coordinate based on the coordinates read by the plurality of magnetic sensors, for example by adopting, for each element region, the coordinate read by the nearer magnetic sensor 42, or by taking a weighted average of the coordinates read by the two magnetic sensors 42. Providing a plurality of magnetic sensors corresponding to the respective element regions in this way further improves the accuracy of the match between the representative coordinate and the actual position of the detection region. A single magnetic sensor 42 may also be used.
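 The following Python sketch illustrates, under simplified assumptions, how a per-region coordinate could be formed from the readings of the two magnetic sensors, either by adopting the nearer sensor's reading as it is or by taking a weighted average; the function name, the flat pose format (x, y, z, α, β, γ) and the linear averaging of angles are illustrative simplifications, not part of the embodiment.

```python
import numpy as np

def region_coordinate(sensor_a_pose, sensor_b_pose, weight_a):
    """Combine two sensor readings (x, y, z, alpha, beta, gamma) into one region pose.

    weight_a = 1.0 corresponds to adopting the sensor near element region A as it is;
    intermediate values give a weighted average, e.g. for a region lying between the sensors.
    """
    a = np.asarray(sensor_a_pose, dtype=float)
    b = np.asarray(sensor_b_pose, dtype=float)
    return weight_a * a + (1.0 - weight_a) * b

# Region A: take the nearby sensor directly; a region midway could use weight_a = 0.5
pose_region_a = region_coordinate((10.0, 0.0, 0.0, 0, 0, 0), (10.0, 3.2, 0.0, 0, 0, 0), 1.0)
```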
 <Ultrasound unit>
 The ultrasound unit 12 has the receiving circuit 21, the AD conversion means 22, a reception memory 23, photoacoustic image reconstruction means 24, detection and logarithmic conversion means 27, photoacoustic image construction means 28, the control means 29, image synthesis means 38 and observation method selection means 39. The ultrasound unit 12 corresponds to the acoustic signal processing unit of the present invention.
 The control means 29 controls each part of the photoacoustic image generation apparatus 10 and, in this embodiment, includes, for example, a trigger control circuit 30. The trigger control circuit 30 sends an optical trigger signal to the laser unit 13 when, for example, the photoacoustic image generation apparatus is started up. In response, the flash lamp in the laser unit 13 is turned on and excitation of the laser rod begins. The excitation state of the laser rod is then maintained, and the laser unit 13 becomes ready to output laser light.
 The control means 29 then transmits a Qsw trigger signal from the trigger control circuit 30 to the laser unit 13; that is, the control means 29 controls the output timing of the laser light from the laser unit 13 with this Qsw trigger signal. The Qsw trigger signal may be transmitted at fixed time intervals, or at fixed coordinate intervals based on the coordinates obtained from the coordinate acquisition unit. In this embodiment, the control means 29 also transmits a sampling trigger signal to the AD conversion means 22 at the same time as the Qsw trigger signal. The sampling trigger signal serves as the cue for the AD conversion means 22 to start sampling the photoacoustic signals. Using the sampling trigger signal in this way makes it possible to sample the photoacoustic signals in synchronization with the output of the laser light.
 Furthermore, at the same time as transmitting the Qsw trigger signal, the control means 29 acquires the coordinates of the probe 11 from the coordinate acquisition control unit 15 (more precisely, the coordinates of the element region, taking into account the distance between the magnetic sensor and the detection element array 20a), and sets, based on those coordinates, a representative coordinate for the detection region in which the photoacoustic waves are being detected at that time. The control means 29 thus corresponds to the coordinate setting unit of the present invention. This makes it possible to synchronize the three timings of laser light output, photoacoustic wave detection for each detection region, and coordinate setting. If there is a lag in the operation of the coordinate acquisition unit, the control means 29 instructs the coordinate acquisition unit to acquire the coordinates before transmitting the Qsw trigger signal. The representative coordinate is a coordinate that represents the spatial position of a detection region, and is set for each detection region as described above. Information on the set representative coordinate is transmitted to the reception memory 23 and stored in association with the photoacoustic signals obtained in that detection region. The representative coordinate is determined based on the coordinates of the acoustic detection unit 20 acquired by the coordinate acquisition unit when the photoacoustic waves are detected. For example, the representative coordinate may be a calculated coordinate (for example, a mean, weighted mean, median or mode) computed from a plurality of coordinates acquired while the photoacoustic waves are being detected in the detection region, or it may be one of the coordinates acquired during that detection, used as it is. This embodiment adopts the latter; the former, calculated representative coordinate is described in detail in the third embodiment.
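 The two ways of deriving a representative coordinate mentioned above can be sketched as follows; this is hypothetical illustration code, where "single" corresponds to adopting one acquired coordinate as it is (this embodiment) and "mean" corresponds to a calculated coordinate such as the average of all coordinates acquired during the detection period.

```python
import numpy as np

def representative_coordinate(poses_during_detection, mode="single"):
    """poses_during_detection: list of (x, y, z, alpha, beta, gamma) tuples acquired
    while one detection region was being measured.

    mode="single": use the first acquired coordinate as it is.
    mode="mean":   use the average of all acquired coordinates (a calculated coordinate).
    """
    poses = np.asarray(poses_during_detection, dtype=float)
    if mode == "single":
        return poses[0]
    if mode == "mean":
        return poses.mean(axis=0)
    raise ValueError("unknown mode")
```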
 For example, the control means 29 can be configured to start transmitting the Qsw trigger signal when a predetermined switch provided on the probe 11 is pressed. With this configuration, the position of the probe 11 when the switch is pressed can be treated as the starting point of the probe scan. Furthermore, if the transmission of the Qsw trigger signal is ended the next time the switch is pressed, the position of the probe 11 at that time can be treated as the end point of the probe scan.
 The receiving circuit 21 receives the photoacoustic signals detected by the probe 11. The photoacoustic signals received by the receiving circuit 21 are transmitted to the AD conversion means 22.
 The AD conversion means 22 is sampling means that samples the photoacoustic signals received by the receiving circuit 21 and converts them into digital signals. For example, the AD conversion means 22 has a sampling control unit and an AD converter. The received signals from the receiving circuit 21 are converted by the AD converter into digitized sampled signals. The AD converter is controlled by the sampling control unit and is configured to perform sampling when the sampling control unit receives the sampling trigger signal. The AD conversion means 22 samples the received signals at a predetermined sampling period based on, for example, an AD clock signal of a predetermined frequency input from outside.
 The reception memory 23 stores the photoacoustic signals sampled by the AD conversion means 22 (that is, the sampled signals described above) in association with the representative coordinate information transmitted from the control means 29, and outputs the photoacoustic signals detected by the probe 11 to the photoacoustic image reconstruction means 24.
 The photoacoustic image reconstruction means 24 sequentially reads out the photoacoustic signals obtained for each detection region from the reception memory 23 and, based on these photoacoustic signals, generates the signal data of each line of the partial image data representing that detection region, for each detection region. Specifically, the photoacoustic image reconstruction means 24 adds the 64-channel data obtained for each detection region with delay times corresponding to the positions of the acoustic detection elements and generates the signal data for one line (delay-and-sum method). Instead of the delay-and-sum method, the photoacoustic image reconstruction means 24 may perform the reconstruction by the circular back projection (CBP) method, or by using the Hough transform method or the Fourier transform method.
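 A minimal sketch of the delay-and-sum step for one image line is shown below, assuming a constant speed of sound, a one-dimensional receive aperture and one-way propagation delays appropriate for photoacoustic reception; the geometry and the function signature are illustrative and do not represent the actual implementation of the reconstruction means 24.

```python
import numpy as np

def delay_and_sum_line(signals, element_x, line_x, depths, fs, c=1540.0):
    """Form one image line by delay-and-sum.

    signals   : (n_channels, n_samples) photoacoustic signals from one detection region
    element_x : (n_channels,) lateral positions of the receiving elements [m]
    line_x    : lateral position of the image line [m]
    depths    : (n_depths,) depths at which the line is evaluated [m]
    fs        : sampling frequency [Hz]
    c         : assumed speed of sound [m/s]
    """
    n_channels, n_samples = signals.shape
    line = np.zeros(len(depths))
    for i, z in enumerate(depths):
        # one-way propagation delay from the point (line_x, z) to each element
        distances = np.sqrt((element_x - line_x) ** 2 + z ** 2)
        sample_idx = np.round(distances / c * fs).astype(int)
        valid = sample_idx < n_samples
        line[i] = signals[np.arange(n_channels)[valid], sample_idx[valid]].sum()
    return line
```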
 The detection and logarithmic conversion means 27 obtains the envelope of the signal data of each line and logarithmically converts the obtained envelope.
 The photoacoustic image construction means 28 constructs the photoacoustic image data based on the logarithmically converted signal data of each line. That is, the photoacoustic image construction means 28 converts the signal data of each line into image data and generates the partial image data for each detection region. In this embodiment, as described above, the reconstruction of the line signal data on which the partial image data is based is performed using only the photoacoustic signals obtained in the detection region to which that partial image data corresponds. In other words, the partial image data representing a given detection region is generated based only on the photoacoustic signals obtained in that detection region. FIG. 3 conceptually shows the process in which partial image data IMa representing a given detection region is generated from the 64-channel photoacoustic signals Sa obtained in that detection region by element region A, and partial image data IMb representing another detection region is generated from the 64-channel photoacoustic signals Sb obtained in that detection region by element region B. In FIG. 3, the reconstruction process from the photoacoustic signals of FIG. 3A to the partial image data of FIG. 3B is not illustrated.
 The photoacoustic image construction means 28 then generates the volume data by associating each sequentially generated item of partial image data with the representative coordinate set for the corresponding detection region. FIG. 4 is a conceptual diagram showing the partial image data stored in the volume data; the partial image data are arranged in order for each set of detection regions R corresponding to the imaging region. In this embodiment, as shown in FIG. 4, even within one set of detection regions, the partial image data IMb of the detection region that faced element region B is stored in the volume data shifted in the scanning direction of the probe 11 relative to the partial image data IMa of the detection region that faced element region A. This is because, since the probe 11 is being scanned, the position of the detection region at the time of detection by element region B is shifted in the scanning direction of the probe 11 relative to the position of the detection region at the time of detection by element region A. A set of detection regions corresponding to the imaging region means a combination of detection regions whose associated element regions differ; in this embodiment, for example, a set of detection regions R is the combination of the detection region corresponding to element region A and the detection region corresponding to element region B. The photoacoustic image construction means 28 constructs the photoacoustic image by, for example, converting the position of the photoacoustic signal (its peak portion) along the time axis into a position in the depth direction of the photoacoustic image.
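 A simplified sketch of how each item of partial image data could be placed into the volume data at its representative coordinate is shown below; it assumes an axis-aligned volume, nearest-voxel placement and translation only (probe tilt is ignored), so it illustrates the idea rather than the actual processing of the photoacoustic image construction means 28.

```python
import numpy as np

def insert_partial_image(volume, partial_image, rep_coord_mm, voxel_size_mm):
    """Place a 2-D partial image (depth x lateral) into a 3-D volume.

    volume        : (nz, ny, nx) voxel array, modified in place
    partial_image : (nz, nx_part) image of one detection region
    rep_coord_mm  : (x, y) representative coordinate of the detection region;
                    x = lateral offset, y = position along the scanning direction
    voxel_size_mm : isotropic voxel size
    Only translation is handled; orientation (alpha, beta, gamma) is ignored for simplicity.
    """
    x_mm, y_mm = rep_coord_mm
    ix = int(round(x_mm / voxel_size_mm))
    iy = int(round(y_mm / voxel_size_mm))
    nz, nx_part = partial_image.shape
    volume[:nz, iy, ix:ix + nx_part] = partial_image
```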
 The observation method selection means 39 selects the display mode of the photoacoustic image. Display modes for the volume data of the photoacoustic signals include, for example, display as a three-dimensional image, display as a cross-sectional image, and display as a graph along a predetermined axis. Which mode is used for display is selected according to the initial settings or according to input by the user from the input means 16.
 The image synthesis means 38 applies the necessary processing to the generated volume data (for example, scale correction and coloring according to voxel values).
 The photoacoustic image data generated according to the selected observation method becomes the final image (display image) to be displayed on the display means 14. In the photoacoustic image data generation method described above, it is of course also possible for the user to rotate or move the image as necessary once the photoacoustic image data has been generated.
 The procedure of the photoacoustic image generation method will now be described with reference to FIGS. 5 and 6. FIG. 5 is a flowchart showing the steps of the photoacoustic image generation method of this embodiment. FIG. 6 is a timing chart of laser light emission, photoacoustic signal detection and coordinate setting in this embodiment. In FIG. 6, LT denotes the timing of laser light emission (repetition frequency 15 Hz, that is, repetition period 15 ms), AT1, AT2, ..., ATn denote the timings and periods of photoacoustic wave detection in element region A, BT1, BT2, ..., BTn denote the timings and periods of photoacoustic wave detection in element region B, and PT denotes the timing of coordinate acquisition by the coordinate acquisition unit.
 First, the user of the apparatus 10 places the probe 11 on the subject M and presses the predetermined switch of the probe 11, whereby this point is set as the scanning start point of the probe 11 and detection of the photoacoustic waves used to generate one frame of photoacoustic image data is started for the first (i = 1) set of detection regions (STEPs 1 and 2).
 First, in the first (j = 1) detection region, which faces element region A, the laser light is emitted, and in synchronization with this the photoacoustic waves are detected and the coordinates are acquired once (STEPs 3 and 4; i = 1 and j = 1 in FIG. 6). In this embodiment, one of the coordinates acquired while the photoacoustic waves are being detected in a detection region is set as the representative coordinate of that detection region as it is, so the coordinate acquired above becomes the representative coordinate set for the first detection region. In FIG. 6, for example, the coordinates are acquired once at a timing PT synchronized with the laser emission timing LT during the photoacoustic wave detection period AT1. Next, the photoacoustic signals detected in the first detection region and the representative coordinate are stored in the memory in association with each other (STEP 5), and the partial image data representing the first detection region is generated based only on the photoacoustic signals obtained in that detection region (STEP 6). The partial image data representing the first detection region and the representative coordinate set for the first detection region are then associated with each other and stored in the volume data (STEP 7). The acoustic detection element group of the detection element array 20a is then switched to the elements belonging to element region B (STEPs 8 and 9), and photoacoustic waves are detected in the second (j = 2) detection region, which faces element region B. STEPs 4 to 7 are repeated as for the first detection region, and if the scanning of the probe 11 has not yet finished, the acoustic detection element group of the detection element array 20a is switched back to the elements belonging to element region A (STEPs 10 and 11) and detection of the photoacoustic waves used to generate one frame of photoacoustic image data is started for the second (i = 2) set of detection regions (STEP 2). The above procedure is repeated until the scanning of the probe 11 finishes, and when the scanning of the probe 11 has finished, the detection of photoacoustic waves ends. FIG. 6, for example, shows the state in which photoacoustic waves have been detected up to the n-th (i = n) set of detection regions. The end of the scanning of the probe 11 can be determined, for example, from the next press of the predetermined switch of the probe 11, or from automatic detection that the scanning speed of the probe 11 has dropped to zero.
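 The flow of FIG. 5 can be summarized by the following skeleton, in which every hardware or signal-processing operation is a placeholder method on an object supplied by the caller; it is a hypothetical outline of the loop structure, not an implementation of the apparatus.

```python
def scan_and_build_volume(probe, laser, coordinates, processor, volume):
    """Skeleton of the acquisition flow of FIG. 5 (STEPs 1 to 11); all callables are placeholders."""
    element_regions = ["A", "B"]                      # two element regions in this embodiment
    while not probe.scan_finished():                  # repeat until the probe scan ends (STEP 10)
        for region in element_regions:                # j-th detection region in the i-th set
            probe.select_region(region)               # multiplexer connects the region (STEPs 8/9)
            laser.fire()                              # Qsw trigger -> laser emission (STEP 3)
            signal = probe.detect()                   # photoacoustic detection (STEP 4)
            coord = coordinates.read()                # coordinate acquired in synchronization (STEP 4)
            processor.store(signal, coord)            # save signal with representative coordinate (STEP 5)
            partial = processor.reconstruct(signal)   # partial image data from this region only (STEP 6)
            processor.insert(volume, partial, coord)  # place it in the volume at the coordinate (STEP 7)
```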
 As described above, in the photoacoustic image generation apparatus and the photoacoustic image generation method according to this embodiment, the photoacoustic waves are detected and a representative coordinate is set for each detection region, and the volume data is generated by associating the partial image data representing each detection region with the representative coordinate set for that detection region. The position of the photoacoustic image data within the volume data can therefore be determined more accurately than by simply arranging one-frame photoacoustic image data generated from the photoacoustic signals obtained for each detection region (that is, at different times). This is because the present invention makes it possible to generate accurate volume data that reflects the displacement of each detection region caused by the scanning of the probe 11. For example, in the case of FIG. 6, if a magnetic sensor unit with a coordinate acquisition period of 5 ms is used, the representative coordinate associated with a piece of partial image data can be matched to the actual coordinates of the corresponding detection region to within 5 ms. By contrast, when one-frame photoacoustic image data generated from the photoacoustic signals obtained for each detection region are simply arranged, the accuracy of this matching in the example of FIG. 6 is only about 35 ms. As a result, the structure inside the subject can be represented more accurately even when the volume data is generated based on photoacoustic signals detected in a plurality of separate acquisitions.
 "Second Embodiment"
 Next, a second embodiment of the present invention will be described in detail. This embodiment differs from the first embodiment in that the ultrasound unit (acoustic signal processing unit) generates the partial image data representing a given detection region based on the photoacoustic signals obtained in that detection region and the photoacoustic signals obtained in other detection regions. Detailed descriptions of the components that are the same as in the first embodiment are therefore omitted unless particularly necessary. FIG. 7 is a conceptual diagram showing the process of generating partial image data in this embodiment.
 As shown in FIG. 1, the photoacoustic image generation apparatus 10 of this embodiment also includes the probe 11, the ultrasound unit 12, the laser unit 13, the display means 14, the coordinate acquisition unit (15, 41 and 42) and the input means 16.
 <Probe>
 In this embodiment, the element regions of the detection element array 20a are preferably clearly separated from one another, as in FIG. 2A or FIG. 2B.
 <Ultrasound unit>
 The ultrasound unit 12 has the receiving circuit 21, the AD conversion means 22, the reception memory 23, the photoacoustic image reconstruction means 24, the detection and logarithmic conversion means 27, the photoacoustic image construction means 28, the control means 29, the image synthesis means 38 and the observation method selection means 39. In this embodiment, the ultrasound unit 12 generates the partial image data representing a given detection region based on the photoacoustic signals obtained in that detection region and the photoacoustic signals obtained in other detection regions.
 As in the first embodiment, the photoacoustic wave is detected for each detection region; in this embodiment, however, the photoacoustic image reconstruction means 24 does not reconstruct the photoacoustic signals until photoacoustic wave detection has been completed for an entire set of detection regions. As shown in FIG. 7, after the photoacoustic signals Sa and Sb (64 channels each) have been obtained for the entire set of detection regions (the detection region facing element region A and the detection region facing element region B) (A in FIG. 7), the photoacoustic image reconstruction means 24 arranges these photoacoustic signals in the order of the element regions and combines them into one (B in FIG. 7). The photoacoustic image reconstruction means 24 then reconstructs the photoacoustic signals using the entire combined photoacoustic signal (128 channels) so as to generate the signal data on which one frame of the photoacoustic image is based. Specifically, if the portion of the combined 128-channel signal corresponding to photoacoustic signal Sa is taken as channels 1 to 64 and the portion corresponding to photoacoustic signal Sb as channels 65 to 128, reconstruction is performed with channel combinations such as 1 to 64 ch, 2 to 65 ch, 3 to 66 ch, ..., 65 to 128 ch. In FIG. 7, the reconstruction step for generating the photoacoustic image data of C in FIG. 7 from the photoacoustic signals of B in FIG. 7 is not illustrated.
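 As a non-limiting illustration of this sliding channel combination, the following Python sketch assumes the combined 128-channel signal is held as a NumPy array; the function name, array shapes, and the use of a plain sum in place of the actual delay-and-sum reconstruction are assumptions made only for explanation.

```python
import numpy as np

def reconstruct_lines(merged_rf, aperture=64):
    # merged_rf: (n_channels, n_samples) array holding Sa in rows 0-63 and Sb in rows 64-127.
    # Each output line uses a sliding sub-aperture of `aperture` adjacent channels,
    # i.e. channels 1-64, 2-65, ..., 65-128 in the text's 1-based numbering.
    n_channels, _ = merged_rf.shape
    lines = []
    for start in range(n_channels - aperture + 1):
        sub = merged_rf[start:start + aperture]   # one channel combination
        lines.append(sub.sum(axis=0))             # a plain sum stands in for delay-and-sum
    return np.stack(lines)                        # 65 lines for a 128-channel input

# Example: lines = reconstruct_lines(np.random.randn(128, 1024))
```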
 Note that when generating the reconstructed signal data for a given detection region, it is not always necessary to combine all of the photoacoustic signals obtained for one set of detection regions. For example, when generating the reconstructed signal data for a given detection region, the photoacoustic signals may be combined so as to include only those obtained in the detection regions immediately before and after that detection region in the time series. That is, when the element region is divided into three as shown in B of FIG. 2, the photoacoustic signals obtained in the detection regions facing element regions A and B may be combined when generating the reconstructed signal data for the detection region facing element region A; the photoacoustic signals obtained in the detection regions facing element regions A, B and C may be combined when generating the reconstructed signal data for the detection region facing element region B; and the photoacoustic signals obtained in the detection regions facing element regions B and C may be combined when generating the reconstructed signal data for the detection region facing element region C.
 By reconstructing the photoacoustic signals using the entire combined photoacoustic signal in this way, the signal data near the boundaries between detection regions can be generated more accurately. The photoacoustic image reconstruction means 24 transmits the signal data of each line obtained by these reconstructions to the detection/logarithmic conversion means 27.
 The photoacoustic image construction means 28 generates one frame of photoacoustic image data IM based on the signal data received from the detection/logarithmic conversion means 27 (C in FIG. 7). However, because the photoacoustic image construction means 28 processes and manages the partial image data IMa and IMb for each detection region, it divides the one-frame photoacoustic image data IM into the partial image data IMa and IMb when storing the image data in the volume data (D in FIG. 7). As in the first embodiment, volume data can thereby be generated by associating the partial image data representing each detection region with the representative coordinates set for that detection region.
 The procedure of the photoacoustic image generation method will now be described with reference to FIG. 8, which is a flowchart showing the steps of the photoacoustic image generation method of this embodiment. The timing chart for laser light emission, photoacoustic signal detection, and coordinate setting in this embodiment is the same as that of FIG. 6.
 First, the user of the apparatus 10 places the probe 11 on the subject M and presses a predetermined switch on the probe 11; this point is thereby set as the scanning start point of the probe 11, and detection of the photoacoustic waves used to generate one frame of photoacoustic image data is started for the first (i = 1) set of detection regions (STEPs 21 and 22).
 In the first (j = 1) detection region, which faces element region A, laser light is emitted, and in synchronization with this the photoacoustic wave is detected and the coordinates are acquired once (STEPs 23 and 24; i = 1 and j = 1 in FIG. 6). Next, the photoacoustic signal detected in the first detection region and the representative coordinates are associated with each other and stored in memory (STEP 25). The acoustic detection element group in the detection element array 20a is then switched to the elements belonging to element region B (STEPs 26 and 27), and photoacoustic wave detection in the second (j = 2) detection region, which faces element region B, is performed in the same manner as in the first detection region (STEPs 24 and 25). When detection in the first set of detection regions has been completed, one frame of photoacoustic image data is constructed based on all of the photoacoustic signals obtained for that set of detection regions (STEPs 26 and 28). The one-frame photoacoustic image data is then divided into partial image data according to each detection region, and each piece of partial image data is stored in the volume data in association with the representative coordinates set for the corresponding detection region (STEPs 29 and 30). If the scanning of the probe 11 has not been completed, the acoustic detection element group in the detection element array 20a is switched back to the elements belonging to element region A, and detection of the photoacoustic waves used to generate one frame of photoacoustic image data is started for the second (i = 2) set of detection regions (STEPs 31, 32, 22 and 23). The above procedure is repeated until the scanning of the probe 11 is completed; when the scanning of the probe 11 is completed, photoacoustic wave detection ends.
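 The following Python sketch condenses this acquisition loop as a non-limiting illustration; the random signals, the absolute value standing in for reconstruction and detection, and all variable names are assumptions for explanation only and do not correspond to the actual hardware or processing of the embodiment.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sets, regions, ch, samples = 3, ("A", "B"), 64, 256

volume = []                                          # list of (representative coords, partial image)
probe_x = 0.0
for i in range(n_sets):                              # one set of detection regions per frame
    signals, coords = [], []
    for region in regions:                           # switch element group, fire laser, detect
        rf = rng.standard_normal((ch, samples))      # stands in for the detected 64-ch signal
        coords.append(np.array([probe_x, 0.0, 0.0])) # coordinates read during this detection
        signals.append(rf)
        probe_x += 0.5                               # the probe keeps moving between regions
    merged = np.concatenate(signals, axis=0)         # 128-ch input for one-frame reconstruction
    frame = np.abs(merged)                           # placeholder for reconstruction + detection
    partials = np.split(frame, len(regions), axis=0) # split the frame back into IMa, IMb
    volume.extend(zip(coords, partials))             # store each partial with its own coordinates
```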
 As described above, in the photoacoustic image generation device and photoacoustic image generation method according to this embodiment as well, a photoacoustic wave is detected and representative coordinates are set for each detection region, and volume data is generated by associating the partial image data representing each detection region with the representative coordinates set for that detection region; the same effects as in the first embodiment are therefore obtained.
 “Third Embodiment”
 Next, a third embodiment of the present invention will be described in detail. This embodiment differs from the first embodiment in that the control means 29 (coordinate setting unit) sets, as the representative coordinates of a given detection region, calculated coordinates computed from a plurality of coordinates acquired while the photoacoustic wave is being detected in that detection region. Detailed descriptions of components identical to those of the first embodiment are therefore omitted unless otherwise necessary. FIG. 9 is a timing chart for laser light emission, photoacoustic signal detection, and coordinate setting in this embodiment.
 As shown in FIG. 1, the photoacoustic image generation apparatus 10 of this embodiment also includes the probe 11, the ultrasonic unit 12, the laser unit 13, the display means 14, the coordinate acquisition unit (15, 41 and 42), and the input means 16.
 Based on the coordinates acquired at, for example, timings p1 to p4 within the detection period AT1 for photoacoustic wave detection in element region A, the control means 29 calculates, for example, the mean, weighted mean, median or mode of these coordinates, and sets this calculated value (calculated coordinates) as the representative coordinates. By obtaining the representative coordinates from a plurality of coordinates in this way, noise in the coordinate information is removed, and the representative coordinates match the actual position of the detection region more accurately.
 The control means 29 may also use coordinates acquired immediately before and after the period in which the photoacoustic wave is detected in a given detection region (for example, at timing p0 in FIG. 9) when computing the calculated coordinates. Increasing the amount of coordinate information used in the calculation removes more of the noise in the coordinate information and further improves the accuracy with which the representative coordinates match the actual position of the detection region. To acquire coordinate information before the period in which the photoacoustic wave is detected, the coordinate acquisition timing is preferably synchronized with, for example, the Qsw trigger signal for laser emission.
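 As a non-limiting illustration of computing the calculated coordinates from the readings at p0 to p4, the following Python sketch uses made-up coordinate values and arbitrary weights; only the mean, median and weighted mean named above are shown.

```python
import numpy as np

# Coordinates (x, y, z) read from the magnetic sensor at p0-p4; p0 falls just
# before the detection period AT1 and p1-p4 fall inside it. Values are made up.
samples = np.array([
    [10.0, 0.0, 0.0],   # p0
    [10.4, 0.0, 0.1],   # p1
    [10.9, 0.1, 0.0],   # p2
    [11.6, 0.0, 0.0],   # p3  (a slightly noisy reading)
    [11.9, 0.1, 0.1],   # p4
])

rep_mean     = samples.mean(axis=0)                  # simple average
rep_median   = np.median(samples, axis=0)            # robust to a single bad reading
weights      = np.array([0.5, 1.0, 1.0, 1.0, 1.0])   # e.g. down-weight the sample taken outside AT1
rep_weighted = np.average(samples, axis=0, weights=weights)
```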
 In the above description, the ultrasonic unit (acoustic signal processing unit) generates the partial image data representing a given detection region based only on the photoacoustic signal obtained in that detection region; however, this embodiment is also applicable when the ultrasonic unit (acoustic signal processing unit) generates the partial image data representing a given detection region based on both the photoacoustic signals obtained in that detection region and the photoacoustic signals obtained in other detection regions.
 “Fourth Embodiment”
 Next, a fourth embodiment of the present invention will be described in detail. FIG. 10 is a block diagram showing the configuration of the photoacoustic image generation apparatus 10 of this embodiment. This embodiment differs from the first embodiment in that an ultrasound image is generated in addition to the photoacoustic image. Detailed descriptions of components identical to those of the first embodiment are therefore omitted unless otherwise necessary.
 As shown in FIG. 1, the photoacoustic image generation apparatus 10 of this embodiment also includes the probe 11, the ultrasonic unit 12, the laser unit 13, the display means 14, the coordinate acquisition unit (15, 41 and 42), and the input means 16.
 <Ultrasonic unit>
 In addition to the configuration of the photoacoustic image generation apparatus shown in FIG. 1, the ultrasonic unit 12 of this embodiment includes the transmission control circuit 33, the data separation means 34, the ultrasound image reconstruction means 35, the detection/logarithmic conversion means 36, and the ultrasound image construction means 37.
 In this embodiment, in addition to detecting photoacoustic signals, the probe 11 outputs (transmits) ultrasonic waves toward the subject and detects (receives) the ultrasonic waves reflected from the subject in response to the transmitted ultrasonic waves. As the acoustic detection elements used to transmit and receive the ultrasonic waves, the acoustic detection element array described above may be used, or a separate acoustic detection element array provided in the probe 11 for ultrasound transmission and reception may be used. Ultrasound transmission and reception may also be separated; for example, ultrasonic waves may be transmitted from a position different from that of the probe 11, and the ultrasonic waves reflected in response to the transmitted waves may be received by the probe 11.
 When an ultrasound image is to be generated, the trigger control circuit 30 sends an ultrasound transmission trigger signal instructing the transmission control circuit 33 to transmit ultrasonic waves. Upon receiving this trigger signal, the transmission control circuit 33 causes the probe 11 to transmit ultrasonic waves. After the ultrasonic waves are transmitted, the probe 11 detects the ultrasonic waves reflected from the subject.
 The reflected ultrasonic waves detected by the probe 11 are input to the AD conversion means 22 via the reception circuit 21. The trigger control circuit 30 sends a sampling trigger signal to the AD conversion means 22 in synchronization with the ultrasound transmission timing to start sampling of the reflected ultrasonic waves. Here, the reflected ultrasonic waves travel a round trip between the probe 11 and the ultrasound reflection position, whereas the photoacoustic signal travels one way from its generation position to the probe 11. Because detecting the reflected ultrasonic waves from a given depth takes twice as long as detecting a photoacoustic signal generated at the same depth, the sampling clock of the AD conversion means 22 may be set to half that used for photoacoustic signal sampling, for example 20 MHz. The AD conversion means 22 stores the sampled reflected-ultrasound signal in the reception memory 23. Either the photoacoustic signal or the reflected ultrasonic waves may be sampled first.
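 As a non-limiting illustration of why halving the sampling clock keeps the two data sets depth-aligned, the following sketch assumes a 40 MHz photoacoustic clock (implied by the 20 MHz example above) and a typical soft-tissue speed of sound; these numbers are assumptions, not values specified by the embodiment.

```python
# Illustrative numbers only: the speed of sound c and the 40 MHz / 20 MHz clocks
# are assumptions consistent with the halved-clock explanation above.
c = 1540.0            # m/s, typical soft-tissue speed of sound
fs_pa = 40e6          # photoacoustic sampling clock (one-way propagation)
fs_us = 20e6          # reflected-ultrasound sampling clock (round trip)

def depth_pa(sample_index):
    return c * sample_index / fs_pa          # one-way: depth = c * t

def depth_us(sample_index):
    return c * sample_index / fs_us / 2.0    # round trip: depth = c * t / 2

# The same sample index maps to the same depth in both modes,
# which is why halving the clock keeps the two data sets aligned.
print(depth_pa(100), depth_us(100))          # both 0.00385 m
```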
 The data separation means 34 separates the sampled photoacoustic signal and the sampled reflected-ultrasound signal stored in the reception memory 23. The data separation means 34 inputs the separated sampled photoacoustic signal to the photoacoustic image reconstruction means 24; the photoacoustic image is generated in the same way as in the first embodiment. Meanwhile, the data separation means 34 inputs the separated sampled reflected-ultrasound signal to the ultrasound image reconstruction means 35.
 The ultrasound image reconstruction means 35 generates the data of each line of the ultrasound image based on the reflected ultrasonic waves (their sampled signals) detected by the plurality of acoustic detection elements of the probe 11. As with the generation of each line of data in the photoacoustic image reconstruction means 24, a delay-and-sum method or the like can be used to generate the data of each line. The detection/logarithmic conversion means 36 obtains the envelope of the data of each line output by the ultrasound image reconstruction means 35 and logarithmically transforms the obtained envelope.
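 The following Python sketch shows one way a delay-and-sum line could be formed, as a non-limiting illustration; the precomputed integer delays, the absolute value standing in for envelope detection, and the function name are assumptions made only for explanation.

```python
import numpy as np

def delay_and_sum_line(rf, delays_in_samples):
    # rf: (n_channels, n_samples) sampled signals from the acoustic detection elements.
    # delays_in_samples: (n_channels, n_depths) integer delay of each channel for each
    # depth sample of the line (geometry-dependent; computing these delays is not shown).
    n_channels, n_depths = delays_in_samples.shape
    line = np.zeros(n_depths)
    for ch in range(n_channels):
        idx = np.clip(delays_in_samples[ch], 0, rf.shape[1] - 1)
        line += rf[ch, idx]                      # align each channel, then sum
    envelope = np.abs(line)                      # crude stand-in for envelope detection
    return 20.0 * np.log10(envelope + 1e-12)     # logarithmic compression
```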
 The ultrasound image construction means 37 generates an ultrasound image based on the logarithmically transformed data of each line.
 The image synthesis means 38 synthesizes, for example, the photoacoustic image and the ultrasound image; for example, it performs image synthesis by superimposing the photoacoustic image on the ultrasound image. The synthesized image is displayed on the display means 14. It is also possible to display the photoacoustic image and the ultrasound image side by side on the display means 14 without image synthesis, or to switch the display between the photoacoustic image and the ultrasound image.
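 As a non-limiting illustration of superimposition, the following sketch alpha-blends a photoacoustic image onto an ultrasound image; the red colormap, threshold and blending factor are arbitrary choices, not part of the embodiment.

```python
import numpy as np

def overlay(us_img, pa_img, alpha=0.6, threshold=0.2):
    # us_img, pa_img: 2-D arrays normalized to [0, 1] and already co-registered.
    # Only photoacoustic pixels above `threshold` are blended in, so the
    # ultrasound background stays visible.
    us_rgb = np.stack([us_img] * 3, axis=-1)              # grayscale ultrasound
    pa_rgb = np.zeros_like(us_rgb)
    pa_rgb[..., 0] = pa_img                                # photoacoustic shown in red
    mask = (pa_img > threshold)[..., None]
    return np.where(mask, (1 - alpha) * us_rgb + alpha * pa_rgb, us_rgb)
```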
 As described above, in the photoacoustic image generation device and photoacoustic image generation method according to this embodiment as well, a photoacoustic wave is detected and representative coordinates are set for each detection region, and volume data is generated by associating the partial image data representing each detection region with the representative coordinates set for that detection region; the same effects as in the first embodiment are therefore obtained.
 Furthermore, the photoacoustic measurement apparatus of this embodiment generates an ultrasound image in addition to the photoacoustic image. By referring to the ultrasound image, portions that cannot be imaged in the photoacoustic image can therefore be observed.

Claims (20)

  1.  A photoacoustic image generation device that detects a photoacoustic wave generated in a subject and generates a photoacoustic image based on a photoacoustic signal of the photoacoustic wave, the device comprising:
     an acoustic detection unit having a plurality of acoustic detection elements, the acoustic detection unit dividing an imaging region corresponding to the plurality of acoustic detection elements into a plurality of detection regions and detecting the photoacoustic wave for each detection region while sequentially selecting partial groups of acoustic detection elements that detect the photoacoustic wave in parallel;
     a coordinate acquisition unit that acquires the coordinates of the acoustic detection unit in space;
     a coordinate setting unit that sets, for each detection region, representative coordinates representing the coordinates of that detection region, based on the coordinates of the acoustic detection unit acquired by the coordinate acquisition unit when the photoacoustic wave is detected; and
     an acoustic signal processing unit that generates volume data of the photoacoustic image by associating partial image data representing each detection region, among the photoacoustic image data generated based on the photoacoustic signal, with the representative coordinates set for that detection region.
  2.  The photoacoustic image generation device according to claim 1, wherein the acoustic signal processing unit generates the partial image data representing a given detection region based on the photoacoustic signal obtained in that detection region and a photoacoustic signal obtained in another detection region.
  3.  The photoacoustic image generation device according to claim 2, wherein the acoustic signal processing unit generates the partial image data based on the photoacoustic signals obtained in a set of detection regions corresponding to the imaging region.
  4.  The photoacoustic image generation device according to claim 1, wherein the acoustic signal processing unit generates the partial image data representing a given detection region based only on the photoacoustic signal obtained in that detection region.
  5.  The photoacoustic image generation device according to any one of claims 1 to 4, wherein the coordinate setting unit sets, as the representative coordinates of a given detection region, calculated coordinates computed from a plurality of coordinates acquired while the photoacoustic wave is being detected in that detection region.
  6.  The photoacoustic image generation device according to claim 5, wherein the coordinate setting unit also uses coordinates acquired immediately before and after the period in which the photoacoustic wave is detected in the detection region when computing the calculated coordinates.
  7.  The photoacoustic image generation device according to any one of claims 1 to 4, wherein the coordinate setting unit sets, as the representative coordinates of a given detection region, one of the coordinates acquired while the photoacoustic wave is being detected in that detection region, without modification.
  8.  The photoacoustic image generation device according to any one of claims 1 to 7, wherein the plurality of acoustic detection elements constituting each acoustic detection element group are contiguous.
  9.  The photoacoustic image generation device according to claim 8, wherein the coordinate acquisition unit has a plurality of reading points at which coordinates are read, and the coordinate setting unit sets the representative coordinates based on the coordinates read at the plurality of reading points.
  10.  The photoacoustic image generation device according to claim 9, wherein the plurality of reading points are provided so as to correspond to the respective acoustic detection element groups.
  11.  The photoacoustic image generation device according to any one of claims 1 to 7, wherein there are N acoustic detection element groups, and the n-th acoustic detection element group is composed of the n-th, (N+n)-th, (2N+n)-th, ..., ((Q-2)N+n)-th and ((Q-1)N+n)-th acoustic detection elements.
     (Q represents the quotient obtained by dividing the total number of acoustic detection elements of the acoustic detection unit by N.)
  12.  The photoacoustic image generation device according to any one of claims 1 to 11, wherein the acoustic detection unit detects reflected ultrasonic waves with respect to ultrasonic waves transmitted to the subject, and the acoustic signal processing unit generates an ultrasound image based on an ultrasound signal of the reflected ultrasonic waves.
  13.  A photoacoustic image generation method for detecting a photoacoustic wave generated in a subject and generating a photoacoustic image based on a photoacoustic signal of the photoacoustic wave, the method comprising:
     using an acoustic detection unit having a plurality of acoustic detection elements, dividing an imaging region corresponding to the plurality of acoustic detection elements into a plurality of detection regions and detecting the photoacoustic wave for each detection region while sequentially selecting partial groups of acoustic detection elements that detect the photoacoustic wave in parallel;
     setting, for each detection region, representative coordinates representing the coordinates of that detection region, based on the coordinates of the acoustic detection unit in space when the photoacoustic wave is detected; and
     generating volume data of the photoacoustic image by associating partial image data representing each detection region, among the photoacoustic image data generated based on the photoacoustic signal, with the representative coordinates set for that detection region.
  14.  The photoacoustic image generation method according to claim 13, wherein the partial image data representing a given detection region is generated based on the photoacoustic signal obtained in that detection region and a photoacoustic signal obtained in another detection region.
  15.  The photoacoustic image generation method according to claim 14, wherein the partial image data is generated based on the photoacoustic signals obtained in a set of detection regions corresponding to the imaging region.
  16.  The photoacoustic image generation method according to claim 13, wherein the partial image data representing a given detection region is generated based only on the photoacoustic signal obtained in that detection region.
  17.  The photoacoustic image generation method according to any one of claims 13 to 16, wherein calculated coordinates computed from a plurality of coordinates acquired while the photoacoustic wave is being detected in a given detection region are set as the representative coordinates of that detection region.
  18.  The photoacoustic image generation method according to claim 17, wherein coordinates acquired immediately before and after the period in which the photoacoustic wave is detected in the detection region are also used in computing the calculated coordinates.
  19.  The photoacoustic image generation method according to any one of claims 13 to 16, wherein one of the coordinates acquired while the photoacoustic wave is being detected in a given detection region is set, without modification, as the representative coordinates of that detection region.
  20.  The photoacoustic image generation method according to any one of claims 13 to 19, wherein the plurality of acoustic detection elements constituting each acoustic detection element group are contiguous.
PCT/JP2013/005497 2012-09-28 2013-09-18 Photoacoustic image generation device, and photoacoustic image generation method WO2014050020A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012215434A JP2014068701A (en) 2012-09-28 2012-09-28 Photoacoustic image generation device and photoacoustic image generation method
JP2012-215434 2012-09-28

Publications (1)

Publication Number Publication Date
WO2014050020A1 true WO2014050020A1 (en) 2014-04-03

Family

ID=50387463

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/005497 WO2014050020A1 (en) 2012-09-28 2013-09-18 Photoacoustic image generation device, and photoacoustic image generation method

Country Status (2)

Country Link
JP (1) JP2014068701A (en)
WO (1) WO2014050020A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019039036A1 (en) * 2017-08-24 2019-02-28 富士フイルム株式会社 Photoacoustic image generation device
JP7020954B2 (en) 2018-02-13 2022-02-16 キヤノン株式会社 Image processing device, image processing method, program

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10295691A (en) * 1997-04-25 1998-11-10 Aloka Co Ltd Ultrasonic diagnostic device
JP2012005623A (en) * 2010-06-24 2012-01-12 Fujifilm Corp Method and apparatus for imaging biological data
JP2012135610A (en) * 2010-12-10 2012-07-19 Fujifilm Corp Probe for photoacoustic inspection and photoacoustic inspection device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016070115A1 (en) * 2014-10-30 2016-05-06 Seno Medical Instruments, Inc. Opto-acoustic imaging system with detection of relative orientation of light source and acoustic receiver using acoustic waves
US10539675B2 (en) 2014-10-30 2020-01-21 Seno Medical Instruments, Inc. Opto-acoustic imaging system with detection of relative orientation of light source and acoustic receiver using acoustic waves
WO2017138541A1 (en) * 2016-02-08 2017-08-17 Canon Kabushiki Kaisha Information acquiring apparatus and display method
CN114636672A (en) * 2022-05-11 2022-06-17 之江实验室 Photoacoustic and ultrasonic multiplexing acquisition system and method

Also Published As

Publication number Publication date
JP2014068701A (en) 2014-04-21

Similar Documents

Publication Publication Date Title
JP5779169B2 (en) Acoustic image generating apparatus and method for displaying progress when generating image using the same
JP5448918B2 (en) Biological information processing device
JP6525565B2 (en) Object information acquisition apparatus and object information acquisition method
JP5626903B2 (en) Catheter-type photoacoustic probe and photoacoustic imaging apparatus provided with the same
WO2012132302A1 (en) Photoacoustic imaging method and device
WO2014050020A1 (en) Photoacoustic image generation device, and photoacoustic image generation method
JP6177530B2 (en) Doppler measuring device and doppler measuring method
JP5694991B2 (en) Photoacoustic imaging method and apparatus
JP6545190B2 (en) Photoacoustic apparatus, signal processing apparatus, signal processing method, program
WO2012157221A1 (en) Tomographic image generating device, method, and program
JP6289050B2 (en) SUBJECT INFORMATION ACQUISITION DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM
JP5683383B2 (en) Photoacoustic imaging apparatus and method of operating the same
JP5936559B2 (en) Photoacoustic image generation apparatus and photoacoustic image generation method
JP6742745B2 (en) Information acquisition device and display method
JP6742734B2 (en) Object information acquisition apparatus and signal processing method
WO2013080539A1 (en) Photoacoustic image generator device and photoacoustic image generator method
WO2012114695A1 (en) Photoacoustic image generation device
JP2002257803A (en) Method and apparatus for imaging ultrasonic wave
CN118019497A (en) Image generation method, image generation program, and image generation device
JP5722182B2 (en) Photoacoustic imaging apparatus and photoacoustic imaging method
JP2014161428A (en) Photoacoustic measuring apparatus and photoacoustic measuring method
JP5839688B2 (en) Photoacoustic image processing apparatus and method
JP7077384B2 (en) Subject information acquisition device
JP5868458B2 (en) measuring device
JP2014023680A (en) Subject information acquiring device, and control method and presentation method for the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13842303

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13842303

Country of ref document: EP

Kind code of ref document: A1