WO2020082269A1 - Imaging method and imaging system - Google Patents

Imaging method and imaging system

Info

Publication number
WO2020082269A1
WO2020082269A1 (PCT/CN2018/111679, CN2018111679W)
Authority
WO
WIPO (PCT)
Prior art keywords
target tissue
photoacoustic
image
volume data
ultrasound
Prior art date
Application number
PCT/CN2018/111679
Other languages
English (en)
French (fr)
Inventor
姜玉新
李建初
杨萌
杨芳
朱磊
苏娜
王铭
唐鹤文
张睿
唐天虹
Original Assignee
中国医学科学院北京协和医院
深圳迈瑞生物医疗电子股份有限公司
Priority date
Filing date
Publication date
Application filed by 中国医学科学院北京协和医院 and 深圳迈瑞生物医疗电子股份有限公司
Priority to PCT/CN2018/111679 (WO2020082269A1)
Priority to CN201880055971.2A (CN111727013B)
Publication of WO2020082269A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves

Definitions

  • This application relates to the field of medical devices, and in particular to an imaging method and an imaging system.
  • Photoacoustic imaging (PAI) is a new type of biomedical imaging technology.
  • The principle of PAI is based on the photoacoustic effect.
  • When biological tissue is irradiated with short laser pulses, for example on the order of nanoseconds (ns), substances with strong optical absorption, such as blood, heat locally and expand thermally after absorbing the light energy, thereby generating photoacoustic signals that propagate outward.
  • The photoacoustic signal generated by the irradiated tissue can be detected by an ultrasonic probe, and a corresponding reconstruction algorithm can then recover the position and shape of the absorber, that is, of the substance with strong optical absorption.
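The reconstruction step can be illustrated with a toy delay-and-sum style search, which is one common family of photoacoustic reconstruction algorithms (the patent does not name a specific algorithm; all positions, sensor counts, and function names below are hypothetical):

```python
# Illustrative sketch, not the patent's method: locate a point absorber by
# matching photoacoustic arrival times over a grid of candidate positions.

SOUND_SPEED = 1540.0  # m/s, a typical soft-tissue value

def arrival_time(sensor, source, c=SOUND_SPEED):
    """One-way travel time of the photoacoustic wave from source to sensor."""
    dx = sensor[0] - source[0]
    dy = sensor[1] - source[1]
    return (dx * dx + dy * dy) ** 0.5 / c

def das_score(candidate, sensors, measured_times, c=SOUND_SPEED, tol=1e-7):
    """Count how many sensors' measured delays are consistent with candidate."""
    return sum(
        1
        for s, t in zip(sensors, measured_times)
        if abs(arrival_time(s, candidate, c) - t) < tol
    )

# Simulated data: absorber at (0.01, 0.02) m, three sensors on the surface y=0.
sensors = [(-0.01, 0.0), (0.0, 0.0), (0.01, 0.0)]
true_source = (0.01, 0.02)
times = [arrival_time(s, true_source) for s in sensors]

# Coarse 1 cm candidate grid; only the true position matches every sensor.
grid = [(x * 0.01, y * 0.01) for x in range(-2, 3) for y in range(1, 4)]
best = max(grid, key=lambda p: das_score(p, sensors, times))
print(best)  # the grid point at the true absorber position
```

Practical reconstructions back-project weighted signal amplitudes over a fine 3D grid rather than counting exact time matches, but the geometry, the shared speed of sound, and the per-sensor delay computation are the same.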
  • Photoacoustic imaging combines the advantages of optics and ultrasound and has unique strengths in the early diagnosis and prognosis evaluation of some major diseases; it is a new imaging technology with large clinical and industrial prospects. Limited by the ability of light to penetrate biological tissue, its application focuses on relatively shallow organs. Photoacoustic imaging reflects functional information of the organism, while traditional ultrasound imaging reflects structural information; effectively combining the two, that is, photoacoustic-ultrasound dual-mode imaging, overcomes the shortcomings of single-mode imaging and can provide more comprehensive information on tissue structure and function.
  • Three-dimensional photoacoustic-ultrasound imaging draws on the traditional ultrasonic three-dimensional scanning method: a mechanical device drives the photoacoustic-ultrasonic composite probe to move in a certain direction to collect three-dimensional (3D) data, and the 3D data is then rendered and displayed so that the operator can observe tissue structure and function three-dimensionally.
  • The target tissue, that is, the biological tissue, can be displayed by adjusting the viewing angle and transparency.
  • Ultrasound images are gray-scale only and cannot intuitively display the target tissue, which makes observation inconvenient. For example, the gray values of a lesion and of normal tissue may be very close, so a lesion cannot be clearly observed or effectively displayed in the image obtained by three-dimensional photoacoustic-ultrasound imaging alone.
  • the present application provides an imaging method and imaging system for improving the intuitiveness of an image.
  • A first aspect of an embodiment of the present application provides an imaging method, including: emitting laser light to a target body and receiving a photoacoustic signal returned from the target body; emitting ultrasonic waves to the target body and receiving the ultrasonic echo signal returned from the target body; obtaining photoacoustic volume data from the photoacoustic signal and ultrasound volume data from the ultrasonic echo signal; determining the boundary of the target tissue in the ultrasound volume data; rendering the ultrasound volume data according to the boundary of the target tissue to obtain an ultrasound volume image of the target tissue; rendering the photoacoustic volume data to obtain a photoacoustic volume image of the target tissue; and fusing the ultrasound volume image with the photoacoustic volume image to obtain a fused image of the target tissue.
  • A second aspect of an embodiment of the present application provides an imaging system, including a laser, a probe, a transmitting circuit, a receiving circuit, and a processor. The laser is used to generate laser light irradiating a target body; it is coupled to the probe and emits the laser light to the target body through an optical fiber bundle. The receiving circuit is used to control the probe to receive the photoacoustic signal returned from the target body. The transmitting circuit is used to control the probe to transmit ultrasonic waves to the target body, and the receiving circuit is used to control the probe to receive the ultrasonic echo signal returned from the target body. The processor is used to generate a control signal and send it to the laser to control the laser to generate the laser light. The processor is further used to obtain photoacoustic volume data from the photoacoustic signal and ultrasound volume data from the ultrasonic echo signal; determine the boundary of the target tissue in the ultrasound volume data; render the ultrasound volume data according to the boundary of the target tissue to obtain an ultrasound volume image of the target tissue; render the photoacoustic volume data to obtain a photoacoustic volume image of the target tissue; and fuse the ultrasound volume image with the photoacoustic volume image to obtain a fused image of the target tissue.
  • A third aspect of the embodiments of the present application provides a computer-readable storage medium storing instructions which, when executed on a computer, cause the computer to execute the imaging method provided in the first aspect.
  • The ultrasound volume data may be a grayscale image, which can display the shape of the target tissue.
  • The photoacoustic volume data may generally include distribution data inside the target tissue, for example the distribution of blood vessels, blood oxygen, and the like. Therefore, the present application can segment the boundary of the target tissue based on the ultrasound volume data and render the ultrasound volume data to obtain an ultrasound volume image of the target tissue, and, by rendering the photoacoustic volume data, obtain a photoacoustic volume image that includes the distribution of blood vessels, blood oxygen, and the like.
  • The ultrasound volume image and the photoacoustic volume image are fused to obtain a fused image of the target tissue, so that the fused image can simultaneously display a 3D image of the structure of the target tissue and of the distribution inside or around it. The fused image therefore gives a more comprehensive three-dimensional display of the target tissue, allowing the operator to observe it more comprehensively and intuitively.
  • FIG. 1 is a schematic structural block diagram of a possible imaging system provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an application scenario of a possible ultrasound imaging method provided by an embodiment of the present application.
  • FIG. 3 is a flowchart of a possible imaging method provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a possible mechanical scanner provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a possible probe provided by an embodiment of the present application.
  • the present application provides an imaging method and imaging system for improving the intuitiveness of image display.
  • FIG. 1 is a schematic structural block diagram of an imaging system 10 in an embodiment of the present application.
  • the imaging system 10 may include a probe 110, a laser 120, and a mechanical scanner 130, as well as a transmission circuit 101, a transmission / reception selection switch 102, a reception circuit 103, a processor 105, a display 106, and a memory 107.
  • the imaging system 10 may also include other devices or devices not shown in the figure.
  • the transmitting circuit 101 can excite the probe 110 to transmit ultrasonic waves to the target body.
  • the receiving circuit 103 can receive the ultrasonic echo returned from the target body through the probe 110, thereby obtaining ultrasonic echo signals / data.
  • The ultrasonic echo signal/data is sent to the processor 105 either directly or after beamforming by a beamforming circuit.
  • The processor 105 processes the beamformed ultrasound echo signal/data to obtain ultrasound volume data of the target body.
  • the ultrasound volume data obtained by the processor 105 may be stored in the memory 107.
  • the laser 120 can generate laser light and emit the laser light to the target body through the optical fiber bundle.
  • the receiving circuit 103 can also receive the photoacoustic signal / data returned by the target body under the excitation of the laser through the probe 110.
  • the photoacoustic signal / data is sent to the processor 105 directly or after processing, and the processor processes the photoacoustic signal / data to obtain the photoacoustic volume data of the target volume.
  • the mechanical scanner 130 can drive the probe 110 to move.
  • the aforementioned ultrasound volume data and photoacoustic volume data can be displayed on the display 106, that is, the ultrasound image and the photoacoustic image can be displayed on the display 106.
  • The mechanical scanner 130 enables the probe 110 to receive ultrasonic echo signals/data or photoacoustic signals/data from different orientations, and the processor 105 processes the received signals/data to obtain ultrasound volume data or photoacoustic volume data.
  • the mechanical scanner 130 is an optional device.
  • the mechanical scanner 130 is disposed inside the probe 110, that is, the functions of the mechanical scanner 130 are integrated on the probe 110.
  • The mechanical scanner 130 may further include a motor controller and a motor; the motor controller controls the motion trajectory, stroke, and speed of the motor in the mechanical scanner 130 according to the control signal sent by the processor.
  • the probe 110 may exist independently, or may be provided on the mechanical scanner 130, and the mechanical scanner 130 drives the probe 110 to move.
  • The laser 120 may be connected to the transmission/reception selection switch 102, which controls the emission of laser light, or the laser 120 may be directly connected to the probe 110 through an optical transmission tool.
  • The optical fiber bundle is coupled to the laser, conducts the laser beam to both sides of the acoustic head of the probe 110, and irradiates the target body in a back-illuminated manner.
  • The probe 110 may specifically include an ultrasonic transducer, which can both transmit and receive signals and can perform various kinds of imaging such as gray-scale imaging and Doppler blood flow imaging.
  • The optical fiber bundle and the ultrasonic transducer are coupled and surrounded by a housing to form a probe that integrates photoacoustic and ultrasonic imaging functions. That is, with this probe structure, the laser emits laser light, the laser light irradiates the target body through the probe, and the probe receives the photoacoustic signal formed under laser excitation and returned from the target body.
  • the probe can also be used for traditional ultrasound imaging, that is, transmitting ultrasound waves to the target body and receiving the ultrasonic echoes returned from the target body.
  • The laser can also be directly coupled with the ultrasonic transducer and completely or partially surrounded by the housing to form a probe that integrates photoacoustic and ultrasonic imaging functions.
  • Such a probe can be used for both photoacoustic imaging and ultrasound imaging.
  • The aforementioned display 106 may be a touch screen or liquid crystal screen built into the imaging system; an independent display device, such as a liquid crystal display or television, that is separate from the imaging system; or the display screen of an electronic device such as a mobile phone or tablet computer.
  • the foregoing memory 107 may be a flash memory card, a solid-state memory, a hard disk, or the like.
  • A computer-readable storage medium stores a plurality of program instructions; after these program instructions are called and executed by the processor 105, some or all of the steps of the ultrasound imaging method in the various embodiments of the present application, or any combination of those steps, can be performed.
  • the computer-readable storage medium may be the memory 107, which may be a non-volatile storage medium such as a flash memory card, solid state memory, or hard disk.
  • The aforementioned processor 105 may be implemented in software, hardware, firmware, or a combination thereof, and may use circuits, one or more application-specific integrated circuits (ASICs), one or more general-purpose integrated circuits, one or more microprocessors, one or more programmable logic devices, combinations of the aforementioned circuits or devices, or other suitable circuits or devices, so that the processor 105 can execute the corresponding steps of the imaging method in the various embodiments of the present application.
  • The imaging method provided in this embodiment of the present application can be applied, for example, to the application scenario shown in FIG. 2.
  • The operator scans the target body 201 with the probe 110; the laser emits laser light and irradiates the target body through the optical fiber bundle.
  • The probe receives the photoacoustic signal returned from the target body; the probe also transmits ultrasonic waves to the target body and receives the ultrasonic echo signal returned from the target body.
  • The operator can see the tissue structure and the like on the display 106.
  • an imaging method provided by an embodiment of the present application can be applied to the imaging system shown in FIG. 1, and the imaging method embodiment includes:
  • After the target body where the target tissue is located is determined, the laser 120 emits laser light to the target body through the optical fiber bundle, and the probe 110 then receives the photoacoustic signal generated by the target body under laser excitation. Depending on the target tissue, the received photoacoustic signal may differ.
  • the laser is coupled to the probe through an optical fiber bundle, the laser emits laser light, and then the optical fiber bundle emits laser light to the target body. After the tissue in the target body absorbs the light energy, it will cause temperature rise and thermal expansion, thereby generating a photoacoustic signal to propagate outward, and the corresponding photoacoustic signal is detected by the probe 110.
  • the probe 110 may be disposed on the mechanical scanner 130, and then the processor 105 may send a control signal to the mechanical scanner 130 to control the motor in the mechanical scanner 130 to control the mechanical scanner 130 Scanning speed and trajectory, etc. After the laser is emitted to the target body, the probe 110 can surround the target body and receive the photoacoustic signal returned from the target body from different angles to perform photoacoustic imaging of the target body from different angles.
  • the mechanical scanner 130 may be as shown in FIG. 4,
  • the laser 120 may receive a control signal sent by the processor 105, and the control signal may include the frequency and timing of the generated laser.
  • The laser 120 generates the laser light according to the control signal and sends it to the target body through the optical fiber bundle that couples it to the probe 110.
  • the laser 120 may send a feedback signal to the processor 105, and the feedback signal may include the actual sending time of the laser.
  • The processor 105 determines the reception interval for the photoacoustic signal according to a preset algorithm and controls the probe 110 to receive the photoacoustic signal.
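The patent does not specify the preset algorithm; one plausible sketch is time-of-flight gating from the laser's reported emission time (the function name, depth range, and sound speed below are illustrative assumptions):

```python
# Hypothetical sketch of photoacoustic receive gating: given the laser's
# actual emission time (from the feedback signal) and a depth range of
# interest, compute the window in which the probe should listen.

SOUND_SPEED = 1540.0  # m/s, a typical soft-tissue value

def receive_window(laser_fire_time_s, min_depth_m, max_depth_m, c=SOUND_SPEED):
    """Return (start, end) times for photoacoustic reception.

    Photoacoustic waves travel one way (tissue -> probe), so the delay is
    depth / c, unlike pulse-echo ultrasound where it is 2 * depth / c.
    """
    return (laser_fire_time_s + min_depth_m / c,
            laser_fire_time_s + max_depth_m / c)

start, end = receive_window(0.0, 0.005, 0.05)  # listen from 5 mm to 50 mm depth
```

The one-way versus two-way delay distinction is why the photoacoustic receive timing differs from the ultrasound receive timing even for the same depth range.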
  • the probe 110 may transmit ultrasonic waves to the target body, and the probe 110 may receive ultrasonic echoes returned from the target body, and convert the ultrasonic echoes into ultrasonic echo signals. Depending on the target tissue, the received ultrasound echo signal may also be different.
  • the ultrasonic echo signal can be understood as the aforementioned ultrasonic echo signal / data.
  • The laser light and the ultrasonic waves are not sent at the same time.
  • Either may be sent first, that is, step 301 or step 302 may be performed first; this can be adjusted according to the actual application scenario and is not limited here.
  • ultrasonic waves are sent through the probe 110, and the probe 110 can be set on the mechanical scanner 130, and then the processor 105 can send a control signal to the mechanical scanner 130 to control the motor in the mechanical scanner 130, The scanning speed and trajectory of the mechanical scanner 130 are controlled so that the probe 110 can surround the target body, send ultrasonic waves from different angles, and receive ultrasonic echoes from different angles, so as to image the target body from different angles.
  • The processor 105 controls the opening of the transmit/receive selection switch 102 and controls the transmit circuit 101 to transmit ultrasonic waves to the target body through the probe 110; the ultrasonic echo returned from the target body is received through the probe 110 and transmitted to the receiving circuit 103, thereby obtaining an ultrasonic echo signal.
  • an optical fiber bundle is coupled to the ultrasound array probe, and the optical fiber bundle is used to conduct laser light to both sides of the probe 110 to illuminate the target body in a back-illuminated manner.
  • the probe 110 includes an ultrasonic transducer.
  • The ultrasonic transducer can both transmit and receive signals. While supporting traditional ultrasound imaging and Doppler blood flow imaging, it also has a large frequency bandwidth and high sensitivity, which improves the ability to detect photoacoustic signals; even weak signals can be detected.
  • After the photoacoustic signal and the ultrasonic echo signal are received, the photoacoustic signal can be converted into photoacoustic volume data and the ultrasonic echo signal into ultrasound volume data.
  • The noise in the ultrasonic echo signal may be removed.
  • The ultrasonic echo signal is beamformed by a beamforming circuit and then transmitted to the processor 105, which processes it to obtain ultrasound volume data of the target body.
  • The noise in the photoacoustic signal may be removed, and image reconstruction processing such as beamforming may then be performed to obtain photoacoustic volume data of the target body.
  • the ultrasound volume data can be a grayscale image, which can reflect the structural information of the target tissue in the target body, and the photoacoustic volume data can reflect the functional information of the tissue in the target body.
  • When the probe 110 is set to move on the mechanical scanner 130, multiple ultrasonic echo signals and photoacoustic signals from different angles can be acquired, and multiple frames of ultrasound volume data and photoacoustic volume data can then be obtained.
  • The direction and angle of light projection can be changed, or the display transparency can be adjusted, to fully display the 3D structure of the target tissue, so that the operator can make observations through the ultrasound volume data and the photoacoustic volume data.
  • The Doppler frequency shift can be used to realize Doppler blood flow imaging; blood flow with a sufficient flow rate can be imaged.
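To make the Doppler shift concrete, the classical pulse-echo relation is f_d = 2 f0 v cos(theta) / c; the parameter values below are illustrative, not from the patent:

```python
# Classical pulse-echo Doppler shift for reflection from moving blood.
import math

def doppler_shift(f0_hz, velocity_m_s, angle_deg, c=1540.0):
    """f_d = 2 * f0 * v * cos(theta) / c, with theta the beam-to-flow angle."""
    return 2.0 * f0_hz * velocity_m_s * math.cos(math.radians(angle_deg)) / c

# A 5 MHz beam insonating blood moving at 0.3 m/s at a 60 degree angle:
fd = doppler_shift(5e6, 0.3, 60.0)
print(round(fd, 1))  # ≈ 974.0 Hz
```

The cos(theta) factor is why flow nearly perpendicular to the beam produces almost no shift, one of the limitations that motivates the photoacoustic alternative described below.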
  • Doppler blood flow imaging, however, is very sensitive to movement, including tissue movement and probe movement, which makes it difficult to achieve three-dimensional Doppler imaging with a mechanical scanner: the scanning motion of the mechanically driven probe introduces artifacts.
  • Photoacoustic imaging depends on the photoacoustic signal generated by the tissue's absorption of laser light at a specified wavelength, so it is not sensitive to the movement of the tissue or probe.
  • The present application can therefore use a mechanical scanner to acquire photoacoustic volume data and ultrasound volume data of the target body, collecting the functional information of the target body through the photoacoustic volume data and the structural information through the ultrasound volume data; this removes the need for Doppler blood flow imaging while still achieving 3D acquisition of tissue function and structure information.
  • the photoacoustic volume data and / or the ultrasound volume data may be displayed on the display 106, or the operator may select to display the photoacoustic volume data or the ultrasound volume data Any frame of image in volume data.
  • the order of acquiring photoacoustic volume data and ultrasound volume data is not limited, and the photoacoustic volume data may be acquired first, or the ultrasound volume data may be acquired first, which may be adjusted according to actual application scenarios. , Not limited here.
  • the boundary of the target tissue is determined in the ultrasound volume data.
  • the boundary of the target tissue may be determined in the ultrasound volume data according to a preset algorithm, or the operator may input based on the ultrasound volume data to determine the boundary of the target tissue in the ultrasound volume data.
  • the boundary of the target tissue can be determined by comparing the parameter values of the target tissue and other tissues around the target tissue in the ultrasound volume data.
  • the parameter value may include at least one of a gray value, a brightness value, a pixel value, or a gradient value in the ultrasound volume data.
  • this parameter value may also be other values that can be compared with the image, which can be adjusted according to the actual application, and is not limited here.
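The parameter-value comparison described above can be sketched as a simple threshold crossing; the scan line, threshold, and helper name are illustrative (real data is 3D volume data, and the threshold would come from the preset algorithm or operator input):

```python
# Toy sketch of boundary detection by comparing gray values against a
# threshold along one scan line of ultrasound data.

def boundary_indices(scanline, threshold):
    """Indices i where the gray value crosses the threshold between i-1 and i."""
    return [
        i
        for i in range(1, len(scanline))
        if (scanline[i - 1] < threshold) != (scanline[i] < threshold)
    ]

# A bright region (values ~200) embedded in a darker background (~50):
line = [50, 52, 48, 200, 210, 205, 49, 51]
print(boundary_indices(line, 128))  # [3, 6]
```

A gradient-based variant would instead look for indices where abs(scanline[i] - scanline[i-1]) exceeds a threshold, matching the gradient-value option listed above.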
  • the operator may also manually select to determine the boundary of the target tissue in the ultrasound volume data.
  • the processor 105 receives input parameters for ultrasound volume data, and determines the boundary of the target tissue according to the input parameters.
  • the ultrasound volume data may be displayed on the display 106, and the operator selects the boundary of the target tissue in the ultrasound volume data through the input device to generate input parameters. Therefore, even when the contrast between the target tissue and the surrounding normal tissue is not significant, the boundary of the target tissue can be manually delineated by the operator, so that the ultrasound volume image of the target tissue obtained later is more accurate.
  • In addition, the multiple frames of ultrasound volume data may be fused and displayed as 3D ultrasound volume data, and the operator then manually makes selections in the 3D ultrasound volume data to determine the boundary of the target tissue.
  • The ultrasound volume data is rendered, including adjusting the color value, brightness value, or gray value at the boundary of the target tissue, to obtain a three-dimensional ultrasound volume image of the target tissue.
  • When rendering the ultrasound volume data, reference may be made to multiple frames of ultrasound volume data, and three-dimensional rendering methods such as volume rendering and surface rendering may be used.
  • An ultrasound volume image of the target tissue is obtained; that is, the ultrasound volume image is a three-dimensional ultrasound image.
  • The target tissue is rendered according to the shape of the determined boundary to obtain a three-dimensional ultrasound volume image of the target tissue.
  • After the photoacoustic volume data is acquired, it is rendered, adjusting its light, color, and so on, to obtain a three-dimensional photoacoustic volume image of the target tissue.
  • The photoacoustic volume data can be rendered by three-dimensional rendering to obtain the photoacoustic volume image.
  • The specific three-dimensional rendering method may be one of several, including volume rendering and surface rendering; that is, the photoacoustic volume image is a three-dimensional photoacoustic image.
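As a minimal flavor of volume rendering (not the patent's specific method), a maximum-intensity projection collapses volume data into a displayable 2D image; the tiny volume below is illustrative:

```python
# Simplified volume-rendering sketch: maximum-intensity projection (MIP)
# along the depth axis of a small 3D volume stored as nested lists.

def mip(volume):
    """volume[z][y][x] -> 2D image[y][x] of the per-ray maximum along z."""
    depth, height, width = len(volume), len(volume[0]), len(volume[0][0])
    return [
        [max(volume[z][y][x] for z in range(depth)) for x in range(width)]
        for y in range(height)
    ]

vol = [
    [[0, 1], [2, 3]],   # slice z = 0
    [[9, 0], [1, 8]],   # slice z = 1
]
print(mip(vol))  # [[9, 1], [2, 8]]
```

Full volume rendering additionally accumulates opacity and color along each ray, and surface rendering instead extracts and shades an isosurface, but both start from the same per-ray traversal of the volume data.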
  • The application does not limit the order of acquiring the ultrasound volume image and the photoacoustic volume image.
  • The ultrasound volume image may be acquired first, or the photoacoustic volume image may be acquired first; that is, step 305 or step 306 may be performed first, which can be adjusted according to the actual application scenario and is not limited here.
  • the area where the ultrasound volume data is rendered may be greater than, equal to, or smaller than the area where the photoacoustic volume data is rendered.
  • the area for rendering the ultrasound volume data includes all or part of the target tissue, and the area for rendering the photoacoustic volume data may also include all or part of the target tissue. Assuming that the area for rendering ultrasound volume data is area A and the area for rendering photoacoustic volume data is area B, then area A may be greater than, or equal to, or less than area B.
  • the area B may be larger than the area A, that is, the area A may only be targeted to the area where the target tissue is located.
  • Rendering area B renders not only the area where the target tissue is located but also other areas outside it. If the rendering methods differ, the characteristics of the target tissue can then be reflected more clearly in the fused image, improving its intuitiveness.
  • The area A may include all or part of the target tissue.
  • The area B may also include all or part of the target tissue; that is, if only a certain part of the target tissue is to be analyzed, there is no need to render the whole area, only the area where that part of the target tissue is located, which is not specifically limited here.
  • the ultrasound volume image and the photoacoustic volume image are fused to obtain a fusion image of the target tissue.
  • The photoacoustic volume image may be superimposed on the ultrasound volume image, or the ultrasound volume image may be superimposed on the photoacoustic volume image, or the operator may choose which image to superimpose on the other; this can be adjusted according to the actual application scenario.
  • The fusion may take the photoacoustic volume image as the basis and superimpose the pixel values, brightness values, gray values, and so on of the ultrasound volume image, or take the ultrasound volume image as the basis and superimpose those of the photoacoustic volume image; the operator may also choose which image serves as the basis, according to the actual application scenario.
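One common way to superimpose pixel values is alpha blending; the weighting scheme and values below are an illustrative assumption, since the patent leaves the superposition basis to the operator:

```python
# Hypothetical fusion step: alpha-blend a photoacoustic image over an
# ultrasound image, pixel by pixel, for two equally sized 2D images.

def fuse(ultrasound, photoacoustic, alpha=0.5):
    """Blend per pixel: alpha * PA + (1 - alpha) * US."""
    return [
        [alpha * pa + (1.0 - alpha) * us for us, pa in zip(us_row, pa_row)]
        for us_row, pa_row in zip(ultrasound, photoacoustic)
    ]

us = [[100.0, 100.0], [100.0, 100.0]]  # uniform gray ultrasound patch
pa = [[0.0, 200.0], [200.0, 0.0]]      # photoacoustic vessel signal
print(fuse(us, pa))  # [[50.0, 150.0], [150.0, 50.0]]
```

Choosing which image is the basis simply corresponds to choosing alpha (and, in a color display, which colormap each modality is assigned before blending).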
  • After the fused image is obtained, it may be displayed on the display 106.
  • the ultrasound volume image and the photoacoustic volume image may be set to different colors to better distinguish the target tissue from the structural information in the target tissue.
  • the target tissue may be analyzed according to the parameters of the fused image to obtain the analysis result, which is displayed on the display 106.
  • The ultrasound volume data can be a grayscale image, which can show the approximate shape of the target tissue.
  • The photoacoustic volume data can usually display the distribution inside the target tissue, for example the distribution of blood vessels, blood oxygen, and so on.
  • the boundary of the target tissue can be segmented according to the ultrasound volume data, and the ultrasound volume data is rendered to obtain an ultrasound volume image of the target tissue.
  • the photoacoustic volume image is obtained by rendering the photoacoustic volume data.
  • the photoacoustic volume image is an image including the distribution of blood vessels, blood oxygen, etc.
  • the ultrasound volume image and the photoacoustic volume image are superimposed to obtain a fusion image of the target tissue, so that the fusion image can simultaneously display the 3D image corresponding to the boundary of the target tissue and the distribution inside or around the target tissue. Therefore, the fusion image obtained by fusion can display a more comprehensive three-dimensional stereoscopic display of the target tissue, so that the operator can observe the target tissue more comprehensively and intuitively.
  • the disclosed system, device, and method may be implemented in other ways.
  • The device embodiments described above are only illustrative.
  • The division of the units is only a division of logical functions; in actual implementation there may be other divisions. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or software function unit.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part contributing to the existing technology, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, server, or network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application.
  • the foregoing storage media include: USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks, and other media that can store program code.
  • the target body may be a human body, an animal, or the like.
  • the target tissue may be the face, spine, heart, uterus, or pelvic floor, or other parts of the human tissue, such as the brain, bones, liver, or kidney, which is not specifically limited here.

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

An imaging method and an imaging system. The imaging method comprises: emitting laser light toward a target body and receiving a photoacoustic signal returned from the target body (301); transmitting ultrasound waves toward the target body and receiving ultrasound echoes returned from the target body to obtain an ultrasound echo signal (302); obtaining photoacoustic volume data from the photoacoustic signal and ultrasound volume data from the ultrasound echo signal (303); determining the boundary of the target tissue in the ultrasound volume data (304); rendering the target tissue according to its boundary to obtain an ultrasound volume image of the target tissue (305); rendering the photoacoustic volume data to obtain a photoacoustic volume image of the target tissue (306); and fusing the ultrasound volume image with the photoacoustic volume image to obtain a fused image of the target tissue (307). The imaging method and system serve to make the displayed image more intuitive.

Description

An imaging method and an imaging system
Technical Field
This application relates to the field of medical devices, and in particular to an imaging method and an imaging system.
Background
Photoacoustic imaging (PAI) is a new type of biomedical imaging technology based on the photoacoustic effect. When biological tissue is irradiated with short laser pulses, for example on the order of nanoseconds (ns), substances in the tissue with strong optical absorption, such as blood, absorb the light energy, undergo local heating and thermal expansion, and thereby generate photoacoustic signals that propagate outward. These photoacoustic signals can be detected with an ultrasound probe, and with a suitable reconstruction algorithm the position and morphology of the absorbers, i.e. the strongly light-absorbing substances, can be reconstructed. Photoacoustic imaging combines the advantages of optics and ultrasound, offers unique benefits for the early diagnosis and prognosis assessment of several major diseases, and is a new imaging technology with great clinical and industrial potential. Limited by the penetration depth of light in biological tissue, photoacoustic imaging is mainly applied to relatively superficial organs. Photoacoustic imaging reflects functional information of the body, whereas conventional ultrasound imaging reflects structural information; combining the two effectively, i.e. photoacoustic-ultrasound dual-modality imaging, overcomes the shortcomings of single-modality imaging and provides more comprehensive structural and functional information about the tissue.
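The effect described above is commonly summarized by the standard photoacoustic generation relation (a textbook formula, not stated explicitly in this application): under thermal and stress confinement, the initial pressure rise is proportional to the absorbed optical energy,

```latex
p_0(\mathbf{r}) = \Gamma(\mathbf{r})\,\mu_a(\mathbf{r})\,F(\mathbf{r})
```

where $\Gamma$ is the Grüneisen parameter, $\mu_a$ the optical absorption coefficient, and $F$ the local light fluence; strongly absorbing substances such as blood therefore return proportionally stronger photoacoustic signals.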
Three-dimensional photoacoustic-ultrasound imaging borrows the scanning scheme of conventional 3D ultrasound: a mechanical device moves the combined photoacoustic-ultrasound probe along a given direction to acquire three-dimensional (3D) data, which is then rendered and displayed so that the operator can observe the tissue structure and function in three dimensions.
In existing solutions, however, when the photoacoustic image and the ultrasound image are obtained by three-dimensional photoacoustic-ultrasound imaging, the target tissue, i.e. the biological tissue, is displayed by adjusting the viewing angle and transparency. Because the ultrasound image is only a grayscale image, it cannot display the target tissue intuitively, which makes observation inconvenient. For example, the gray value of a lesion may be very close to that of normal tissue, so the lesion cannot be clearly observed or effectively displayed in the image obtained by three-dimensional photoacoustic-ultrasound imaging.
Summary
This application provides an imaging method and an imaging system for making the displayed image more intuitive.
A first aspect of the embodiments of this application provides an imaging method, comprising: emitting laser light toward a target body and receiving a photoacoustic signal returned from the target body; transmitting ultrasound waves toward the target body and receiving an ultrasound echo signal returned from the target body; obtaining photoacoustic volume data from the photoacoustic signal and ultrasound volume data from the ultrasound echo signal; determining the boundary of the target tissue in the ultrasound volume data; rendering the target tissue according to its boundary to obtain an ultrasound volume image of the target tissue; rendering the photoacoustic volume data to obtain a photoacoustic volume image of the target tissue; and fusing the ultrasound volume image with the photoacoustic volume image to obtain a fused image of the target tissue.
A second aspect of the embodiments of this application provides an imaging system, comprising: a laser, a probe, a transmitting circuit, a receiving circuit, and a processor. The laser is configured to generate laser light for irradiating a target body; the laser is coupled to the probe through an optical fiber bundle, through which the laser light is emitted toward the target body. The receiving circuit is configured to control the probe to receive the photoacoustic signal returned from the target body. The transmitting circuit is configured to control the probe to transmit ultrasound waves toward the target body, and the receiving circuit is configured to control the probe to receive the ultrasound echo signal returned from the target body. The processor is configured to generate a control signal and send it to the laser to control the laser to generate the laser light. The processor is further configured to obtain photoacoustic volume data from the photoacoustic signal and ultrasound volume data from the ultrasound echo signal; determine the boundary of the target tissue in the ultrasound volume data; render the ultrasound volume data according to the boundary of the target tissue to obtain an ultrasound volume image of the target tissue; render the photoacoustic volume data to obtain a photoacoustic volume image of the target tissue; and fuse the ultrasound volume image with the photoacoustic volume image to obtain a fused image of the target tissue.
A third aspect of the embodiments of this application provides a computer-readable storage medium storing instructions which, when run on a computer, cause the computer to perform the imaging method provided in the first aspect.
In the embodiments of this application, laser light and ultrasound waves are first emitted toward the target body to obtain ultrasound volume data and photoacoustic volume data. Typically, the ultrasound volume data can be a grayscale image that shows the shape of the target tissue, while the photoacoustic volume data can include distribution data inside the target tissue, for example the distribution of blood vessels or blood oxygen. Accordingly, in this application the boundary of the target tissue can be segmented from the ultrasound volume data, and the ultrasound volume data is rendered to obtain an ultrasound volume image of the target tissue. The photoacoustic volume data is rendered to obtain a photoacoustic volume image, i.e. an image of the distribution of blood vessels, blood oxygen, and the like inside or around the target tissue. The ultrasound volume image and the photoacoustic volume image are then fused to obtain a fused image of the target tissue, so that the fused image simultaneously shows a 3D image of the structure of the target tissue and the distribution inside or around it. The fused image therefore provides a more comprehensive three-dimensional display of the target tissue, allowing the operator to observe it more comprehensively and intuitively.
Brief Description of the Drawings
FIG. 1 is a schematic block diagram of a possible imaging system according to an embodiment of this application;
FIG. 2 is a schematic diagram of a possible application scenario of an ultrasound imaging method according to an embodiment of this application;
FIG. 3 is a flowchart of a possible imaging method according to an embodiment of this application;
FIG. 4 is a schematic diagram of a possible mechanical scanner according to an embodiment of this application;
FIG. 5 is a schematic diagram of a possible probe according to an embodiment of this application.
Detailed Description
This application provides an imaging method and an imaging system for making the displayed image more intuitive.
The terms "first", "second", "third", "fourth", and the like (if any) in the specification, claims, and drawings of this application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments described here can be implemented in an order other than that illustrated or described. Moreover, the terms "comprise" and "have" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units not expressly listed or inherent to the process, method, product, or device.
FIG. 1 is a schematic block diagram of an imaging system 10 according to an embodiment of this application. The imaging system 10 may include a probe 110, a laser 120, a mechanical scanner 130, a transmitting circuit 101, a transmit/receive selection switch 102, a receiving circuit 103, a processor 105, a display 106, and a memory 107. The imaging system 10 may of course also include other devices or components not shown in the figure.
The transmitting circuit 101 can excite the probe 110 to transmit ultrasound waves toward the target body. After the probe 110 transmits the ultrasound waves, the receiving circuit 103 can receive, through the probe 110, the ultrasound echoes returned from the target body, thereby obtaining ultrasound echo signals/data. The ultrasound echo signals/data are sent to the processor 105 directly or after beamforming by a beamforming circuit. The processor 105 processes the beamformed ultrasound echo signals/data to obtain ultrasound volume data of the target body, which can be stored in the memory 107. The laser 120 can generate laser light and emit it toward the target body through an optical fiber bundle. After the laser 120 emits the laser light, the receiving circuit 103 can also receive, through the probe 110, the photoacoustic signals/data returned by the target body under laser excitation. The photoacoustic signals/data are sent to the processor 105 directly or after processing, and the processor processes them to obtain photoacoustic volume data of the target body. The mechanical scanner 130 can drive the probe 110 to move. The aforementioned ultrasound volume data and photoacoustic volume data can be displayed on the display 106, i.e. the display 106 can show the ultrasound image and the photoacoustic image.
The mechanical scanner 130 allows the probe 110 to receive ultrasound echo signals/data or photoacoustic signals/data from different orientations, so that the processor 105 can process the received signals/data to obtain ultrasound volume data or photoacoustic volume data.
The mechanical scanner 130 is an optional device; in some embodiments it is arranged inside the probe 110, i.e. the function of the mechanical scanner 130 is integrated into the probe 110.
In one embodiment of this application, the mechanical scanner 130 may further include a motor controller and a motor, the motor controller controlling the trajectory, travel, speed, and the like of the motor in the mechanical scanner 130 according to a control signal sent by the processor.
In one embodiment of this application, the probe 110 may exist independently, or may be mounted on the mechanical scanner 130 and driven by it.
In one embodiment of this application, the laser 120 may be connected to the transmit/receive selection switch 102, which controls the laser emission; alternatively, the laser 120 may be connected directly to the probe 110 through a light-guiding means, with an optical fiber bundle coupled to the probe 110 that guides the laser light to the two sides of the acoustic head of the probe 110, irradiating the target body in a back-illumination manner.
In one embodiment of this application, the probe 110 may specifically include an ultrasound transducer, which can both transmit and receive signals and can perform various imaging modes such as grayscale imaging and Doppler blood flow imaging. In some implementations, the optical fiber bundle is coupled with the ultrasound transducer and enclosed in a housing, forming a probe that integrates photoacoustic imaging and ultrasound imaging: with a probe of this structure, the laser emits laser light, the probe directs the laser light onto the target body, and the probe receives the photoacoustic signals formed under laser excitation and returned from the target body. The probe can of course also be used for conventional ultrasound imaging, i.e. transmitting ultrasound waves toward the target body and receiving the ultrasound echoes returned from it. Alternatively, the laser may be coupled directly with the ultrasound transducer and fully or partially enclosed in a housing, forming a probe that integrates photoacoustic and ultrasound imaging and can be used for both.
In one embodiment of this application, the display 106 may be a touch screen or liquid crystal display built into the imaging system, an independent display device such as an LCD monitor or television set separate from the imaging system, or the screen of an electronic device such as a mobile phone or tablet computer.
In one embodiment of this application, the memory 107 may be a flash memory card, solid-state memory, hard disk, or the like.
One embodiment of this application further provides a computer-readable storage medium storing a plurality of program instructions which, when invoked and executed by the processor 105, can perform some or all of the steps of the ultrasound imaging methods in the various embodiments of this application, or any combination of those steps.
In one embodiment of this application, the computer-readable storage medium may be the memory 107, which may be a non-volatile storage medium such as a flash memory card, solid-state memory, or hard disk.
In one embodiment of this application, the processor 105 may be implemented in software, hardware, firmware, or a combination thereof, and may use circuits, one or more application-specific integrated circuits (ASICs), one or more general-purpose integrated circuits, one or more microprocessors, one or more programmable logic devices, a combination of the foregoing circuits or devices, or other suitable circuits or devices, so that the processor 105 can perform the corresponding steps of the imaging methods in the various embodiments of this application.
The imaging method of this application is described in detail below on the basis of the foregoing imaging system.
It should be noted that, with reference to the schematic block diagram of the imaging system shown in FIG. 1, the imaging method provided by the embodiments of this application can be applied in the following scenario (see FIG. 2 for an example): the operator scans the target body 201 with the probe 110; the laser emits laser light, which irradiates the target body through the optical fiber bundle; the probe receives the photoacoustic signals returned from the target body; the probe transmits ultrasound waves toward the target body and receives the ultrasound echo signals returned from it. The operator can observe the tissue structure and other information on the display 106.
On this basis, referring to FIG. 3, an embodiment of this application provides an imaging method that can be applied to the imaging system shown in FIG. 1. The method embodiment includes:
301. Emit laser light toward the target body and receive the photoacoustic signal returned from the target body.
After the target body containing the target tissue is determined, the laser 120 emits laser light toward the target body through the optical fiber bundle, and the probe 110 then receives the photoacoustic signal generated by the target body under laser excitation. The received photoacoustic signal may differ depending on the target tissue.
Specifically, the laser is coupled to the probe through the optical fiber bundle; the laser emits laser light, which the fiber bundle directs toward the target body. When tissue in the target body absorbs the light energy, it heats up and thermally expands, generating a photoacoustic signal that propagates outward and is detected by the probe 110.
In one embodiment of this application, the probe 110 may be mounted on the mechanical scanner 130, and the processor 105 may send a control signal to the mechanical scanner 130 to control the motor inside it, thereby controlling the scanning speed, trajectory, and the like of the mechanical scanner 130. After the laser light is emitted toward the target body, the probe 110 can move around the target body and receive the photoacoustic signals returned from it at different angles, so as to perform photoacoustic imaging of the target body from different angles.
For example, the mechanical scanner 130 may be as shown in FIG. 4.
In one embodiment of this application, the laser 120 may receive a control signal sent by the processor 105, which may include the frequency, timing, and the like of the laser light to be generated; the laser 120 generates the laser light according to the control signal and emits it toward the target body through the optical fiber bundle coupled to the probe 110.
In one embodiment of this application, after generating the laser light, the laser 120 may send a feedback signal to the processor 105 that includes the actual emission time of the laser light; the processor 105 determines the interval for receiving the photoacoustic signal according to a preset algorithm and controls the probe 110 to receive the photoacoustic signal.
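The interval computation mentioned here can be illustrated with a simple one-way time-of-flight estimate. This is only a sketch: photoacoustic propagation is one-way (tissue to probe), but the depth range, sound speed, and window logic below are assumptions, since the actual preset algorithm is not disclosed in this application.

```python
# Hypothetical sketch of deriving the photoacoustic receive window from the
# laser firing time. The depth range and sound speed are assumptions.

SPEED_OF_SOUND_M_S = 1540.0  # nominal sound speed in soft tissue

def receive_window(laser_fire_time_s, min_depth_m, max_depth_m,
                   c=SPEED_OF_SOUND_M_S):
    """Photoacoustic propagation is one-way (absorber -> probe), so the
    signal from depth d arrives d / c after the laser pulse."""
    t_start = laser_fire_time_s + min_depth_m / c
    t_end = laser_fire_time_s + max_depth_m / c
    return t_start, t_end

# open the receiver for absorbers between 5 mm and 40 mm deep
start, end = receive_window(0.0, 0.005, 0.04)
```

Note the contrast with pulse-echo ultrasound, where the round trip would make the delay 2d / c.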
302. Transmit ultrasound waves toward the target body and receive the ultrasound echoes returned from the target body to obtain an ultrasound echo signal.
The ultrasound waves can be transmitted toward the target body through the probe 110, which also receives the ultrasound echoes returned from the target body and converts them into an ultrasound echo signal. The received ultrasound echo signal may differ depending on the target tissue. The ultrasound echo signal can be understood as the aforementioned ultrasound echo signals/data.
It should be noted that the laser light and the ultrasound waves are not transmitted at the same time; either may be transmitted first, i.e. step 301 may be performed before step 302 or vice versa, which can be adjusted according to the actual application scenario and is not limited here.
In one embodiment of this application, the ultrasound waves are transmitted through the probe 110, which may be mounted on the mechanical scanner 130; the processor 105 may send a control signal to the mechanical scanner 130 to control the motor inside it and thus the scanning speed, trajectory, and the like, so that the probe 110 can move around the target body, transmit ultrasound waves from different angles, and receive the ultrasound echoes at different angles, performing ultrasound imaging of the target body from different angles.
In one embodiment of this application, as shown in FIG. 5, the processor 105 may specifically control the transmit/receive selection switch 102 to open and control the transmitting circuit 101 to transmit ultrasound waves toward the target body through the probe 110; the probe 110 receives the ultrasound echoes and passes them to the receiving circuit 103, i.e. the receiving circuit 103 can receive the ultrasound echoes returned from the target body through the probe 110 and thereby obtain the ultrasound echo signal.
In one embodiment of this application, an optical fiber bundle is coupled onto the ultrasound array probe and guides the laser light to the two sides of the probe 110, irradiating the target body in a back-illumination manner. The probe 110 includes an ultrasound transducer that can both transmit and receive signals; while guaranteeing conventional ultrasound imaging and Doppler blood flow imaging, it also has a large frequency bandwidth and high sensitivity, improving the ability to detect photoacoustic signals so that even weak signals can be detected.
303. Obtain photoacoustic volume data from the photoacoustic signal and ultrasound volume data from the ultrasound echo signal.
After the photoacoustic signal and the ultrasound echo signal are received, the photoacoustic signal can be converted into photoacoustic volume data and the ultrasound echo signal into ultrasound volume data.
Specifically, after the ultrasound echo signal is received, noise in the ultrasound signal can be removed. The ultrasound echo signal is beamformed by the beamforming circuit and transmitted to the processor 105, which processes it to obtain the ultrasound volume data of the target body. After the photoacoustic signal is acquired, noise in it can likewise be removed, followed by image reconstruction processing such as beamforming, to obtain the photoacoustic volume data of the target body. Typically, the ultrasound volume data can be a grayscale image reflecting the structural information of the target tissue in the target body, while the photoacoustic volume data reflects the functional information of the tissue in the target body.
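The beamforming step mentioned above can be sketched as a naive delay-and-sum over a single pixel. The application only says that the echo signals are beamformed before reconstruction; the array geometry, sampling rate, and sound speed below are illustrative assumptions, not values from this application.

```python
import numpy as np

def delay_and_sum(rf, element_x, fs, c, pixel_x, pixel_z):
    """Sum element signals at the one-way delay from each element to one pixel.

    rf: (n_elements, n_samples) received signals
    element_x: (n_elements,) lateral element positions in metres
    fs: sampling rate in Hz, c: sound speed in m/s
    """
    n_elements, n_samples = rf.shape
    out = 0.0
    for e in range(n_elements):
        dist = np.hypot(pixel_x - element_x[e], pixel_z)  # element-to-pixel path
        idx = int(round(dist / c * fs))                   # delay in samples
        if idx < n_samples:
            out += rf[e, idx]
    return out

# toy check: impulses placed at the expected delays focus to a coherent peak
fs, c = 40e6, 1540.0
elems = np.linspace(-0.01, 0.01, 8)       # 8-element aperture, 2 cm wide
rf = np.zeros((8, 4096))
for e, ex in enumerate(elems):
    d = np.hypot(0.0 - ex, 0.02)          # source at (0, 20 mm)
    rf[e, int(round(d / c * fs))] = 1.0
value = delay_and_sum(rf, elems, fs, c, 0.0, 0.02)
```

Summing in phase across the aperture is what raises the focused pixel above incoherent background; a real beamformer would add apodization and interpolation.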
In one embodiment of this application, if the probe 110 is moved by the mechanical scanner 130, ultrasound echo signals and photoacoustic signals can be acquired at multiple different angles, so that multiple frames of corresponding ultrasound volume data and photoacoustic volume data can be obtained. Typically, the ray-casting direction and angle can be changed, or the displayed transparency of the object adjusted, to display the 3D structure of the target tissue comprehensively, so that the operator can make certain observations from the ultrasound volume data and the photoacoustic volume data.
Typically, Doppler blood flow imaging can be realized using the Doppler shift, imaging blood flow that has a certain velocity. However, Doppler flow imaging is very sensitive to motion, including tissue motion and probe motion, which makes three-dimensional Doppler imaging with a mechanical scanner difficult to realize: motion artifacts are introduced while the mechanical scanner drives the probe through the scan. Photoacoustic imaging, by contrast, relies on the photoacoustic signal generated by tissue absorption of laser light at a specified wavelength and is therefore insensitive to tissue or probe motion. This application can thus use the mechanical scanner to acquire photoacoustic volume data and ultrasound volume data of the target body, collecting the functional information of the target body through the photoacoustic volume data and the structural information through the ultrasound volume data; 3D acquisition of both the functional and the structural information of the tissue can therefore be achieved without Doppler flow imaging.
In one embodiment of this application, after the photoacoustic volume data and the ultrasound volume data are acquired, they can be displayed on the display 106, and the operator can also choose to display any single frame of the photoacoustic or ultrasound volume data.
It should be noted that the embodiments of this application do not limit the order in which the photoacoustic volume data and the ultrasound volume data are acquired; either may be acquired first, which can be adjusted according to the actual application scenario and is not limited here.
304. Determine the boundary of the target tissue in the ultrasound volume data.
After the ultrasound volume data is acquired, the boundary of the target tissue is determined in it. The boundary may be determined according to a preset algorithm, or the operator may provide input on the ultrasound volume data to determine the boundary of the target tissue in it.
In one embodiment of this application, the boundary of the target tissue can be determined by comparing parameter values of the target tissue with those of the other tissue surrounding it in the ultrasound volume data, where the parameter value may include at least one of the gray value, brightness value, pixel value, or gradient value in the ultrasound volume data. Besides these, the parameter value may also be any other value suitable for image comparison, which can be adjusted according to the actual application and is not limited here.
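As an illustration of the gray-value comparison just described, here is a minimal boundary-detection sketch: voxels are split by a global gray-value threshold and the boundary is taken where the binary mask changes. The threshold choice (mean split) and neighbourhood rule are assumptions, not the preset algorithm of this application.

```python
import numpy as np

def tissue_boundary(volume):
    """Return a boolean mask marking the boundary voxels of the bright region."""
    thresh = volume.mean()              # crude global gray-value threshold
    mask = volume > thresh              # candidate target-tissue voxels
    boundary = np.zeros_like(mask)
    for axis in range(volume.ndim):     # a voxel is boundary if any neighbour
        for shift in (1, -1):           # along any axis falls outside the mask
            boundary |= mask ^ np.roll(mask, shift, axis=axis)
    return boundary & mask

vol = np.zeros((8, 8, 8))
vol[2:6, 2:6, 2:6] = 1.0                # synthetic "target tissue" block
edge = tissue_boundary(vol)
```

For a 4x4x4 bright block this marks the 56 face voxels and leaves the 8 interior voxels unmarked; gradient-based comparison, also named in the embodiment, would replace the threshold with a gradient-magnitude test.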
In one embodiment of this application, the boundary of the target tissue in the ultrasound volume data can also be determined by manual selection by the operator. The processor 105 receives input parameters for the ultrasound volume data and determines the boundary of the target tissue according to them. For example, the ultrasound volume data can be displayed on the display 106, and the operator selects the boundary of the target tissue in the data through an input device, generating the input parameters. Therefore, even when the contrast between the target tissue and the surrounding normal tissue is not significant, the operator can delineate the boundary of the target tissue manually, making the subsequently obtained ultrasound volume image of the target tissue more accurate.
In one embodiment of this application, when there are multiple frames of ultrasound volume data, they can be fused and displayed as 3D ultrasound volume data, in which the operator then manually selects and determines the boundary of the target tissue.
305. Render the ultrasound volume data according to the boundary of the target tissue to obtain an ultrasound volume image of the target tissue.
After the boundary of the target tissue is determined in the ultrasound volume data, the ultrasound volume data is rendered, including adjusting the color, brightness, or gray values of the boundary of the target tissue, to obtain a stereoscopic ultrasound volume image of the target tissue.
In one embodiment of this application, the rendering of the ultrasound volume data may draw on multiple frames of ultrasound volume data, and may use various three-dimensional rendering methods such as volume rendering or surface rendering to obtain the ultrasound volume image of the target tissue, i.e. the ultrasound volume image is a three-dimensional ultrasound image.
This can be understood as follows: after the frame shape of the target tissue is determined in the ultrasound volume data, the target tissue is drawn according to that frame shape, yielding a stereoscopic ultrasound volume image of the target tissue.
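A crude example of one such 3D rendering primitive is a maximum intensity projection (MIP), which collapses the volume to a 2D view by taking the brightest voxel along the viewing direction. The application only names volume rendering and surface rendering generally, so MIP is an illustrative choice, not the method claimed here.

```python
import numpy as np

def mip_render(volume, axis=0):
    """Project a 3-D volume to a 2-D image by taking the brightest voxel
    along one axis (a simple, widely used volume-rendering primitive)."""
    return volume.max(axis=axis)

vol = np.zeros((4, 5, 6))
vol[1, 2, 3] = 0.7                 # a single bright voxel in the volume
img = mip_render(vol, axis=0)      # view along the first axis
```

A full ray-casting renderer would additionally accumulate opacity and color along each ray instead of keeping only the maximum.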
306. Render the photoacoustic volume data to obtain a photoacoustic volume image of the target tissue.
After the photoacoustic volume data is acquired, it is rendered, with its lighting, color, and the like adjusted, to obtain a stereoscopic photoacoustic volume image of the target tissue.
In one embodiment of this application, there are multiple ways to obtain the photoacoustic volume image of the target tissue: the photoacoustic volume data can be rendered by three-dimensional rendering to obtain the photoacoustic volume image, and the specific three-dimensional rendering method may include volume rendering, surface rendering, and other methods, i.e. the photoacoustic volume image is a three-dimensional photoacoustic image.
It should be noted that this application does not limit the order in which the ultrasound volume image and the photoacoustic volume image are obtained; either may be obtained first, i.e. step 305 may be performed before step 306 or vice versa, which can be adjusted according to the actual application scenario and is not limited here.
In one embodiment of this application, the region in which the ultrasound volume data is rendered may be larger than, equal to, or smaller than the region in which the photoacoustic volume data is rendered. The region in which the ultrasound volume data is rendered includes all or part of the target tissue, and the region in which the photoacoustic volume data is rendered may likewise include all or part of the target tissue. Suppose the ultrasound rendering region is region A and the photoacoustic rendering region is region B; then region A may be larger than, equal to, or smaller than region B. For example, during fusion of the photoacoustic and ultrasound volume images, if the photoacoustic volume image lies beneath the ultrasound volume image, region B can be made larger than region A, i.e. region A renders only the region where the target tissue is located, while region B renders not only the region of the target tissue but also regions beyond it; with the rendering differentiated in this way, the features of the target tissue stand out more clearly in the fused image, improving its intuitiveness. Of course, region A may include all or part of the target tissue, as may region B; that is, if only a certain part of the target tissue is to be analyzed in depth, there is no need to render the entire region of the target tissue, only the region where that part is located, which is not specifically limited here.
307. Fuse the ultrasound volume image with the photoacoustic volume image to obtain a fused image of the target tissue.
After the ultrasound volume image and the photoacoustic volume image are obtained, they are fused to obtain the fused image of the target tissue.
Specifically, the photoacoustic volume image may be superimposed onto the ultrasound volume image, or the ultrasound volume image onto the photoacoustic volume image, or the operator may choose which image to superimpose onto which, which can be adjusted according to the actual application scenario.
Further, the pixel values, brightness values, gray values, and the like of the ultrasound volume image may be superimposed on the basis of the photoacoustic volume image, or those of the photoacoustic volume image on the basis of the ultrasound volume image, or the operator may choose which image to use as the basis of the superposition, which can be adjusted according to the actual application scenario.
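The superposition described above can be sketched as alpha blending, with the grayscale ultrasound render as the base layer and the photoacoustic render tinted a separate color. The red tint and the opacity value are illustrative assumptions; the application leaves the blending rule open.

```python
import numpy as np

def fuse(us_gray, pa_gray, alpha=0.6):
    """Blend a grayscale US image (base layer) with a PA image tinted red."""
    us_rgb = np.stack([us_gray] * 3, axis=-1)        # gray -> RGB base
    pa_rgb = np.zeros_like(us_rgb)
    pa_rgb[..., 0] = pa_gray                         # PA signal shown in red
    weight = alpha * (pa_gray > 0)[..., None]        # blend only where PA has signal
    return (1 - weight) * us_rgb + weight * pa_rgb

us = np.full((2, 2), 0.5)                            # flat gray US image
pa = np.array([[0.0, 0.0],
               [0.0, 1.0]])                          # one PA "vessel" pixel
out = fuse(us, pa)
```

Masking the blend to where the photoacoustic signal is nonzero keeps the ultrasound grayscale untouched elsewhere, which matches the idea of showing the two modalities in different colors.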
In one embodiment of this application, the fused image can be displayed on the display 106 after it is obtained.
In one embodiment of this application, during image fusion the ultrasound volume image and the photoacoustic volume image can be set to different colors to better distinguish the target tissue from the structural information inside it.
In one embodiment of this application, after the fused image is obtained, the target tissue can be analyzed according to the parameters of the fused image to obtain an analysis result, which is displayed on the display 106. For example, the distribution of blood vessels, blood oxygen, and the like can typically be obtained from the fused image; the target tissue can be analyzed according to the distribution of blood vessels or blood oxygen inside or around it, the state of the target tissue can be evaluated, and the evaluation result displayed on the display 106, so that the operator can use the evaluation result as a reference to observe the target tissue more comprehensively.
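A toy version of such an analysis step might compute how much of the segmented target tissue carries photoacoustic (e.g. vascular) signal. The "vascularity fraction" metric and its threshold are assumed examples for illustration, not metrics defined in this application.

```python
import numpy as np

def vascularity_fraction(pa_volume, tissue_mask, signal_threshold=0.2):
    """Fraction of target-tissue voxels whose PA signal exceeds a threshold."""
    tissue_voxels = pa_volume[tissue_mask]
    if tissue_voxels.size == 0:
        return 0.0
    return float((tissue_voxels > signal_threshold).mean())

pa = np.zeros((4, 4, 4))
mask = np.zeros((4, 4, 4), dtype=bool)
mask[1:3, 1:3, 1:3] = True          # 8 voxels of segmented target tissue
pa[1, 1, 1] = 0.9                   # two "vessel" voxels inside the tissue
pa[2, 2, 2] = 0.5
frac = vascularity_fraction(pa, mask)
```

A displayed evaluation could then, for instance, compare this fraction against a reference range for the tissue type.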
Therefore, in the embodiments of this application, laser light is emitted toward the target body to obtain photoacoustic volume data, and ultrasound waves are transmitted toward the target body to obtain ultrasound volume data. Typically, the ultrasound volume data can be a grayscale image showing the approximate shape of the target tissue, while the photoacoustic volume data can usually show the distribution inside the target tissue, for example of blood vessels and blood oxygen. The boundary of the target tissue can then be segmented from the ultrasound volume data, and the ultrasound volume data rendered to obtain an ultrasound volume image of the target tissue. The photoacoustic volume data is rendered to obtain a photoacoustic volume image, i.e. an image of the distribution of blood vessels, blood oxygen, and the like inside or around the target tissue. The ultrasound volume image and the photoacoustic volume image are superimposed to obtain a fused image of the target tissue, so that the fused image simultaneously shows the 3D image corresponding to the boundary of the target tissue and the distribution inside or around it. The fused image therefore provides a more comprehensive three-dimensional display of the target tissue, allowing the operator to observe it more comprehensively and intuitively.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units is only a division by logical function, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. Furthermore, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. On the basis of this understanding, the technical solution of this application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, removable hard disk, read-only memory (ROM), random access memory (RAM), magnetic disk, or optical disc.
It should be noted that, in practical applications, the target body may be a human body, an animal, or the like. The target tissue may be the face, spine, heart, uterus, or pelvic floor, or other parts of human tissue such as the brain, bones, liver, or kidneys, which is not specifically limited here.
The above are only specific embodiments of this application, but the scope of protection of this application is not limited thereto; any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the scope of protection of this application. Therefore, the scope of protection of this application shall be subject to the scope of protection of the claims.

Claims (18)

  1. An imaging method, comprising:
    emitting laser light toward a target body and receiving a photoacoustic signal returned from the target body;
    transmitting ultrasound waves toward the target body and receiving ultrasound echoes returned from the target body to obtain an ultrasound echo signal;
    obtaining photoacoustic volume data from the photoacoustic signal and ultrasound volume data from the ultrasound echo signal;
    determining a boundary of a target tissue in the ultrasound volume data;
    rendering the ultrasound volume data according to the boundary of the target tissue to obtain an ultrasound volume image of the target tissue;
    rendering the photoacoustic volume data to obtain a photoacoustic volume image of the target tissue;
    fusing the ultrasound volume image with the photoacoustic volume image to obtain a fused image of the target tissue.
  2. The method according to claim 1, wherein determining the boundary of the target tissue in the ultrasound volume data comprises:
    determining the boundary of the target tissue by comparing parameter values of the target tissue with those of other tissue surrounding the target tissue, wherein the parameter values comprise at least one of a gray value, a brightness value, a pixel value, or a gradient value in the ultrasound volume data.
  3. The method according to claim 1, wherein determining the boundary of the target tissue in the ultrasound volume data comprises:
    receiving input parameters for the ultrasound volume data;
    determining the boundary of the target tissue according to the input parameters.
  4. The method according to any one of claims 1-3, wherein rendering the ultrasound volume data according to the boundary of the target tissue to obtain the ultrasound volume image of the target tissue comprises:
    rendering the ultrasound volume data by three-dimensional rendering according to the boundary of the target tissue to obtain the ultrasound volume image.
  5. The method according to any one of claims 1-4, wherein rendering the photoacoustic volume data to obtain the photoacoustic volume image of the target tissue comprises:
    rendering the photoacoustic volume data by three-dimensional rendering to obtain the photoacoustic volume image.
  6. The method according to any one of claims 1-5, wherein the method further comprises:
    displaying the fused image.
  7. The method according to any one of claims 1-6, wherein the method further comprises:
    analyzing the target tissue according to the fused image to obtain tissue distribution data of the target tissue.
  8. The method according to any one of claims 1-7, wherein
    a region in which the ultrasound volume data is rendered is larger than, equal to, or smaller than a region in which the photoacoustic volume data is rendered, wherein the region in which the ultrasound volume data is rendered comprises all or part of the target tissue, and the region in which the photoacoustic volume data is rendered comprises all or part of the target tissue.
  9. The method according to any one of claims 1-8, wherein fusing the ultrasound volume image with the photoacoustic volume image to obtain the fused image of the target tissue comprises:
    superimposing the ultrasound volume image onto the photoacoustic volume image;
    or, superimposing the photoacoustic volume image onto the ultrasound volume image.
  10. An imaging system, comprising: a laser, a probe, a transmitting circuit, a receiving circuit, and a processor;
    the laser is configured to generate laser light for irradiating a target body, the laser light is coupled to the probe through an optical fiber bundle, and the laser light is emitted toward the target body through the optical fiber bundle;
    the receiving circuit is configured to control the probe to receive a photoacoustic signal returned from the target body;
    the transmitting circuit is configured to control the probe to transmit ultrasound waves toward the target body;
    the receiving circuit is configured to control the probe to receive ultrasound echoes returned from the target body to obtain an ultrasound echo signal;
    the processor is configured to generate a control signal and send it to the laser to control the laser to generate the laser light;
    the processor is further configured to obtain photoacoustic volume data from the photoacoustic signal and ultrasound volume data from the ultrasound echo signal; determine a boundary of a target tissue in the ultrasound volume data; render the ultrasound volume data according to the boundary of the target tissue to obtain an ultrasound volume image of the target tissue; render the photoacoustic volume data to obtain a photoacoustic volume image of the target tissue; and fuse the ultrasound volume image with the photoacoustic volume image to obtain a fused image of the target tissue.
  11. The imaging system according to claim 10, wherein the processor is specifically configured to:
    determine the boundary of the target tissue by comparing parameter values of the target tissue with those of other tissue surrounding the target tissue, wherein the parameter values comprise at least one of a gray value, a brightness value, a pixel value, or a gradient value in the ultrasound volume data.
  12. The imaging system according to claim 10, wherein the processor is specifically configured to:
    receive input parameters for the ultrasound volume data;
    determine the boundary of the target tissue according to the input parameters.
  13. The imaging system according to any one of claims 10-12, wherein the processor is specifically configured to:
    render the target tissue by three-dimensional rendering according to the boundary of the target tissue to obtain the ultrasound volume image.
  14. The imaging system according to any one of claims 10-13, wherein the processor is specifically configured to:
    render the photoacoustic volume data by three-dimensional rendering to obtain the photoacoustic volume image.
  15. The imaging system according to any one of claims 10-14, wherein the imaging system further comprises: a display;
    the display is configured to display the fused image.
  16. The imaging system according to any one of claims 10-15, wherein
    the processor is further configured to analyze the target tissue according to the fused image to obtain tissue distribution data of the target tissue.
  17. The imaging system according to any one of claims 10-16, wherein
    a region in which the ultrasound volume data is rendered is larger than, equal to, or smaller than a region in which the photoacoustic volume data is rendered, wherein the region in which the ultrasound volume data is rendered comprises all or part of the target tissue, and the region in which the photoacoustic volume data is rendered comprises all or part of the target tissue.
  18. The imaging system according to any one of claims 10-17, wherein the processor is specifically configured to superimpose the ultrasound volume image onto the photoacoustic volume image; or, superimpose the photoacoustic volume image onto the ultrasound volume image.
PCT/CN2018/111679 2018-10-24 2018-10-24 Imaging method and imaging system WO2020082269A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/111679 WO2020082269A1 (zh) 2018-10-24 2018-10-24 Imaging method and imaging system
CN201880055971.2A CN111727013B (zh) 2018-10-24 2018-10-24 Imaging method and imaging system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/111679 WO2020082269A1 (zh) 2018-10-24 2018-10-24 Imaging method and imaging system

Publications (1)

Publication Number Publication Date
WO2020082269A1 true WO2020082269A1 (zh) 2020-04-30

Family

ID=70330893

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/111679 WO2020082269A1 (zh) 2018-10-24 2018-10-24 Imaging method and imaging system

Country Status (2)

Country Link
CN (1) CN111727013B (zh)
WO (1) WO2020082269A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113243889A (zh) * 2020-08-10 2021-08-13 北京航空航天大学 Method and device for acquiring information of biological tissue

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115248446A (zh) * 2021-04-28 2022-10-28 中慧医学成像有限公司 Three-dimensional ultrasound imaging method and system based on lidar

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101053521A (zh) * 2006-04-12 2007-10-17 株式会社东芝 Medical image display apparatus
CN104545991A (zh) * 2013-10-22 2015-04-29 三星电子株式会社 Wideband ultrasound probe for photoacoustic and ultrasound images
CN104661596A (zh) * 2012-09-20 2015-05-27 株式会社东芝 Image processing apparatus, X-ray diagnostic apparatus, and display method
CN104939864A (zh) * 2014-03-28 2015-09-30 日立阿洛卡医疗株式会社 Diagnostic image generation apparatus and diagnostic image generation method
CN105431091A (zh) * 2013-08-01 2016-03-23 西江大学校产学协力団 Apparatus and method for acquiring a fused image
US20170209119A1 (en) * 2016-01-27 2017-07-27 Canon Kabushiki Kaisha Photoacoustic ultrasonic imaging apparatus
CN107223035A (zh) * 2017-01-23 2017-09-29 深圳迈瑞生物医疗电子股份有限公司 Imaging system and method, and ultrasound imaging system

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5432708B2 (ja) * 2006-06-23 2014-03-05 コーニンクレッカ フィリップス エヌ ヴェ Timing control device for a combined photoacoustic and ultrasound imager
JP5523681B2 (ja) * 2007-07-05 2014-06-18 株式会社東芝 Medical image processing apparatus
EP2182382A1 (en) * 2008-11-03 2010-05-05 Medison Co., Ltd. Ultrasound system and method for providing three-dimensional ultrasound images
JP5655021B2 (ja) * 2011-03-29 2015-01-14 富士フイルム株式会社 Photoacoustic imaging method and apparatus
JP6058290B2 (ja) * 2011-07-19 2017-01-11 東芝メディカルシステムズ株式会社 Image processing system, apparatus, method, and medical image diagnostic apparatus
CN106214130A (zh) * 2016-08-31 2016-12-14 北京数字精准医疗科技有限公司 Handheld multimodal fusion imaging system and method for optical and ultrasound imaging
CN107174208A (zh) * 2017-05-24 2017-09-19 哈尔滨工业大学(威海) Photoacoustic imaging system and method suitable for peripheral vascular imaging
CN108403082A (zh) * 2018-01-24 2018-08-17 苏州中科先进技术研究院有限公司 Biological tissue imaging system and imaging method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101053521A (zh) * 2006-04-12 2007-10-17 株式会社东芝 Medical image display apparatus
CN104661596A (zh) * 2012-09-20 2015-05-27 株式会社东芝 Image processing apparatus, X-ray diagnostic apparatus, and display method
CN105431091A (zh) * 2013-08-01 2016-03-23 西江大学校产学协力団 Apparatus and method for acquiring a fused image
CN104545991A (zh) * 2013-10-22 2015-04-29 三星电子株式会社 Wideband ultrasound probe for photoacoustic and ultrasound images
CN104939864A (zh) * 2014-03-28 2015-09-30 日立阿洛卡医疗株式会社 Diagnostic image generation apparatus and diagnostic image generation method
US20170209119A1 (en) * 2016-01-27 2017-07-27 Canon Kabushiki Kaisha Photoacoustic ultrasonic imaging apparatus
CN107223035A (zh) * 2017-01-23 2017-09-29 深圳迈瑞生物医疗电子股份有限公司 Imaging system and method, and ultrasound imaging system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113243889A (zh) * 2020-08-10 2021-08-13 北京航空航天大学 Method and device for acquiring information of biological tissue
CN113243889B (zh) * 2020-08-10 2022-05-10 北京航空航天大学 Method and device for acquiring information of biological tissue

Also Published As

Publication number Publication date
CN111727013A (zh) 2020-09-29
CN111727013B (zh) 2023-12-22

Similar Documents

Publication Publication Date Title
US11323625B2 (en) Subject information obtaining apparatus, display method, program, and processing apparatus
EP1614387B1 (en) Ultrasonic diagnostic apparatus, image processing apparatus and image processing method
KR20070069322A (ko) 병변조직을 검출하는 초음파 진단 시스템 및 방법
KR20060100283A (ko) 초음파 화상 생성 방법 및 초음파 진단 장치
US10123780B2 (en) Medical image diagnosis apparatus, image processing apparatus, and image processing method
CN104981208A (zh) 超声波诊断装置及其控制程序
US8454515B2 (en) Ultrasonic diagnostic apparatus and ultrasonic diagnostic method
US20150173721A1 (en) Ultrasound diagnostic apparatus, medical image processing apparatus and image processing method
WO2018008439A1 (en) Apparatus, method and program for displaying ultrasound image and photoacoustic image
WO2020082269A1 (zh) 一种成像方法以及成像系统
WO2007072490A1 (en) An operating mode for ultrasound imaging systems
CN110338754B (zh) 光声成像系统及方法、存储介质及处理器
CN109414254A (zh) 控制设备、控制方法、控制系统及程序
WO2020082270A1 (zh) 一种成像方法以及成像系统
US20150105658A1 (en) Ultrasonic imaging apparatus and control method thereof
US20200113437A1 (en) Systems and methods for multi-modality imaging
WO2020082265A1 (zh) 一种成像方法以及成像系统
EP3329843A1 (en) Display control apparatus, display control method, and program
JP5354885B2 (ja) 超音波診断システム
KR101861842B1 (ko) 복수의 주파수를 이용한 고강도 집속 초음파 제어방법과 그를 위한 고강도 집속 초음파 치료 장치
JP4909132B2 (ja) 光トモグラフィ装置
US11965960B2 (en) Ultrasound imaging apparatus and control method thereof
WO2022080228A1 (ja) 超音波診断装置および超音波診断装置の表示方法
KR20070109292A (ko) 프로브의 트랜스듀서를 제어하는 초음파 시스템 및 방법
KR101538423B1 (ko) 초음파 영상 장치 및 그 제어 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18937642

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18937642

Country of ref document: EP

Kind code of ref document: A1