WO2022202051A1 - Biological observation system, biological observation method, and irradiation device - Google Patents


Info

Publication number
WO2022202051A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
light
pixel
observation
living body
Prior art date
Application number
PCT/JP2022/007134
Other languages
French (fr)
Japanese (ja)
Inventor
Hiroshi Yoshida (吉田 浩)
Original Assignee
Sony Group Corporation (ソニーグループ株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Group Corporation
Publication of WO2022202051A1 publication Critical patent/WO2022202051A1/en

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 10/00 Other methods or instruments for diagnosis, e.g. instruments for taking a cell sample, for biopsy, for vaccination diagnosis; Sex determination; Ovulation-period determination; Throat striking implements
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B 5/026 Measuring blood flow
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B 5/026 Measuring blood flow
    • A61B 5/0285 Measuring or recording phase velocity of blood waves

Definitions

  • the present disclosure relates to a living body observation system, a living body observation method, and an irradiation device.
  • the present disclosure provides a living body observation system, a living body observation method, and an irradiation device capable of acquiring information on deeper parts of a living body.
  • an irradiation unit that irradiates an observation site of a living body with coherent light through a diaphragm having a light shielding portion that shields light and an opening that transmits light; a pixel signal acquisition unit that acquires pixel signals including a region of the observation site irradiated with the coherent light; an image generating unit that generates a first image based on pixel signals in a dark area not irradiated with the coherent light in the pixel signals;
  • a living body observation system comprising:
  • the light shielding part may be made of a metal that does not transmit light.
  • the pixel signals of the dark region may be based on light that is part of the coherent light that is directly applied to the living body and is scattered by blood flow under the tissue of the observation site.
  • a pixel range setting unit for setting a predetermined pixel value range based on a profile, which is a pixel row with respect to a positional change on the image based on the pixel signal
  • the image generator may generate the first image based on the predetermined pixel value range.
  • It may further include an image generation unit that generates a second image based on pixel signals in the bright area irradiated with the coherent light.
  • a speckle calculator that calculates speckle data based on at least one of the first image and the second image may be further provided.
  • the speckle data includes speckle contrast
  • the speckle calculator may generate an observation image based on the speckle contrast.
  • the pixel signal acquisition unit acquires a plurality of pixel signals obtained by changing the region of the observation site irradiated with the coherent light
  • the first image generator may generate the first image based on the plurality of pixel signals.
  • the apparatus may further include an irradiation control unit that controls the irradiation intensity of the coherent light based on the speckle contrast value in the observation image.
  • the irradiation control unit may control the irradiation intensity of the coherent light based on pixel values of at least one of the dark region and the bright region in the pixel signal.
  • the aperture is a variable aperture, and further includes an aperture control unit that sets a width of an aperture and a width between apertures based on pixel values in the dark region and the bright region in the pixel signal.
  • the aperture control unit may set the width between the openings so that a ratio of predetermined pixel values in the dark area and the bright area is within a predetermined range.
  • a blood flow meter device using the biological observation system may be used.
  • a microscope device using the biological observation system may be used.
  • an endoscope device using the living body observation system may be used.
  • a living body observation method is provided.
  • a light source that generates coherent light
  • a movable diaphragm having a light shielding portion that shields the coherent light and an opening that transmits the light, The width of the light shielding portion and the width of the opening are determined based on pixel signals of a dark region not directly irradiated with the coherent light in an image including the region of the observation site in the living body irradiated with the coherent light.
  • An illumination device is provided, at least one of which is controlled.
  • FIG. 1 is a block diagram showing a configuration example of an observation system according to an embodiment of the present technology
  • FIG. 4 is a diagram showing a configuration example of a laser irradiation unit; Top view of the aperture.
  • FIG. 4 is a schematic diagram showing the distribution of laser light on the exit-side surface of the diaphragm
  • FIG. 2 is a block diagram showing the configuration of an irradiation control unit
  • FIG. 4 is a diagram showing the relationship between the speckle contrast and the average luminance value of pixel signals
  • FIG. 2 is a block diagram showing the configuration of an arithmetic processing unit
  • FIG. 4 is a diagram showing an example of a pixel value profile generated by a profile generator;
  • FIG. 4 is a diagram schematically showing laser light in a direct region and reflected and scattered light;
  • FIG. 4 is a diagram schematically showing laser light in a global area and reflected and scattered light;
  • FIG. 4 is a diagram schematically showing a processing example of a pixel value range generation unit;
  • FIG. 4 is a diagram showing a correspondence relationship between an extracted pixel value range and a two-dimensional image acquired by an image acquiring unit;
  • FIG. 5 is a diagram showing an example of an image generated by the first image generation unit based on the extracted pixel value range;
  • FIG. 10 is a diagram showing an example of a D (direct image) image generated by a second image generation unit;
  • FIG. 4 is a diagram schematically showing brightness values of pixels 43 included in a 3 × 3 cell 42 by light and shade;
  • FIG. 4 is a schematic diagram for explaining a calculation example of speckle contrast within an effective area;
  • FIG. 4 is a diagram for explaining the characteristics of a speckle pattern;
  • FIG. 4 shows a G (global) image of a comparative example and a G (global) image according to the present disclosure;
  • 4 is a flowchart showing an example of processing of the observation system;
  • FIG. 10 is a side cross-sectional view of a diaphragm according to a second embodiment;
  • A block diagram of the irradiation control unit according to the second embodiment;
  • FIGS. 4A and 4B are diagrams for explaining a control example of a diaphragm control unit;
  • FIG. 4 is another diagram for explaining a control example of the aperture control unit;
  • FIG. 7 is a diagram for explaining still another control example of the aperture control unit;
  • 5 is a flow chart showing an example of processing by an aperture control unit;
  • FIG. 1 is a block diagram showing a configuration example of a biological observation system according to an embodiment of the present technology.
  • the living body observation system 100 is used, for example, for observation of an operating field in surgery, observation of the inside of a patient's body in medical diagnosis, and the like. More specifically, the living body observation system 100 is used for a blood flow meter device, a microscope device, an endoscope device, and the like.
  • the present technology can be applied when observing any living tissue.
  • This biological observation system 100 includes a laser irradiation unit 10, a camera 20, and a controller 30.
  • the laser irradiation unit 10 is arranged to face the observed region 2 of the patient, and irradiates the observed region 2 with laser light 11, which is coherent light.
  • FIG. 1 schematically shows a laser beam 11 irradiated toward a patient's hand (observation site 2).
  • the observation site 2 corresponds to a living tissue in this embodiment.
  • the laser irradiation unit 10 according to the present embodiment corresponds to the irradiation device.
  • FIG. 2A is a diagram showing a configuration example of the laser irradiation unit 10.
  • FIG. 2B is a top view of the diaphragm 102.
  • the laser irradiation unit 10 has a laser 90 and a diaphragm 102.
  • a laser 90 emits highly coherent light through an illumination optical system.
  • the diaphragm 102 has a light blocking portion 102a and a slit-shaped opening 102b.
  • the light blocking portion 102a is made of metal, for example, and does not transmit light.
  • the opening 102b is configured to transmit light.
  • FIG. 3 is a schematic diagram showing the distribution of the laser light 11 on the surface of the aperture 102 on the exit side.
  • the vertical axis indicates brightness, and the horizontal axis corresponds to the position of a line that crosses the diaphragm 102.
  • As shown in FIG. 3, the light intensity at the light shielding portion 102a of the diaphragm 102 can be regarded as 0 except at the boundary portion of the opening.
  • As shown in FIG. 2A, a striped optical image of bright, dark, and bright portions is projected onto the surface of the observation site 2. No laser light is projected onto the dark portion, and the projected light amount of the laser light onto the dark portion is zero.
  • In the following, an example with one opening is described, but the present invention is not limited to this.
  • In a conventional configuration, dark areas are also irradiated with laser light that is lower in intensity than the laser light irradiated onto bright areas; this leakage can be expressed as the ratio b = (brightness of dark area)/(brightness of bright area). Since the light shielding portion 102a according to the present embodiment does not transmit light, the projected light amount in the dark region can always be zero.
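As an illustrative sketch (not part of the patent text), the ratio b above can be estimated from a captured frame once the bright-stripe and dark-stripe pixel masks are known; the function and variable names here are hypothetical, and numpy is assumed:

```python
import numpy as np

def brightness_ratio(frame, bright_mask, dark_mask):
    """b = (mean brightness of the dark area) / (mean brightness of the bright area).

    With an ideally opaque light shielding portion, the dark-area mean,
    and therefore b, approaches zero.
    """
    bright_mean = float(frame[bright_mask].mean())
    dark_mean = float(frame[dark_mask].mean())
    return dark_mean / bright_mean
```

In practice the two masks could come, for example, from thresholding the projected stripe pattern.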
  • the camera 20 has a lens section 21 and an imaging section 22 connected to the lens section 21 .
  • the camera 20 is arranged so that the lens portion 21 faces the observed region 2 of the patient 1 and images the observed region 2 irradiated with the laser beam 11 .
  • the camera 20 captures striped images of bright areas, dark areas, and bright areas on the surface of the observation site 2 .
  • the camera 20 is configured as, for example, a CHU (Camera Head Unit), and is connected to the controller 30 via a predetermined interface or the like.
  • the camera 20 corresponds to an imaging system.
  • the lens unit 21 has an optical zoom function.
  • the lens unit 21 generates an optical image of the observed region 2 that is optically enlarged or reduced by controlling imaging parameters such as an F number (aperture value) and optical magnification.
  • a specific configuration for realizing the optical zoom function is not limited, and for example, automatic zooming by electronic control, manual zooming, or the like may be performed as appropriate.
  • the imaging unit 22 captures an optical image generated by the lens unit 21 and generates pixel signals of the observed region 2 .
  • the pixel signal is a signal capable of forming an image.
  • the pixel signal includes, for example, information such as the luminance value of each pixel. That is, the imaging device of the imaging unit 22 detects light from each point of the observed region 2 (subject) within the imaging range of the imaging unit 22 and converts it into a pixel signal.
  • the pixel signal is divided into a Direct component and a Global component.
  • the pixel signal detected by direct illumination of the point of interest is defined as the Direct component.
  • a pixel signal detected by illuminating the point of interest via another point is defined as a global component.
  • the type, format, etc. of the pixel signal are not limited, and any format capable of forming a moving image or a still image, for example, may be used.
  • the imaging unit 22 for example, an image sensor such as a CMOS (Complementary Metal-Oxide Semiconductor) sensor or a CCD (Charge Coupled Device) sensor is used.
  • the controller 30 has hardware necessary for configuring a computer, such as a CPU (Central Processing Unit), ROM (Read Only Memory), RAM (Random Access Memory), and HDD (Hard Disk Drive).
  • the controller 30 corresponds to a control device.
  • Each functional block shown in FIG. 1 is realized by the CPU loading the program according to the present technology stored in the ROM or HDD into the RAM and executing it. These functional blocks execute the control method according to the present technology.
  • the program is installed in the controller 30 via various recording media, for example. Alternatively, program installation may be executed via the Internet or the like.
  • the specific configuration of the controller 30 is not limited, and devices such as FPGA (Field Programmable Gate Array), image processing IC (Integrated Circuit), and other ASIC (Application Specific Integrated Circuit) may be used.
  • the controller 30 has an irradiation control unit 31, an image acquisition unit 32, a camera control unit 33, a UI acquisition unit 34, a block control unit 35, and an arithmetic processing unit 36 as functional blocks.
  • a processing size table 38 is stored in a storage unit 37 constituted by a ROM of the controller 30 or the like. Dedicated hardware may be appropriately used to implement each functional block.
  • FIG. 4A is a block diagram showing the configuration of the irradiation control unit 31.
  • the irradiation control section 31 has a light source control section 310 and a position control section 312 .
  • the light source control unit 310 controls the irradiation intensity of the laser light 11 emitted from the laser irradiation unit 10 and the like. For example, when observing a D (direct image) image, the light source control unit 310 controls the light intensity of the laser 90 so that the pixel value of the bright area (see FIG. 2A) of the pixel signal generated by the camera 20 becomes a predetermined value. Control.
  • the light amount of the laser 90 is controlled so that the pixel value of the dark region (see FIG. 2A) of the pixel signal generated by the camera 20 becomes a predetermined value.
  • the light source control unit 310 can control the laser 90 to decrease the light intensity of the light source when the predetermined area determined for observation purposes is too bright, and to increase the light intensity of the light source when it is too dark.
  • FIG. 4B is a diagram showing the relationship between the speckle contrast L40 and the average luminance value of pixel signals generated by the camera 20.
  • the vertical axis indicates the speckle contrast
  • the horizontal axis indicates the average luminance value.
  • the speckle contrast L40 is calculated by the speckle calculator 370, which will be described later. As shown in FIG. 4B, the speckle contrast L40 varies in value depending on the average luminance value of the pixel signal. Therefore, the light source control section 310 may control the irradiation intensity of the laser light 11 so that the average luminance value of the pixel signal falls within a predetermined range.
  • the light source control unit 310 may acquire information on the irradiation intensity of the laser light 11 specified by an operator who operates the biological observation system 100, for example.
  • the light source control unit 310 outputs an instruction to the laser irradiation unit 10 to output the laser light 11 with the designated irradiation intensity. This makes it possible to irradiate the laser beam 11 with the irradiation intensity desired by the operator.
  • the method for controlling the irradiation intensity of the laser light 11 is not limited.
  • the irradiation intensity of the laser light 11 may be appropriately controlled according to the exposure time of the camera 20 or the like.
  • the light source control unit 310 may appropriately control not only the irradiation intensity of the laser light 11 but also arbitrary parameters such as the wavelength of the laser light 11 and the irradiation area.
  • the position control unit 312 can control a driving unit (not shown) to move the laser irradiation unit 10 by a predetermined distance at predetermined time intervals.
  • the image acquisition unit 32 acquires pixel signals generated by the camera 20 . That is, the image acquisition unit 32 acquires pixel signals of the observed region 2 captured by irradiation with the laser light 11 .
  • the pixel signals acquired by the image acquisition section 32 are supplied to the arithmetic processing section 36 .
  • the image acquisition section 32 corresponds to a pixel signal acquisition section.
  • the camera control unit 33 is connected to the camera 20 via an interface or the like, and controls the operation of the camera 20.
  • the camera control unit 33 outputs to the camera 20 a signal designating, for example, the zoom amount (optical magnification), aperture, or exposure time of the camera 20 .
  • the camera 20 images the observed region 2 based on the signal output from the camera control section 33 . This allows the operation of camera 20 to be electronically controlled.
  • the camera control unit 33 acquires imaging parameters for imaging the observed region 2 .
  • the imaging parameters include the F-number (aperture value) and optical magnification of the lens unit 21 (camera 20).
  • the imaging parameters acquired by the camera control section 33 are output to the block control section 35 .
  • the imaging parameters correspond to imaging conditions.
  • the UI acquisition unit 34 acquires instructions and the like input by the operator via a user interface (UI: User Interface) (not shown).
  • a display device such as a display and an input device such as a mouse and a keyboard are appropriately used.
  • the operator inputs instructions using the input device while looking at the operation screen displayed on the display device, for example.
  • the type of user interface is not limited, and for example, a display provided with a touch sensor, a foot switch, a control switch at hand, or the like may be used.
  • the block control unit 35 has a predicted speckle size calculation unit 40 and a processing size control unit 41 .
  • the predicted speckle size calculator 40 calculates the speckle size based on the imaging parameters input from the camera controller 33 .
  • the speckle size is the size of individual spots forming speckles.
  • the speckle size changes according to the imaging system that images the speckle pattern.
  • The speckle size d is given by d = 1.22 × (1 + M) × λ × F#, where λ is the wavelength of the irradiated laser light 11, M is the optical magnification, and F# is the F-number of the imaging system.
  • Hereinafter, this formula may be described as a speckle size calculation formula.
  • the predicted speckle size calculation unit 40 calculates the speckle size d using the speckle size calculation formula based on the F number F# and the optical magnification M included in the imaging parameters. Therefore, the predicted speckle size calculator 40 can calculate the speckle size d in the captured speckle pattern. The calculated speckle size d is output to the processing size control section 41 .
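As a sketch of this step, one common expression for the subjective speckle size is d = 1.22 (1 + M) λ F#; since the patent's own formula is not reproduced in this text, this particular expression is an assumption:

```python
def speckle_size(wavelength, f_number, magnification):
    """Predicted speckle size d = 1.22 * (1 + M) * lambda * F#.

    A common subjective-speckle expression, assumed here because the
    patent's exact speckle size calculation formula is not shown in
    this text. The units of d follow the units of `wavelength`.
    """
    return 1.22 * (1.0 + magnification) * wavelength * f_number
```

For example, a 633 nm laser at F# = 8 and unit magnification gives d on the order of 12 µm, i.e. several pixels on a typical sensor.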
  • the processing size control unit 41 controls the size of a cell (cell size), which is a pixel block.
  • a cell is, for example, a rectangular block composed of m × n pixels, and is used when calculating the speckle contrast from the pixel signal.
  • the number of pixels (horizontal × vertical), m × n, corresponds to the cell size.
  • the shape and the like of the cells are not limited, and cells of any shape may be used, for example. Cell and speckle contrast will be described later.
  • the processing size control unit 41 controls the cell size based on the predicted speckle size d calculated by the speckle size calculation unit 40 .
  • the processing size control unit 41 also controls the cell size according to the image quality mode acquired by the UI acquisition unit 34 . Therefore, the cell size controlled by the processing size control unit 41 is a size corresponding to the speckle size d and the image quality mode.
  • the processing size table 38 stored in the storage unit 37 is used when controlling the cell size.
  • the processing size table 38 records the correspondence between the speckle size d, the image quality mode, and the cell size.
  • the processing size control unit 41 acquires the cell size value corresponding to the calculated speckle size d and the specified image quality mode from the processing size table 38 . This makes it possible to easily control the cell size.
  • the processing size table 38 corresponds to a control table.
  • the block control unit 35 calculates the speckle size based on the imaging parameters, and controls the cell size based on the calculated speckle size. That is, the block control unit 35 controls the cell size based on imaging parameters for imaging the observed region 2 .
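The table lookup described above can be sketched as follows; the bucket boundary, mode names, and cell sizes are hypothetical placeholders, since the actual contents of the processing size table 38 are not given in this text:

```python
# Hypothetical contents standing in for the processing size table 38:
# (speckle-size bucket, image quality mode) -> cell size (m, n).
PROCESSING_SIZE_TABLE = {
    ("small", "high"): (3, 3),
    ("small", "low"): (5, 5),
    ("large", "high"): (5, 5),
    ("large", "low"): (7, 7),
}

def cell_size(speckle_size_px, quality_mode, threshold_px=2.0):
    """Return the cell size for a predicted speckle size and image quality mode."""
    bucket = "small" if speckle_size_px < threshold_px else "large"
    return PROCESSING_SIZE_TABLE[(bucket, quality_mode)]
```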
  • FIG. 5 is a block diagram showing the configuration of the arithmetic processing unit 36.
  • the arithmetic processing unit 36 calculates speckle data based on the pixel signals acquired by the image acquisition unit 32, using cells whose size is controlled by the processing size control unit 41 (block control unit 35). The arithmetic processing unit 36 includes a storage unit 360, a profile generation unit 362, a pixel value range generation unit 364, a first image generation unit 366, a second image generation unit 368, and a speckle calculation unit 370. Note that the first image generation unit 366 and the second image generation unit 368 according to this embodiment correspond to the image generation unit.
  • the storage unit 360 stores the pixel signals acquired by the image acquiring unit 32 as a two-dimensional image. Note that the storage unit 360 may be configured within the storage unit 37 .
  • FIG. 6 is a diagram showing an example of a pixel value profile generated by the profile generator 362.
  • The upper diagram is an example of a pixel value profile.
  • the vertical axis indicates pixel values, and the horizontal axis indicates positions on the image.
  • the lower diagram shows a two-dimensional image including bright portions and dark portions generated based on the pixel signals acquired by the image acquisition section 32 .
  • the image of the dark portion corresponding to the light shielding portion 102a also has pixel values. This is because the laser light that is directly applied to the bright area is reflected and scattered within the tissue, emitted from the dark area, and captured.
  • the G area (global area) is an area where the laser light that entered from the D area (direct area) is reflected and scattered and then emitted.
  • FIG. 7A and B are schematic diagrams showing cross sections of skin tissue.
  • layer A is the stratum corneum and epidermis
  • layer B is the upper layer of the dermis, for example.
  • Capillaries are present in the upper layers of the dermis.
  • FIG. 7A is a diagram schematically showing the laser light 11 in the direct area and the reflected and scattered light EA.
  • FIG. 7B is a diagram schematically showing the laser light 11 in the global area and the reflected and scattered light EB.
  • In the direct area, the laser beam 11 is also reflected from the A layer, so the weak light reflected and scattered from the B layer is buried in the light reflected and scattered from the A layer. Therefore, in the direct area, the reflected and scattered light EA from the A layer is imaged as the main component.
  • the reflected/scattered light EA corresponds to, for example, light that is part of the coherent light that is directly applied to the observation site 2 , which is a living body, and is scattered by the blood flow under the tissue of the observation site 2 .
  • the global area is an area that is not directly irradiated with the laser light 11, and is an area where the light incident from the direct area is reflected and scattered. Therefore, in the global region, the light reflected and scattered from the A layer is reduced, and the reflected and scattered light EB reflected and scattered from the B layer is imaged as the main component.
  • FIG. 8 is a diagram schematically showing a processing example of the pixel value range generation unit 364.
  • The upper diagram is a schematic diagram showing the distribution of the laser light 11 on the output side surface of the diaphragm 102.
  • the vertical axis indicates brightness, and the horizontal axis corresponds to the position of a line that crosses the diaphragm 102.
  • The lower diagram shows a partial area of the pixel value profile generated by the profile generator 362.
  • FIG. 9 is a diagram showing the correspondence relationship between the extracted pixel value range G9 set by the pixel value range generation unit 364 and the two-dimensional image acquired by the image acquisition unit 32.
  • The upper diagram shows the pixel value profile generated by the profile generator 362 and the extracted pixel value range G9 set by the pixel value range generation unit 364.
  • The lower diagram shows the two-dimensional image acquired by the image acquisition unit 32, and line L9 indicates the position where the pixel value profile is generated.
  • FIG. 10 is a diagram showing an example of an image generated by the first image generator 366 based on the extracted pixel value range G9 (FIG. 9).
  • FIG. 10(a) shows an example of an image cut out based on the extracted pixel value range G9 (FIG. 9).
  • the first image generation unit 366 cuts out, from the two-dimensional image acquired by the image acquisition unit 32, an element image in the pixel value range of the pixel value Lg corresponding to the extracted pixel value range G9 (FIG. 9).
  • Although the image area on the right side of the direct area is extracted in FIG. 10(a), the image area on the left side of the direct area may also be extracted.
  • the first image generation section stores the generated elemental images in the storage section 360 .
  • FIG. 10(b) is a diagram showing an example of a G (global image) image synthesized by the first image generation unit 366.
  • the first image generator acquires the generated elemental images from the storage unit 360 and synthesizes the entire image as a G (global image) image.
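The cut-out by pixel value range can be sketched as a simple mask operation; numpy is assumed and the function name is hypothetical:

```python
import numpy as np

def extract_by_pixel_range(image, lo, hi):
    """Keep only pixels whose value lies in the extracted pixel value
    range [lo, hi]; all other pixels (e.g. the directly irradiated
    bright stripe) are set to zero. Element images cut out this way
    from shifted frames can then be composited into a G (global) image."""
    out = np.zeros_like(image)
    mask = (image >= lo) & (image <= hi)
    out[mask] = image[mask]
    return out
```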
  • FIG. 11 is a diagram showing an example of a D (direct image) image generated by the second image generating section 368.
  • the second image generator 368 synthesizes the two-dimensional image acquired by the image acquirer 32, excluding the image below the extraction pixel value range G9 set by the pixel value range generator 364.
  • the speckle data is data related to the speckle pattern of the observation site 2.
  • the speckle data is calculated by appropriately processing information such as the luminance value of each pixel included in the pixel signal, for example.
  • the speckle calculator 370 calculates the speckle contrast as the speckle data.
  • the average, variance, standard deviation, etc. of luminance values in the speckle pattern may be calculated as the speckle data.
  • the calculated speckle data can be output to the processing size control unit 41 and the processing size table 38, and used for calibration of the processing size table 38 and the like.
  • the speckle calculator 370 also generates an observation image of the observed region 2 based on the calculated speckle contrast.
  • the generated observation image is output to a display device such as a display (not shown).
  • the speckle calculator 370 functions as a calculator and a generator.
  • FIGS. 12 and 13 are schematic diagrams for explaining an example of speckle contrast calculation.
  • In FIG. 12, luminance values of pixels 43 included in a 3 × 3 cell 42 are schematically illustrated by light and shade.
  • the speckle contrast Cs is given by the following formula using the standard deviation σ and the average value A of the luminance values I(m, n) in the cell 42:
  • Cs = σ / A (4)
  • where the average value A and the standard deviation σ of the luminance values I(m, n) are given by A = (1/N) Σ I(m, n) and σ = √( (1/N) Σ ( I(m, n) − A )² ), with N being the number of pixels 43 in the cell 42.
  • the summation symbol ⁇ represents the sum of the luminance values of all the pixels 43 in the cell 42 .
  • the method for calculating the speckle contrast Cs is not limited, and for example, instead of the standard deviation ⁇ , the variance ⁇ 2 of the brightness values I(m, n) may be used. Alternatively, the difference (Imax(m,n)-Imin(m,n)) between the maximum and minimum luminance values I(m,n) in the cell 42 may be used as the speckle contrast Cs.
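Equation (4) for a single cell can be sketched directly; numpy is assumed, and its population standard deviation matches the per-cell definition above:

```python
import numpy as np

def speckle_contrast(cell):
    """Cs = sigma / A: standard deviation over mean of the luminance
    values in one cell (equation (4))."""
    values = np.asarray(cell, dtype=float)
    return float(values.std() / values.mean())
```

The variance or max-minus-min variants mentioned above drop in by replacing `values.std()` with the corresponding expression.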
  • FIG. 13A shows an example of processing for calculating the speckle contrast Cs using 3 ⁇ 3 cells 42 .
  • the position of the upper left pixel 43 of the image 44 is assumed to be coordinates (0, 0).
  • the speckle calculator 370 first sets the cell 42a including the upper left pixel 43. In this case, a cell 42a centered on the pixel 43 at coordinates (1, 1) is set (step 1A).
  • the speckle calculator 370 calculates the speckle contrast Cs (1, 1) in the cell 42a centered at the coordinates (1, 1). That is, Cs(1, 1) is calculated from the brightness values of the center pixel 43 and eight pixels 43 around it. The calculated speckle contrast Cs(1,1) is recorded as the speckle contrast Cs corresponding to the pixel 43 at coordinates (1,1) (step 1B).
  • next, the speckle calculator 370 sets a cell 42b centered at coordinates (2, 1), shifted one pixel to the right from coordinates (1, 1) (step 2A).
  • the speckle calculator 370 calculates the speckle contrast Cs (2, 1) in the cell 42b and records it as the speckle contrast Cs of the pixel 43 at the coordinates (2, 1) (step 2B).
  • the center of the cell 42 is moved pixel by pixel, and the process of calculating the speckle contrast Cs of the pixel 43 at the center of the cell 42 is executed. Thereby, the speckle contrast Cs corresponding to each pixel 43 included in the pixel signal is sequentially calculated.
  • the method of calculating the speckle contrast Cs using the cell 42 is not limited.
  • the calculated speckle contrast Cs may be assigned to other pixels 43 within the cell 42 that are different from the central pixel 43 .
  • the amount, direction, order, and the like of moving the cells 42 are not limited, and may be changed as appropriate according to, for example, the processing time required for image processing.
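The sliding-cell procedure described in steps 1A through 2B can be sketched in a few lines of Python. This is an illustrative sketch, not code from the patent: the image is a plain list of rows of luminance values, Cs = σ/A follows equation (4), and border pixels whose cell would fall outside the image are left unassigned.

```python
import math

def speckle_contrast_map(image, cell=3):
    """Slide a cell x cell window over the image and record Cs = sigma / A
    for the pixel at the window's center (border pixels are skipped)."""
    h, w = len(image), len(image[0])
    r = cell // 2
    cs = [[None] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            # luminance values of the center pixel and its neighbors in the cell
            vals = [image[y + dy][x + dx]
                    for dy in range(-r, r + 1)
                    for dx in range(-r, r + 1)]
            a = sum(vals) / len(vals)  # average value A
            sigma = math.sqrt(sum((v - a) ** 2 for v in vals) / len(vals))
            cs[y][x] = sigma / a if a else 0.0  # speckle contrast Cs
    return cs

# a uniform image has zero variance, hence Cs = 0 wherever it is defined
flat = [[100] * 4 for _ in range(4)]
print(speckle_contrast_map(flat)[1][1])  # -> 0.0
```

Replacing the σ/A line with variance/A or with (max − min) over `vals` gives the variants mentioned above; assigning `cs[y][x]` to a different pixel of the cell covers the non-center assignment noted as well.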
  • FIG. 13A schematically shows the overall image of the process of calculating the speckle contrast Cs.
  • the diagram on the left side of FIG. 13A is a schematic diagram of the global image generated by the first image generation unit 366.
  • the speckle calculator 370 starts the process of calculating the speckle contrast Cs from the upper left of the global image 50 .
  • the original image for calculating the speckle contrast Cs, that is, the global image 50 is hereinafter referred to as the speckle image 50 .
  • the speckle calculator 370 generates a speckle contrast image 60 as an observation image based on the calculated speckle contrast Cs.
  • the diagram on the right side of FIG. 13B is a schematic diagram of the speckle contrast image 60 .
  • the speckle contrast image 60 is generated by converting the value of the speckle contrast Cs into a luminance value.
  • a pixel with a high speckle contrast Cs value is set to a bright luminance value, and a pixel with a low Cs value is set to a dark luminance value.
  • a method or the like for converting the speckle contrast Cs into a luminance value is not limited, and any method may be used.
  • for example, luminance values in which the brightness is inverted with respect to the level of the speckle contrast Cs may be set.
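As one concrete way of turning a Cs map into an observation image, the sketch below linearly rescales Cs values to 8-bit luminance, with an `invert` flag for the inverted mapping just mentioned. The function name and the min–max normalization are illustrative assumptions, not taken from the patent.

```python
def cs_to_luminance(cs_map, invert=False):
    """Linearly rescale speckle contrast values to 8-bit luminance:
    high Cs -> bright pixel, low Cs -> dark pixel (reversed when invert=True).
    Cells holding None (no Cs computed, e.g. image borders) render as 0."""
    vals = [v for row in cs_map for v in row if v is not None]
    lo, hi = min(vals), max(vals)
    span = (hi - lo) or 1.0  # avoid division by zero on a flat map
    out = []
    for row in cs_map:
        converted = []
        for v in row:
            if v is None:
                converted.append(0)
            else:
                t = (v - lo) / span
                converted.append(round(255 * (1.0 - t if invert else t)))
        out.append(converted)
    return out

print(cs_to_luminance([[0.0, 0.5, 1.0]]))  # -> [[0, 128, 255]]
```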
  • FIG. 14 is a diagram for explaining the characteristics of speckle patterns.
  • the image shown in the upper right of FIG. 14 is an image (speckle image 50a) captured by irradiating the observation target in a stationary state with the laser beam 11.
  • the image shown in the upper left is an image (speckle image 50b) captured by irradiating the observation target in a moving state with the laser beam 11.
  • the phase of laser light 11 (reflected light) reflected by the observation target changes randomly.
  • the laser beams 11 with random phases interfere with each other to form a bright and dark speckle pattern.
  • when the observation target is stationary, the positions where interference occurs are stable, so a clear speckle pattern is formed, as shown in the speckle image 50a on the right side.
  • when the laser beam 11 irradiates a moving object such as blood flow, the positions where interference occurs change, the light-dark pattern of the speckle pattern fluctuates, and the contrast between light and dark is reduced (left speckle image 50b).
  • the degree to which the contrast between light and dark decreases is, for example, a value corresponding to the amount of movement of the observation target within the exposure time of the camera 20. In other words, the decrease in contrast between light and dark is an index reflecting the speed of the observation target.
  • graphs showing the luminance distributions of the speckle images 50a and 50b in the stationary and moving states are shown in the lower part of FIG. 14.
  • the horizontal axis of the graph is the luminance value
  • the vertical axis is the number of pixels (distribution) of each luminance value.
  • the luminance distributions in the speckle images 50a and 50b in the static and moving states are illustrated by dotted and solid lines, respectively.
  • the speckle image 50a in the stationary state is an image having a wide brightness difference between bright pixels and dark pixels and a large light-dark contrast.
  • the speckle image 50b in the moving state has a narrow luminance difference between bright pixels and dark pixels and a small light-dark contrast.
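The narrowing of the luminance distribution translates directly into a lower contrast value. The toy example below, using hypothetical luminance samples (not data from the patent), computes σ/mean over a whole patch and confirms that the wide-spread (stationary) case yields the larger value.

```python
import math

def global_contrast(pixels):
    """Contrast over a whole patch: standard deviation / mean of luminance."""
    a = sum(pixels) / len(pixels)
    sigma = math.sqrt(sum((v - a) ** 2 for v in pixels) / len(pixels))
    return sigma / a

# hypothetical luminance samples: wide spread (stationary target) versus
# a distribution blurred toward the mean (moving target)
stationary = [10, 240, 20, 230, 15, 245, 25, 235]
moving = [110, 140, 120, 135, 115, 145, 125, 130]

print(global_contrast(stationary) > global_contrast(moving))  # -> True
```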
  • FIG. 15 is a diagram showing a configuration example of a laser irradiation unit 10a of a comparative example.
  • in the comparative example, a diffusion medium 102a having striped regions of different light transmittance is used. Since the diffusion medium 102a merely varies the transmittance of the laser light, the dark portions of the surface of the observation site 2 are also irradiated with laser light. As a result, the aforementioned b value becomes dependent on the distance to the object plane, so the b value must be derived every time the alignment of the optical system changes.
  • FIG. 16 is a diagram showing a G (global) image of a comparative example captured by the imaging system of FIG. 15 and a G (global) image according to the present disclosure.
  • the average speckle contrast Cs over the finger region is 14 in the comparative G (global) image and 10 in the G (global) image according to the present disclosure.
  • the lower the value of the speckle contrast Cs, the more motion information the G (global) image contains.
  • this indicates that using the diaphragm 102 of the present disclosure yields more information on the blood flow component than using the diffusion medium 102a of the comparative example.
  • FIG. 17 is a flowchart showing a processing example of the biological observation system 100.
  • as shown in FIG. 17, first, under the control of the irradiation control unit 31 and the camera control unit 33, irradiation by the laser irradiation unit 10 and imaging by the camera 20 are started, and the image acquisition unit 32 acquires image data (step S100).
  • the profile generator 362 generates a profile that traverses the bright and dark areas,
  • the pixel range generator 364 generates the pixel value range of the G area (step S102). Note that the generation of the pixel value range may be performed once at the beginning and omitted in subsequent loops.
  • the first image generation unit 366 generates an element image of the G area from the image data based on the pixel value range, and stores it in the storage unit 360. The first image generation unit 366 also generates an element image of the D region by excluding from the image data the pixels below the pixel value range, and stores it in the storage unit 360 (step S104).
  • the irradiation control unit 31 determines whether or not the irradiation of the laser irradiation unit 10 has been completed up to a predetermined range (step S106). If it is determined that the predetermined range has not been completed (NO in step S106), the irradiation position is changed, and the processing from step S100 is repeated.
  • when the irradiation control unit 31 determines that the irradiation has finished up to the predetermined range (YES in step S106), it stops the laser irradiation. Subsequently, the first image generation unit 366 generates a G image using the element images of the G area stored in the storage unit 360, and the second image generation unit 368 generates a D image using the element images of the D area stored in the storage unit 360 (step S108).
  • the speckle calculator 370 computes the speckle contrast Cs of the generated G image and D image, generates speckle contrast images as observation images, and outputs them to the display (step S110). The display then shows the speckle contrast images of the G image and the D image (step S112), and the process ends.
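The loop of steps S100 through S112 can be summarized as a control-flow sketch. All of the callables passed in are hypothetical stand-ins for the units described above (image acquisition, profile generation, G/D splitting, stitching, contrast imaging, display); the sketch only fixes the order and repetition of the steps, not any real implementation.

```python
def run_observation(acquire, make_profile, make_range, split_g_d,
                    done, stitch, contrast_image, show):
    """Control-flow sketch of steps S100-S112: acquire image data, build a
    profile across the bright and dark areas, split each frame into G/D
    element images, repeat until the irradiation range is covered, then
    stitch the element images and display their contrast images."""
    g_parts, d_parts = [], []
    pixel_range = None
    while True:
        frame = acquire()                      # S100: irradiate and capture
        profile = make_profile(frame)          # S102: profile across bright/dark
        if pixel_range is None:                # the range may be generated once
            pixel_range = make_range(profile)
        g, d = split_g_d(frame, pixel_range)   # S104: G/D element images
        g_parts.append(g)
        d_parts.append(d)
        if done():                             # S106: full range irradiated?
            break                              # otherwise shift and repeat
    g_image, d_image = stitch(g_parts), stitch(d_parts)       # S108
    show(contrast_image(g_image), contrast_image(d_image))    # S110/S112
    return g_image, d_image
```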
  • as described above, in the living body observation system 100, the laser irradiation unit 10 emits coherent light to the observation site 2 of the living body through the diaphragm 102 having the light shielding portion 102a that blocks light and the opening 102b that transmits light, and the first image generation unit 366 generates a first image based on the pixel signals of the dark region not irradiated with the coherent light among the pixel signals covering the region of the observation site 2 irradiated with the coherent light.
  • with this configuration, pixel signals based on the light scattered by layer B, the lower layer of the tissue of the observation site 2, can be obtained while the reflected light from layer A, the surface layer of the observation site 2, is suppressed. Therefore, an observation image of the underlying layer of the tissue of the observation site 2 can be generated with lower noise.
  • the living body observation system 100 according to the first embodiment has a fixed diaphragm 102, whereas the living body observation system 100 according to the second embodiment has a variable diaphragm 1020.
  • differences from the biological observation system 100 according to the first embodiment are described below.
  • FIG. 18 is a side sectional view of the diaphragm 1020 according to the second embodiment.
  • the diaphragm 1020 has a plurality of movable diaphragms 1020a and 1020b.
  • the positions of the plurality of movable diaphragms 1020a and 1020b are configured to be changeable, and the interval D1020 between the apertures and the width D1040 of the diaphragm can be changed.
  • FIG. 19 is a block diagram of the irradiation control unit 31 according to the second embodiment. As shown in FIG. 19, it differs from the irradiation control unit 31 according to the first embodiment in that it further includes an aperture control unit 314.
  • the aperture control unit 314 performs position control of the plurality of movable apertures 1020 a and 1020 b of the aperture 1020 based on the pixel value profile generated by the profile generation unit 362 . In addition, the aperture control unit 314 can also perform control for parallel movement of the movable apertures 1020a and 1020b.
  • FIG. 20 is a diagram for explaining a control example of the aperture control unit 314.
  • the diagram on the left is a schematic diagram showing the distribution of the laser light 11 on the exit-side surface of the diaphragm 1020.
  • the vertical axis indicates brightness, and the horizontal axis corresponds to the position of a line that crosses aperture 1020 .
  • the right figure is a profile on the observation site 2 corresponding to the opening and the light shielding part in the left figure.
  • the vertical axis indicates pixel values, and the horizontal axis indicates positions.
  • the diaphragm control unit 314 controls the aperture interval D1020 and the diaphragm width D1040 based on the maximum pixel value L2 of the bright portion and the minimum pixel value L1 of the dark portion of the profile.
  • FIG. 21 is another diagram illustrating a control example of the aperture control unit 314.
  • the diagram on the left is a schematic diagram showing the distribution of the laser light 11 on the exit-side surface of the diaphragm 1020, centered on the aperture.
  • the vertical axis indicates brightness, and the horizontal axis corresponds to the position of a line that crosses aperture 1020 .
  • the right figure is a profile on the observation site 2 corresponding to the opening and the light shielding part in the left figure.
  • the vertical axis indicates pixel values, and the horizontal axis indicates positions.
  • the diaphragm control unit 314 controls the aperture interval D1020 and the diaphragm width D1040 based on the maximum pixel value L2 of the bright portion and the minimum pixel value L1 of the dark portion of the profile.
  • FIG. 22 is a diagram explaining still another control example of the aperture control unit 314.
  • part (a) shows a profile on the observation site 2 before control, part (b) shows a profile on the observation site 2 after control, and part (c) shows a profile on the observation site 2 when the pitch of the diaphragm 1020 is changed.
  • in each part, the vertical axis indicates pixel values, and the horizontal axis indicates positions.
  • FIG. 23 is a flowchart showing a processing example of the aperture control unit 314. A processing example is described with reference to FIG. 23.
  • the laser irradiation unit 10 irradiates the laser through the diaphragm 1020, and the image acquisition unit 32 acquires the image data captured by the camera 20 (step S200).
  • the profile generator 362 generates a profile that crosses the bright and dark areas (step S202).
  • the aperture control unit 314 calculates the geometrical magnification between the laser irradiation unit 10 and the observation site 2 from the relationship between the widths D1020 and D1040 and the widths of the bright and dark portions on the profile (step S204).
  • the aperture control unit 314 sets the width L_dark of the dark portion to at most twice the range L_diff, and adjusts the width D1040 of the light shielding portion based on the magnification calculated in step S204 (step S206). Subsequently, the aperture control unit 314 sets the width L_ill of the bright portion to the same value as the width L_dark, and adjusts the width D1020 of the aperture based on the magnification calculated in step S204 (step S206).
  • finally, the diaphragm control unit 314 sets the movement step width L_ill_step of the diaphragm 1020 to less than the range L_diff, converts it into a movement step of the diaphragm 1020 based on the magnification, and ends the setting process.
  • as described above, the diaphragm 1020 is composed of a plurality of movable diaphragms 1020a and 1020b, and the diaphragm controller 314 controls the movable diaphragms 1020a and 1020b based on the profile crossing the bright and dark areas. This makes it possible to set the pixel values and the ratio of the bright and dark portions in the image data captured by the camera 20 to predetermined values.
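The sizing rules of FIG. 23 (steps S204 onward) can be captured numerically. The sketch below is an assumption-laden reading of the text, not the patent's algorithm: the magnification is taken as the ratio of the on-profile widths to the diaphragm widths, L_dark is capped at 2 × L_diff, L_ill is set equal to L_dark, and the movement step is kept below L_diff (here, half of it) before converting back to diaphragm coordinates.

```python
def aperture_settings(d1020, d1040, bright_on_profile, dark_on_profile, l_diff):
    """Sketch of the sizing rules: derive the geometric magnification from
    the known diaphragm widths and their measured widths on the profile,
    then convert the target-side widths back into diaphragm dimensions."""
    # S204: geometric magnification between the diaphragm plane and the
    # observation site, estimated from one bright/dark period
    magnification = (bright_on_profile + dark_on_profile) / (d1020 + d1040)
    # S206: width of the dark portion on the target, at most 2 * L_diff
    l_dark = min(2.0 * l_diff, dark_on_profile)
    new_d1040 = l_dark / magnification  # light-shielding width at the diaphragm
    l_ill = l_dark                      # bright width set equal to dark width
    new_d1020 = l_ill / magnification   # opening width at the diaphragm
    # movement step of the diaphragm: strictly below L_diff on the target
    step = 0.5 * l_diff / magnification
    return new_d1020, new_d1040, step
```

For example, with diaphragm widths of 1.0 that appear as widths of 2.0 on the profile, the magnification comes out as 2, and all target-side widths are halved when mapped back to the diaphragm.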
  • This technology can be configured as follows.
  • a living body observation system comprising: an irradiation unit that irradiates coherent light onto an observation site of a living body through a diaphragm having a light shielding portion that blocks light and an opening that transmits light; a pixel signal acquisition unit that acquires pixel signals including a region of the observation site irradiated with the coherent light; and an image generating unit that generates a first image based on pixel signals in a dark area not irradiated with the coherent light in the pixel signals.
  • the living body observation system described above, wherein the pixel signals in the dark region are based on light that is part of the coherent light directly applied to the living body and is scattered by blood flow under the tissue of the observation site.
  • (4) further comprising a pixel range setting unit that sets a predetermined pixel value range based on a profile, which is a series of pixel values with respect to position change on the image based on the pixel signals;
  • the living body observation system according to (4), further comprising a second image generation unit that generates a second image based on pixel signals of a bright region in the image directly irradiated with the coherent light.
  • the speckle data includes speckle contrast
  • the pixel signal acquisition unit acquires a plurality of images in each of which the region of the observation site irradiated with the coherent light is changed;
  • the biological observation system according to (8) further comprising an irradiation control unit that controls the irradiation intensity of the coherent light based on the speckle contrast value in the observation image.
  • the aperture is a variable aperture, and the system further comprises an aperture control unit that sets the width of the openings and the width between the openings based on pixel values in the dark region and the bright region in the image.
  • the living body observation system described above, wherein the aperture control unit sets the width between the openings so that a ratio of predetermined pixel values in the dark area and the bright area falls within a predetermined range.
  • an irradiation device comprising: a light source that generates coherent light; and a movable diaphragm having a light shielding portion that shields the coherent light and an opening that transmits the light, wherein at least one of the width of the light shielding portion and the width of the opening is controlled based on pixel signals of a dark region, not irradiated with the coherent light, of a living body irradiated with the coherent light.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Pathology (AREA)
  • Engineering & Computer Science (AREA)
  • Veterinary Medicine (AREA)
  • Hematology (AREA)
  • Cardiology (AREA)
  • Physiology (AREA)
  • Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)
  • Measuring Pulse, Heart Rate, Blood Pressure Or Blood Flow (AREA)

Abstract

[Problem] To provide a biological observation system, a biological observation method, and an irradiation device with which information on deeper parts of a living body can be obtained. [Solution] According to the present disclosure, a biological observation system comprises: an irradiation unit that irradiates an observation site of a living body with coherent light through a diaphragm having a light-blocking portion that blocks light and an opening that transmits light; a pixel signal acquisition unit that acquires a pixel signal including a region of the observation site irradiated with the coherent light; and an image generation unit that generates a first image on the basis of a pixel signal of a dark region in the pixel signal, wherein the dark region is not irradiated with the coherent light.

Description

Living body observation system, living body observation method, and irradiation device

The present disclosure relates to a living body observation system, a living body observation method, and an irradiation device.
Conventionally, techniques have been developed for observing biological tissue and the like by irradiating them with laser light and detecting the resulting speckle pattern. Observation of living tissue based on such speckle patterns is expected to find application in various scenes such as surgical operations and medical diagnosis, and techniques capable of high accuracy are required.
Japanese Patent Application Publication No. 2016-509509
However, when a biological tissue or the like is irradiated with laser light, the reflected light from the surface of the living body becomes strong, which may make observation of the underlying tissue difficult.
Therefore, the present disclosure provides a living body observation system, a living body observation method, and an irradiation device capable of acquiring information on deeper parts of a living body.
In order to solve the above problems, according to the present disclosure, there is provided a living body observation system comprising: an irradiation unit that irradiates an observation site of a living body with coherent light through a diaphragm having a light shielding portion that blocks light and an opening that transmits light; a pixel signal acquisition unit that acquires pixel signals including a region of the observation site irradiated with the coherent light; and an image generating unit that generates a first image based on pixel signals of a dark area not irradiated with the coherent light in the pixel signals.
The light shielding portion may be made of a metal that does not transmit light.
The pixel signals of the dark region may be based on light that is part of the coherent light directly applied to the living body and is scattered by blood flow under the tissue of the observation site.
A pixel range setting unit may further be provided that sets a predetermined pixel value range based on a profile, which is a series of pixel values with respect to position change on the image based on the pixel signals, and the image generation unit may generate the first image based on the predetermined pixel value range.
An image generation unit that generates a second image based on pixel signals of the bright area irradiated with the coherent light may further be provided.
A speckle calculator that calculates speckle data based on at least one of the first image and the second image may further be provided.
The speckle data may include speckle contrast, and the speckle calculator may generate an observation image based on the speckle contrast.
The pixel signal acquisition unit may acquire a plurality of pixel signals in each of which the region of the observation site irradiated with the coherent light is changed, and the first image generator may generate the first image based on the plurality of pixel signals.
An irradiation control unit may further be provided that controls the irradiation intensity of the coherent light based on the speckle contrast value in the observation image.
The irradiation control unit may control the irradiation intensity of the coherent light based on pixel values of at least one of the dark region and the bright region in the pixel signals.
The aperture may be a variable aperture, and an aperture control unit may further be provided that sets the width of the openings and the width between the openings based on pixel values in the dark region and the bright region in the pixel signals.
The aperture control unit may set the width between the openings so that a ratio of predetermined pixel values in the dark area and the bright area falls within a predetermined range.
A blood flow meter device using the living body observation system may also be provided.
A microscope device using the living body observation system may also be provided.
An endoscope device using the living body observation system may also be provided.
In order to solve the above problems, according to the present disclosure, there is provided a living body observation method comprising: an irradiation step of irradiating an observation site of a living body with coherent light through a diaphragm having a light shielding portion that blocks light and an opening that transmits light; a pixel signal acquisition step of acquiring pixel signals including a region of the observation site irradiated with the coherent light; and a first image generation step of generating a first image based on pixel signals of a dark area not irradiated with the coherent light in the pixel signals.
In order to solve the above problems, according to the present disclosure, there is provided an irradiation device comprising: a light source that generates coherent light; and a movable diaphragm having a light shielding portion that shields the coherent light and an opening that transmits the light, wherein at least one of the width of the light shielding portion and the width of the opening is controlled based on pixel signals of a dark region not directly irradiated with the coherent light in an image including the region of the observation site of the living body irradiated with the coherent light.
[Brief description of drawings]
A block diagram showing a configuration example of an observation system according to an embodiment of the present technology.
A diagram showing a configuration example of the laser irradiation unit.
A top view of the diaphragm.
A schematic diagram showing the distribution of laser light on the exit-side surface of the diaphragm.
A block diagram showing the configuration of the irradiation control unit.
A diagram showing the relationship between the speckle contrast and the average luminance value of the pixel signals.
A block diagram showing the configuration of the arithmetic processing unit.
A diagram showing an example of the pixel value profile generated by the profile generator.
A diagram schematically showing the laser light in the direct region and the reflected and scattered light.
A diagram schematically showing the laser light in the global region and the reflected and scattered light.
A diagram schematically showing a processing example of the pixel value range generation unit.
A diagram showing the correspondence between the extracted pixel value range and the two-dimensional image acquired by the image acquisition unit.
A diagram showing an example of an image generated by the first image generation unit based on the extracted pixel value range.
A diagram showing an example of the D (direct) image generated by the second image generation unit.
A diagram schematically showing, by light and shade, the luminance values of the pixels 43 included in a 3 × 3 cell 42.
A schematic diagram for explaining a calculation example of speckle contrast within the effective area.
A diagram for explaining the characteristics of speckle patterns.
A diagram showing a configuration example of the laser irradiation unit of a comparative example.
A diagram showing a G (global) image of a comparative example and a G (global) image according to the present disclosure.
A flowchart showing a processing example of the observation system.
A side cross-sectional view of the diaphragm according to the second embodiment.
A block diagram of the irradiation control unit according to the second embodiment.
A diagram explaining a control example of the diaphragm control unit.
Another diagram explaining a control example of the diaphragm control unit.
A diagram explaining still another control example of the diaphragm control unit.
A flowchart showing a processing example of the diaphragm control unit.
(First embodiment)
[Biological observation system]
FIG. 1 is a block diagram showing a configuration example of a biological observation system according to an embodiment of the present technology. The biological observation system 100 is used, for example, for observation of the operative field in surgery, observation of the inside of a patient's body in medical diagnosis, and the like. More specifically, the biological observation system 100 is used in blood flow meter devices, microscope devices, endoscope devices, and the like. In addition, the present technology is applicable to observation of any living tissue.
This biological observation system 100 includes a laser irradiation unit 10, a camera 20, and a controller 30.
The laser irradiation unit 10 is arranged to face the observation site 2 of the patient and irradiates the observation site 2 with laser light 11, which is coherent light. FIG. 1 schematically shows the laser light 11 directed toward the patient's hand (observation site 2). The observation site 2 corresponds to a living tissue in this embodiment, and the laser irradiation unit 10 according to this embodiment corresponds to the irradiation device.
A configuration example of the laser irradiation unit 10 is described based on FIGS. 2A, 2B, and 3. FIG. 2A is a diagram showing a configuration example of the laser irradiation unit 10, and FIG. 2B is a top view of the diaphragm 102.
As shown in FIG. 2A, the laser irradiation unit 10 has a laser 90 and a diaphragm 102. The laser 90 emits highly coherent light through an irradiation optical system. As shown in FIG. 2B, the diaphragm 102 has a light shielding portion 102a and a slit-shaped opening 102b. The light shielding portion 102a is made of metal, for example, and does not transmit light, while the opening 102b is configured to transmit light.
FIG. 3 is a schematic diagram showing the distribution of the laser light 11 on the exit-side surface of the diaphragm 102. The vertical axis indicates brightness, and the horizontal axis corresponds to the position along a line crossing the diaphragm 102. The light intensity behind the light shielding portion 102a of the diaphragm 102 can be regarded as 0 except at the boundaries of the opening. As a result, as shown in FIG. 2A, a striped projected optical image of bright, dark, and bright portions is projected onto the surface of the observation site 2. No laser light is projected onto the dark portions, so the projected light amount there is 0. In this embodiment, a single opening is sometimes used in the description for simplicity, but the configuration is not limited to this.
 Based on the pixel value Ld derived from the pixel signals of the bright region (see FIG. 2A) and the pixel value Lg derived from the pixel signals of the dark region (see FIG. 2A), D (the direct image) and G (the global image) are generally defined by equations (1) and (2).
In a comparative example (described later with reference to FIG. 15), the dark areas are also irradiated with laser light of lower intensity than the laser light irradiated onto the bright areas. Here, b = (brightness of the dark area)/(brightness of the bright area).
 D = (Ld − b·Lg)/(1 − b²)   (1)
 G = (Lg − b·Ld)/(1 − b²)   (2)
On the other hand, since the light-blocking portion 102a according to the present embodiment does not transmit light, the amount of light projected onto the dark region can always be kept at zero. Therefore, when the diaphragm 102 according to the present embodiment is used, b is always 0 even if the alignment of the irradiation optical system of the laser irradiation unit 10 changes, and the dependence on the parameter b in generating the D (direct) image and the G (global) image can be eliminated. Because the light-blocking portion 102a does not transmit light, a dark region onto which no light is projected can be produced on the surface of the observation site 2 regardless of the distance to that surface. As a result, equations (1) and (2) reduce to D = Ld and G = Lg.
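The separation above can be sketched in a few lines of Python. This is a minimal illustration, not the patent's implementation; it assumes the common leakage model Ld = D + b·G and Lg = b·D + G, under which b = 0 reduces to D = Ld and G = Lg, matching the statement above.

```python
import numpy as np

def separate_direct_global(Ld, Lg, b=0.0):
    """Recover direct (D) and global (G) components from bright-region
    pixel values Ld and dark-region pixel values Lg.

    b is the dark/bright illumination ratio. With a fully opaque
    light-blocking portion (this embodiment), b == 0 and the result
    is simply D = Ld, G = Lg.
    """
    Ld = np.asarray(Ld, dtype=float)
    Lg = np.asarray(Lg, dtype=float)
    if b == 0.0:                 # opaque diaphragm: no leakage term
        return Ld.copy(), Lg.copy()
    denom = 1.0 - b * b
    D = (Ld - b * Lg) / denom    # invert Ld = D + b*G, Lg = b*D + G
    G = (Lg - b * Ld) / denom
    return D, G
```

With b = 0.5 and a point whose true components are D = 8, G = 4, the measured values would be Ld = 10 and Lg = 8, and the function recovers (8, 4).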
 The camera 20 has a lens unit 21 and an imaging unit 22 connected to the lens unit 21. The camera 20 is arranged so that the lens unit 21 faces the observation site 2 of the patient 1, and images the observation site 2 irradiated with the laser light 11.
 The camera 20 captures the striped bright/dark/bright image on the surface of the observation site 2. The camera 20 is configured as, for example, a CHU (Camera Head Unit), and is connected to the controller 30 via a predetermined interface or the like. In this embodiment, the camera 20 corresponds to an imaging system.
 The lens unit 21 has an optical zoom function. By controlling imaging parameters such as the F-number (aperture value) and the optical magnification, the lens unit 21 generates an optically enlarged or reduced optical image of the observation site 2. The specific configuration for realizing the optical zoom function is not limited; for example, electronically controlled automatic zooming, manual zooming, or the like may be performed as appropriate.
 The imaging unit 22 captures the optical image generated by the lens unit 21 and generates pixel signals of the observation site 2. Here, a pixel signal is a signal from which an image can be constructed, and includes information such as the luminance value of each pixel. That is, the imaging element of the imaging unit 22 detects the light from each point of the observation site 2 (the subject) within the imaging range of the imaging unit 22 and converts it into pixel signals. Each pixel signal can be divided into a direct component and a global component.
 Focusing on a given position on the subject within the imaging range, the pixel signal detected because that point of interest was illuminated directly is called the direct component, while the pixel signal detected because the point of interest was illuminated via other points is called the global component.
 The type, format, and the like of the pixel signals are not limited; for example, any format from which a moving image or a still image can be constructed may be used. As the imaging unit 22, an image sensor such as a CMOS (Complementary Metal-Oxide-Semiconductor) sensor or a CCD (Charge-Coupled Device) sensor is used, for example.
 The controller 30 has the hardware necessary to configure a computer, such as a CPU (Central Processing Unit), a ROM (Read-Only Memory), a RAM (Random-Access Memory), and an HDD (Hard Disk Drive). In this embodiment, the controller 30 corresponds to a control device.
 Each functional block shown in FIG. 1 is realized by the CPU loading a program according to the present technology, stored in the ROM or the HDD, into the RAM and executing it, and these functional blocks execute the control method according to the present technology.
 The program is installed in the controller 30 via various recording media, for example. Alternatively, the program may be installed via the Internet or the like.
 The specific configuration of the controller 30 is not limited; for example, a device such as an FPGA (Field-Programmable Gate Array), an image-processing IC (Integrated Circuit), or another ASIC (Application-Specific Integrated Circuit) may be used.
 As shown in FIG. 1, the controller 30 has, as functional blocks, an irradiation control unit 31, an image acquisition unit 32, a camera control unit 33, a UI acquisition unit 34, a block control unit 35, and an arithmetic processing unit 36. A processing size table 38 is stored in a storage unit 37 constituted by the ROM of the controller 30 or the like. Dedicated hardware may be used as appropriate to implement each functional block.
 FIG. 4A is a block diagram showing the configuration of the irradiation control unit 31. As shown in FIG. 4A, the irradiation control unit 31 has a light source control unit 310 and a position control unit 312. The light source control unit 310 controls the irradiation intensity and the like of the laser light 11 emitted from the laser irradiation unit 10. For example, when a D (direct) image is being observed, the light source control unit 310 controls the light output of the laser 90 so that the pixel values of the bright region (see FIG. 2A) of the pixel signals generated by the camera 20 reach a predetermined value.
 On the other hand, when a G (global) image is being observed, the light output of the laser 90 is controlled so that the pixel values of the dark region (see FIG. 2A) of the pixel signals generated by the camera 20 reach a predetermined value. In this way, the light source control unit 310 can control the laser 90 so as to decrease the light output when the predetermined region determined by the observation purpose is too bright, and to increase it when that region is too dark.
 FIG. 4B is a diagram showing the relationship between the speckle contrast L40 and the average luminance value of the pixel signals generated by the camera 20. The vertical axis indicates the speckle contrast and the horizontal axis indicates the average luminance value. The speckle contrast L40 is calculated by the speckle calculation unit 370 described later. As shown in FIG. 4B, the value of the speckle contrast L40 varies with the average luminance value of the pixel signals. For this reason, the light source control unit 310 may control the irradiation intensity of the laser light 11 so that the average luminance value of the pixel signals falls within a predetermined range.
 The light source control unit 310 may also acquire information on the irradiation intensity of the laser light 11 specified by, for example, an operator of the biological observation system 100. In that case, the light source control unit 310 outputs an instruction to the laser irradiation unit 10 to output the laser light 11 at the specified irradiation intensity, which makes it possible to irradiate the laser light 11 at the intensity desired by the operator.
 The method for controlling the irradiation intensity of the laser light 11 is not limited. For example, the irradiation intensity of the laser light 11 may be controlled as appropriate in accordance with the exposure time of the camera 20 or the like. The light source control unit 310 may appropriately control not only the irradiation intensity of the laser light 11 but also arbitrary parameters such as the wavelength of the laser light 11 and the irradiation region.
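The brightness-feedback behavior described above (raising the laser output when the monitored region is too dark and lowering it when too bright) can be sketched as a proportional update. This is purely illustrative; the disclosure does not specify a control law, and the gain and clamping bounds below are invented for the example.

```python
def update_laser_power(power, region_pixels, target, gain=0.5,
                       p_min=0.0, p_max=1.0):
    """One step of a hypothetical proportional brightness control loop.

    power          current laser power (arbitrary units, 0..1)
    region_pixels  pixel values of the region chosen for the observation
                   purpose (bright region for D images, dark region for
                   G images)
    target         desired mean pixel value for that region
    """
    mean = sum(region_pixels) / len(region_pixels)
    # Raise power if the region is too dark, lower it if too bright.
    error = (target - mean) / target
    new_power = power * (1.0 + gain * error)
    return min(max(new_power, p_min), p_max)
```

Calling this once per frame with the latest pixel values drives the region's mean brightness toward the target while keeping the power within its limits.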
The position control unit 312 can control a driving unit (not shown) so as to move the laser irradiation unit 10 by a predetermined distance at predetermined time intervals.
The image acquisition unit 32 acquires the pixel signals generated by the camera 20. That is, the image acquisition unit 32 acquires the pixel signals of the observation site 2 imaged while being irradiated with the laser light 11. The pixel signals acquired by the image acquisition unit 32 are supplied to the arithmetic processing unit 36. In this embodiment, the image acquisition unit 32 corresponds to a pixel signal acquisition unit.
 The camera control unit 33 is connected to the camera 20 via an interface or the like and controls the operation of the camera 20. The camera control unit 33 outputs to the camera 20 signals specifying, for example, the zoom amount (optical magnification), the aperture, or the exposure time of the camera 20. The camera 20 images the observation site 2 based on the signals output from the camera control unit 33, which makes it possible to control the operation of the camera 20 electronically.
 The camera control unit 33 also acquires the imaging parameters used for imaging the observation site 2. The imaging parameters include the F-number (aperture value) and the optical magnification of the lens unit 21 (camera 20), and are output to the block control unit 35. In this embodiment, the imaging parameters correspond to imaging conditions. When the position control unit 312 changes the position of the laser irradiation unit 10, the camera control unit 33 captures a plurality of images of the observation site 2 in synchronization with the synchronization signal of the position control unit 312.
 The UI acquisition unit 34 acquires instructions and the like input by the operator via a user interface (UI, not shown). As the user interface, a display device such as a display and an input device such as a mouse or a keyboard are used as appropriate; the operator inputs instructions using the input device while viewing, for example, an operation screen shown on the display device. The type of user interface is not limited, and a display equipped with a touch sensor, a foot switch, a hand-held control switch, or the like may be used, for example.
 The block control unit 35 has a predicted speckle size calculation unit 40 and a processing size control unit 41. The predicted speckle size calculation unit 40 calculates the speckle size based on the imaging parameters input from the camera control unit 33.
The speckle size is the size of the individual spots that form the speckles. In general, the speckle size varies with the imaging system that captures the speckle pattern. For example, the speckle size d is given by the following formula.
d = F# × (1 + M) × λ × 1.22   (3)

Here, F# is the F-number of the lens unit 21, M is the optical magnification of the lens unit 21, and λ is the wavelength of the irradiated laser light 11. This formula is sometimes referred to below as the speckle size calculation formula.
 In this embodiment, the predicted speckle size calculation unit 40 calculates the speckle size d using the speckle size calculation formula, based on the F-number F# and the optical magnification M included in the imaging parameters. The predicted speckle size calculation unit 40 can therefore calculate the speckle size d of the speckle pattern being imaged. The calculated speckle size d is output to the processing size control unit 41.
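For concreteness, equation (3) can be evaluated directly. The numbers in the usage line are illustrative only (an F-number of 8, unit magnification, and a 633 nm wavelength are assumptions, not values from the embodiment):

```python
def speckle_size(f_number, magnification, wavelength):
    """Predicted speckle size d = F# * (1 + M) * lambda * 1.22 (eq. 3).

    Returns d in the same length unit as `wavelength`.
    """
    return f_number * (1.0 + magnification) * wavelength * 1.22

# e.g. F# = 8, M = 1, lambda = 633 nm
d_nm = speckle_size(8, 1.0, 633.0)   # d is returned in nm here
```

For these example values the predicted speckle size is roughly 12 µm, illustrating why the cell size must track the imaging parameters: stopping down the lens or zooming in enlarges the speckles that each cell must cover.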
 The processing size control unit 41 controls the size of a cell, which is a pixel block (the cell size). A cell is, for example, a rectangular block of m × n pixels, and is used when calculating the speckle contrast from the pixel signals. The number of pixels (horizontal × vertical), m × n, corresponds to the cell size. The shape and the like of the cell are not limited; a cell of any shape may be used, for example. Cells and the speckle contrast are described later.
 The processing size control unit 41 controls the cell size based on the speckle size d calculated by the predicted speckle size calculation unit 40, and also according to the image quality mode acquired by the UI acquisition unit 34. The cell size controlled by the processing size control unit 41 is therefore a size that depends on both the speckle size d and the image quality mode.
 In this embodiment, the processing size table 38 stored in the storage unit 37 is used when controlling the cell size. The processing size table 38 records the correspondence between the speckle size d, the image quality mode, and the cell size. For example, the processing size control unit 41 acquires from the processing size table 38 the cell size value corresponding to the calculated speckle size d and the specified image quality mode, which makes it easy to control the cell size. In this embodiment, the processing size table 38 corresponds to a control table.
 In this way, the block control unit 35 calculates the speckle size based on the imaging parameters and controls the cell size based on the calculated speckle size. That is, the block control unit 35 controls the cell size based on the imaging parameters used for imaging the observation site 2.
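A processing size table of this kind can be modeled as a simple lookup keyed on speckle size and quality mode. The thresholds and cell sizes below are placeholders invented for illustration; the disclosure does not give the table's contents:

```python
# Hypothetical table: list of (upper speckle-size bound in pixels,
# cell edge length) per image quality mode.
PROCESSING_SIZE_TABLE = {
    "standard": [(2.0, 3), (4.0, 5), (float("inf"), 7)],
    "high":     [(2.0, 5), (4.0, 7), (float("inf"), 9)],
}

def lookup_cell_size(speckle_size_px, quality_mode):
    """Return the cell edge length for a given speckle size and mode."""
    for upper_bound, cell in PROCESSING_SIZE_TABLE[quality_mode]:
        if speckle_size_px <= upper_bound:
            return cell
    raise ValueError("table must end with an open upper bound")
```

The point of the table is that larger speckles (and higher quality modes) map to larger cells, so each cell still spans enough independent speckles for a stable contrast estimate.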
 FIG. 5 is a block diagram showing the configuration of the arithmetic processing unit 36. As shown in FIG. 5, the arithmetic processing unit 36 calculates speckle data based on the pixel signals acquired by the image acquisition unit 32, using cells whose size is controlled by the processing size control unit 41 (block control unit 35). Specifically, the arithmetic processing unit 36 has a storage unit 360, a profile generation unit 362, a pixel value range generation unit 364, a first image generation unit 366, a second image generation unit 368, and a speckle calculation unit 370. The first image generation unit 366 and the second image generation unit 368 according to this embodiment correspond to an image generation unit.
 The storage unit 360 stores the pixel signals acquired by the image acquisition unit 32 as a two-dimensional image. The storage unit 360 may be configured within the storage unit 37.
 FIG. 6 is a diagram showing an example of the pixel value profile generated by the profile generation unit 362. The upper part of the figure is an example of a pixel value profile: the vertical axis indicates the pixel value, and the horizontal axis indicates the position on the image. The lower part shows a two-dimensional image, generated from the pixel signals acquired by the image acquisition unit 32, that contains bright portions and dark portions. As shown in the lower part of FIG. 6, the image of the dark portion corresponding to the light-blocking portion 102a also has nonzero pixel values. This arises because laser light directly irradiating the bright portion is reflected and scattered, exits from the dark portion, and is imaged. The image region corresponding to the bright portion (the opening), which is directly irradiated with the laser light 11, is referred to as the D region (direct region). On the other hand, an image region that has pixel values of a certain level within the range corresponding to the light-blocking portion is referred to as the G region (global region). That is, the G region (global region) is the region from which the laser light that entered at the D region (direct region) exits after being reflected and scattered.
 FIGS. 7A and 7B are schematic diagrams showing a cross section of skin tissue. For example, layer A is the stratum corneum and the epidermis, and layer B is the upper layer of the dermis, in which capillaries are present. FIG. 7A schematically shows the laser light 11 in the direct region and the reflected and scattered light EA; FIG. 7B schematically shows the laser light 11 in the global region and the reflected and scattered light EB.
 As shown in FIG. 7A, in the direct region the laser light 11 is also reflected from layer A, so the weak light reflected and scattered from layer B is buried in the light reflected and scattered from layer A. In the direct region, therefore, the reflected and scattered light EA from layer A is imaged as the main component. The reflected and scattered light EA corresponds, for example, to the part of the coherent light directly irradiated onto the observation site 2, which is a living body, that is scattered by the blood flow beneath the tissue of the observation site 2.
 As shown in FIG. 7B, the global region is a region that is not directly irradiated with the laser light 11; it is a region where light that entered at the direct region is reflected and scattered. In the global region, therefore, the light reflected and scattered from layer A is reduced, and the reflected and scattered light EB from layer B is imaged as the main component.
 FIG. 8 is a diagram schematically showing an example of processing by the pixel value range generation unit 364. The upper part is a schematic diagram showing the distribution of the laser light 11 on the exit-side surface of the diaphragm 102; the vertical axis indicates brightness, and the horizontal axis corresponds to positions along a line crossing the diaphragm 102. The lower part shows a partial region of the pixel value profile generated by the profile generation unit 362.
 The pixel value range generation unit 364 generates a representative value of the pixel values in the opening region, for example their maximum value or average value. The pixel value range generation unit 364 then generates a pixel value range for the pixel value Lg based on the representative value. For example, the range from 50% of the representative value down to 1/e² ≈ 13.5% of it is generated as the extraction pixel value range for the pixel value Lg in the global region.
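As a sketch of this step (assuming the 50% and 1/e² bounds given above, and picking the maximum as the representative value, which is one of the options mentioned):

```python
import math

def extraction_range(opening_pixels):
    """Return (low, high) bounds of the global-region extraction range.

    The representative value is taken here as the maximum pixel value
    of the opening (bright) region; the range spans from 1/e^2 (about
    13.5%) up to 50% of that representative value.
    """
    rep = max(opening_pixels)
    low = rep * math.exp(-2.0)   # 1/e^2 ≈ 0.135
    high = rep * 0.5
    return low, high
```

Pixels of the captured frame whose values fall between `low` and `high` are then treated as belonging to the global region.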
 FIG. 9 is a diagram showing the correspondence between the extraction pixel value range G9 set by the pixel value range generation unit 364 and the two-dimensional image acquired by the image acquisition unit 32. The upper part shows the pixel value profile together with the extraction pixel value range G9 set by the pixel value range generation unit 364. The lower part shows the two-dimensional image acquired by the image acquisition unit 32, and the line L9 indicates the position at which the pixel value profile was generated.
 FIG. 10 is a diagram showing examples of images generated by the first image generation unit 366 based on the extraction pixel value range G9 (FIG. 9). FIG. 10(a) shows an example of an image cut out based on the extraction pixel value range G9 (FIG. 9).
 As shown in FIG. 9 and FIG. 10(a), the first image generation unit 366 cuts out, from the two-dimensional image acquired by the image acquisition unit 32, an element image in the pixel value range of Lg corresponding to the extraction pixel value range G9 (FIG. 9). In FIG. 10(a) the image region on the right side of the direct region is extracted, but the image region on the left side of the direct region may also be extracted. The first image generation unit 366 stores the generated element images in the storage unit 360.
 FIG. 10(b) is a diagram showing an example of a G (global) image synthesized by the first image generation unit 366. As shown in FIG. 10(b), the first image generation unit 366 acquires the generated element images from the storage unit 360 and synthesizes the whole image as the G (global) image.
 FIG. 11 is a diagram showing an example of a D (direct) image generated by the second image generation unit 368. As shown in FIG. 11, the second image generation unit 368 synthesizes the D image from the two-dimensional image acquired by the image acquisition unit 32, excluding the image regions at or below the extraction pixel value range G9 set by the pixel value range generation unit 364.
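Taken together, the two generation steps amount to masking the captured frame against the extraction range. A rough numpy sketch, simplified to per-pixel thresholding against fixed bounds `low` and `high` (the actual units assemble element images from multiple captures):

```python
import numpy as np

def split_direct_global(frame, low, high):
    """Split one captured frame into direct and global element images.

    frame      2-D array of pixel values
    low, high  bounds of the extraction pixel value range (e.g. 1/e^2
               and 50% of the bright-region representative value)

    Pixels inside [low, high] go to the global element image; pixels
    above the range go to the direct image. Pixels below the range are
    discarded (zero in both outputs).
    """
    frame = np.asarray(frame, dtype=float)
    g_mask = (frame >= low) & (frame <= high)
    d_mask = frame > high
    return np.where(d_mask, frame, 0.0), np.where(g_mask, frame, 0.0)
```

Repeating this for each laser position and accumulating the nonzero regions yields full-frame D and G images like those of FIGS. 10(b) and 11.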
 Here, speckle data is data relating to the speckle pattern of the observation site 2. The speckle data is calculated by appropriately processing information such as the luminance value of each pixel included in the pixel signals.
 In this embodiment, the speckle calculation unit 370 calculates the speckle contrast as the speckle data. Not only the speckle contrast but also, for example, the average, variance, and standard deviation of the luminance values in the speckle pattern may be calculated as speckle data. The calculated speckle data can be output to the processing size control unit 41 and the processing size table 38, and is used for calibrating the processing size table 38 and the like.
 The speckle calculation unit 370 also generates an observation image of the observation site 2 based on the calculated speckle contrast. The generated observation image is output to a display device such as a display (not shown). In this embodiment, the speckle calculation unit 370 functions as a calculation unit and a generation unit.
 FIGS. 12 and 13 are schematic diagrams for explaining an example of calculating the speckle contrast. In FIG. 12, the luminance values of the pixels 43 included in a 3 × 3 cell 42 are schematically illustrated by their brightness.
As shown in FIG. 13, the speckle contrast Cs is given by the following formula, using the standard deviation σ and the average value A of the luminance values I(m, n) of the pixels 43 included in the cell 42.
Cs = σ/A   (4)
The standard deviation σ and the average value A of the luminance values I(m, n) are given by the following formulas.
A = Ave(I(m, n)) = Σ[I(m, n)]/N   (5)
σ = Stdev(I(m, n)) = Sqrt((Σ[I(m, n) − A]^2)/N)   (6)

Here, the summation symbol Σ represents the sum over the luminance values of all the pixels 43 in the cell 42, and N is the total number of pixels 43 included in the cell 42 (N = 3 × 3 = 9 in FIG. 12). The method for calculating the speckle contrast Cs is not limited; for example, the variance σ^2 of the luminance values I(m, n) may be used instead of the standard deviation σ. The difference between the maximum and minimum luminance values in the cell 42, Imax(m, n) − Imin(m, n), may also be used as the speckle contrast Cs.
 FIG. 13A shows an example of the process of calculating the speckle contrast Cs using a 3 × 3 cell 42. For example, as shown in FIG. 13, let the position of the upper-left pixel 43 of the image 44 be the coordinates (0, 0). The speckle calculation unit 370 first sets a cell 42a containing the upper-left pixel 43; in this case, the cell 42a centered on the pixel 43 at the coordinates (1, 1) is set (step 1A).
 The speckle calculation unit 370 calculates the speckle contrast Cs(1, 1) of the cell 42a centered at the coordinates (1, 1). That is, Cs(1, 1) is calculated from the luminance values of the central pixel 43 and the eight pixels 43 around it. The calculated speckle contrast Cs(1, 1) is recorded as the speckle contrast Cs corresponding to the pixel 43 at the coordinates (1, 1) (step 1B).
 Next, the speckle calculation unit 370 sets a cell 42b centered at the coordinates (2, 1), shifted one pixel to the right from the coordinates (1, 1) (step 2A). The speckle calculation unit 370 calculates the speckle contrast Cs(2, 1) of the cell 42b and records it as the speckle contrast Cs of the pixel 43 at the coordinates (2, 1) (step 2B).
In this way, the center of the cell 42 is moved one pixel at a time, and the speckle contrast Cs of the pixel 43 at the center of the cell 42 is calculated at each position. The speckle contrast Cs corresponding to each pixel 43 included in the pixel signal is thereby calculated in sequence.
Note that the method of calculating the speckle contrast Cs using the cell 42 is not limited to this. For example, the calculated speckle contrast Cs may be assigned to a pixel 43 in the cell 42 other than the center pixel 43. The amount, direction, and order of moving the cell 42 are also not limited, and may be changed as appropriate according to, for example, the processing time required for image processing.
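The raster scan of steps 1A/1B, 2A/2B, and so on can be sketched as follows. This is a pure-Python illustration; a production implementation would typically vectorize the window operation:

```python
import math

def speckle_contrast_map(img, k=3):
    """Move a k x k cell one pixel at a time and record Cs = sigma / A at the
    cell's centre pixel; border pixels that cannot be a centre stay None."""
    h, w = len(img), len(img[0])
    r = k // 2
    cs_map = [[None] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            vals = [img[y + dy][x + dx]
                    for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            a = sum(vals) / len(vals)
            sigma = math.sqrt(sum((v - a) ** 2 for v in vals) / len(vals))
            cs_map[y][x] = sigma / a if a else 0.0
    return cs_map
```

The first recorded value lands at (1, 1), matching step 1B, and the scan proceeds one pixel at a time across all valid centre positions.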
FIG. 13A also schematically shows the overall flow of the process of calculating the speckle contrast Cs. The diagram on the left side of FIG. 13A is a schematic diagram of the global image generated by the second image generator. The speckle calculator 370 starts calculating the speckle contrast Cs from the upper left of the global image 50. Hereinafter, the source image used to calculate the speckle contrast Cs, that is, the global image 50, is referred to as the speckle image 50.
The speckle calculator 370 generates a speckle contrast image 60, which serves as the observation image, based on the calculated speckle contrast Cs. The diagram on the right side of FIG. 13B is a schematic diagram of the speckle contrast image 60.
The speckle contrast image 60 is generated by converting the values of the speckle contrast Cs into luminance values. For example, a bright luminance value is set for a pixel with a high speckle contrast Cs, and a dark luminance value is set for a pixel with a low Cs. The method of converting the speckle contrast Cs into a luminance value is not limited, and any method may be used. For example, luminance values in which light and dark are inverted with respect to the level of the speckle contrast Cs may be set.
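A minimal sketch of one such conversion, linearly mapping Cs to an 8-bit luminance with an optional light/dark inversion. The linear scaling is an assumed choice; the text leaves the mapping open:

```python
def cs_to_luminance(cs_values, invert=False):
    """Linearly map speckle-contrast values to 0..255; invert=True assigns
    bright values to low Cs instead (the inverted mapping mentioned above)."""
    lo, hi = min(cs_values), max(cs_values)
    span = (hi - lo) or 1.0                 # avoid division by zero for flat input
    out = []
    for c in cs_values:
        v = round(255 * (c - lo) / span)
        out.append(255 - v if invert else v)
    return out
```

With `invert=True`, low-Cs (moving, e.g. blood flow) pixels appear bright, which is the inverted presentation the paragraph mentions as an alternative.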
FIG. 14 is a diagram for explaining the characteristics of speckle patterns. The image shown at the upper right of FIG. 14 is an image (speckle image 50a) captured by irradiating a stationary observation target with the laser light 11. The image shown at the upper left is an image (speckle image 50b) captured by irradiating a moving observation target with the laser light 11.
In general, when an observation target is irradiated with highly coherent light such as the laser light 11, the phase of the laser light 11 reflected by the observation target (the reflected light) changes randomly. The randomly phased laser light 11 interferes with itself, forming a speckle pattern of bright and dark spots. For example, when the observation target is stationary, the positions at which interference occurs are stable, so a clear speckle pattern is formed, as shown in the speckle image 50a on the right.
On the other hand, when the laser light 11 irradiates a moving target such as blood flow, the positions at which interference occurs change, the bright-dark speckle pattern fluctuates, and as a result of integration over the exposure time, the bright-dark contrast decreases (speckle image 50b on the left). The degree to which the bright-dark contrast decreases corresponds, for example, to the amount of movement of the target within the exposure time of the camera 20. That is, the decrease in bright-dark contrast serves as an index reflecting speed.
The lower part of FIG. 14 shows graphs of the luminance distributions of the speckle images 50a and 50b in the stationary and moving states. The horizontal axis of each graph is the luminance value, and the vertical axis is the number of pixels at each luminance value (the distribution). The luminance distributions of the speckle images 50a and 50b in the stationary and moving states are drawn as dotted and solid lines, respectively.
As the graphs show, when the observation target is stationary, the luminance distribution is wider than when it is moving. That is, the speckle image 50a in the stationary state has a wide luminance difference between bright and dark pixels and a large bright-dark contrast. In contrast, the speckle image 50b in the moving state has a narrow luminance difference between bright and dark pixels and a small bright-dark contrast.
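This relationship can be illustrated with synthetic data: a wide luminance distribution (stationary target) has a larger spread than a narrow one (moving target). The pixel values below are invented for illustration only and do not come from the figures:

```python
import math
import random

def luminance_spread(pixels):
    """Standard deviation of a luminance histogram, used as a width measure."""
    a = sum(pixels) / len(pixels)
    return math.sqrt(sum((p - a) ** 2 for p in pixels) / len(pixels))

random.seed(0)
stationary = [random.randint(0, 255) for _ in range(1000)]    # wide distribution
moving = [random.randint(100, 155) for _ in range(1000)]      # narrow distribution
```

The spread of the stationary distribution exceeds that of the moving one, mirroring the dotted versus solid curves in FIG. 14.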
FIG. 15 is a diagram showing a configuration example of a laser irradiation unit 10a of a comparative example. As shown in FIG. 15, in the laser irradiation unit 10a of the comparative example, a diffusion medium 102a whose light transmittance varies in stripes, for example, is provided in place of the diaphragm 102. Because the diffusion medium 102a merely varies the transmittance of the laser light, the dark portions of the surface of the subject 2 are also irradiated with laser light. As a result, the b value described above becomes dependent on the distance to the subject plane, so the b value must be re-derived every time the alignment of the optical system changes.
FIG. 16 is a diagram showing a G (global) image of the comparative example captured with the imaging system of FIG. 15 and a G (global) image according to the present disclosure. The average speckle contrast Cs over the finger in the comparative G image is 14, while the average over the finger in the G image according to the present disclosure is 10. As described above, a lower speckle contrast Cs indicates that more of the G image reflects the moving state. This shows that more blood-flow information can be acquired with the diaphragm 102 of the present disclosure than with the diffusion medium 102a of the comparative example.
FIG. 17 is a flowchart showing a processing example of the biological observation system 100. As shown in FIG. 17, under the control of the irradiation control unit 31 and the camera control unit 33, irradiation by the laser irradiation unit 10 and imaging by the camera 20 are first started, and the image acquisition unit 32 acquires image data (step S100).
Next, the profile generator 362 generates a profile crossing the bright portions, and the pixel range generator 364 generates the pixel value range of the G region (step S102). Note that the pixel value range may be generated once at the start and omitted in subsequent loops.
Next, the first image generation unit 366 generates an element image of the G region from the image data based on the pixel value range and stores it in the storage unit 360. The first image generation unit 366 also generates an element image of the D region from the image data, consisting of the pixels at or below the pixel value range, and stores it in the storage unit 360 (step S104).
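A sketch of the step-S104 split, assuming the G element image keeps pixels inside the bright-portion value range and the D element image keeps the darker pixels below it. The `None` placeholders for excluded pixels are an implementation choice of this sketch, not something the text prescribes:

```python
def split_g_d(frame, g_min, g_max):
    """Return (G element image, D element image) for one frame: G keeps pixels
    within [g_min, g_max]; D keeps pixels below g_min; others become None."""
    g = [[p if g_min <= p <= g_max else None for p in row] for row in frame]
    d = [[p if p < g_min else None for p in row] for row in frame]
    return g, d
```

Repeating this split for each irradiation position yields the sets of G and D element images that steps S106 and S108 later assemble into the G image and the D image.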
Next, the irradiation control unit 31 determines whether irradiation by the laser irradiation unit 10 has been completed over the predetermined range (step S106). If it determines that the predetermined range has not yet been covered (NO in step S106), the irradiation position is changed and the processing from step S100 is repeated.
On the other hand, if the irradiation control unit 31 determines that the predetermined range has been covered (YES in step S106), it stops the laser irradiation. The first image generation unit 366 then generates a G image using the element images of the G region stored in the storage unit 360, and the second image generation unit 368 generates a D image using the element images of the D region stored in the storage unit 360 (step S108).
Next, the speckle calculator 370 calculates the speckle contrast Cs of the generated G image and D image, generates speckle contrast images serving as observation images based on the speckle contrast Cs, and outputs them to the display (step S110). The display then displays the speckle contrast images of the G image and the D image (step S112), and the processing ends.
As described above, according to the present embodiment, the laser irradiation unit 10 irradiates the observation site 2 of the living body with coherent light through the diaphragm 102, which has the light shielding portion 102b that blocks light and the opening 102a that transmits light, and the first image generation unit 366 generates a first image based on the pixel signals of the dark region not irradiated with the coherent light, within the pixel signals that include the region of the observation site 2 irradiated with the coherent light. Pixel signals based on the light scattered by layer B, the lower layer of the tissue of the observation site 2, can thereby be acquired while the reflected light from layer A, the surface layer of the observation site 2, is suppressed. An observation image of the lower tissue layer of the observation site 2 can therefore be generated with lower noise.
(Second Embodiment)
The biological observation system 100 according to the second embodiment differs from that of the first embodiment in that its diaphragm 1020 is a variable diaphragm, whereas the diaphragm 102 of the first embodiment is a fixed diaphragm. The differences from the biological observation system 100 according to the first embodiment are described below.
FIG. 18 is a side sectional view of the diaphragm 1020 according to the second embodiment. As shown in FIG. 18, the diaphragm 1020 has a plurality of movable diaphragms 1020a and 1020b. The positions of the movable diaphragms 1020a and 1020b can be changed, so that the opening interval D1020 and the diaphragm width D1040 can be adjusted.
FIG. 19 is a block diagram of the irradiation control unit 31 according to the second embodiment. As shown in FIG. 19, it differs from the irradiation control unit 31 according to the first embodiment in that it further includes an aperture control unit 314.
The aperture control unit 314 controls the positions of the movable diaphragms 1020a and 1020b of the diaphragm 1020 based on the pixel value profile generated by the profile generator 362. The aperture control unit 314 can also control parallel translation of the movable diaphragms 1020a and 1020b.
FIG. 20 is a diagram explaining a control example of the aperture control unit 314. The left diagram is a schematic diagram showing the distribution of the laser light 11 on the exit-side surface of the diaphragm 1020; its vertical axis indicates brightness, and its horizontal axis corresponds to position along a line crossing the diaphragm 1020. The right diagram is the profile on the observation site 2 corresponding to the openings and light shielding portions of the left diagram; its vertical axis indicates the pixel value, and its horizontal axis indicates position. The aperture control unit 314 controls the opening interval D1020 and the diaphragm width D1040 based on the maximum pixel value L2 of the bright portions and the minimum pixel value L1 of the dark portions of the profile.
FIG. 21 is another diagram explaining a control example of the aperture control unit 314. The left diagram is a schematic diagram showing the distribution of the laser light 11 on the exit-side surface of the diaphragm 1020, centered on an opening; its vertical axis indicates brightness, and its horizontal axis corresponds to position along a line crossing the diaphragm 1020. The right diagram is the profile on the observation site 2 corresponding to the openings and light shielding portions of the left diagram; its vertical axis indicates the pixel value, and its horizontal axis indicates position. The aperture control unit 314 controls the opening interval D1020 and the diaphragm width D1040 based on the maximum pixel value L2 of the bright portions and the minimum pixel value L1 of the dark portions of the profile.
FIG. 22 is a diagram explaining yet another control example of the aperture control unit 314. Diagram (a) shows the profile on the observation site 2 before control, diagram (b) shows the profile on the observation site 2 after control, and diagram (c) shows the phase pitch of the diaphragm 1020. In each diagram, the vertical axis indicates the pixel value and the horizontal axis indicates position.
FIG. 23 is a flowchart showing a processing example of the aperture control unit 314. The processing example is described below with reference to FIG. 22.
First, under the control of the irradiation control unit 31 and the camera control unit 33, the laser irradiation unit 10 irradiates the laser through the diaphragm 1020, and the image acquisition unit 32 acquires the image data captured by the camera 20 (step S200).
Next, as shown in FIG. 22(a), the profile generator 362 generates a profile crossing the bright and dark portions (step S202). The aperture control unit 314 then calculates the geometric magnification between the laser irradiation unit 10 and the observation site 2 from the relationship between the intervals D1020 and D1040 and the intervals between the bright and dark portions on the profile (step S204).
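A one-line sketch of the step-S204 magnification estimate, taking the ratio of the bright/dark pitch measured on the profile to the corresponding pitch on the diaphragm plane. The argument names are illustrative, not from the text:

```python
def geometric_magnification(aperture_pitch, profile_pitch):
    """Magnification between the diaphragm plane and the observation site,
    e.g. aperture_pitch = D1020 + D1040 on the diaphragm and profile_pitch
    = the matching bright + dark interval measured on the profile."""
    return profile_pitch / aperture_pitch
```

The magnification obtained here is what converts the widths planned on the observation site (L_dark, L_ill) back into diaphragm settings (D1040, D1020) in the following steps.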
Next, as shown in FIG. 22(a), the aperture control unit 314 extracts a range L_diff based on 1/e^2 = 13.5% of the maximum value L2 of the profile. More specifically, the aperture control unit 314 extracts, as the range L_diff, the range from the boundary point between a bright portion and a dark portion down to the point where the profile falls to 1/e^2 = 13.5% of its maximum value L2 (step S204).
Next, as shown in FIG. 22(b), the aperture control unit 314 sets the dark-portion width L_dark to at most twice the range L_diff and adjusts the light-shielding-portion width D1040 based on the magnification calculated in step S204 (step S206). The aperture control unit 314 then sets the bright-portion width L_ill to the same value as the width L_dark and adjusts the opening width D1020 based on the magnification calculated in step S204 (step S206).
Next, the aperture control unit 314 sets the movement step width L_ill_step of the diaphragm 1020 to less than the range L_diff, sets the movement step width of the diaphragm 1020 based on the magnification, and ends the setting processing.
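The sizing rules of the steps above can be collected into one sketch. The 2× bound and the equal bright/dark widths follow the text; the exact step fraction below L_diff is a free choice and is labelled as such:

```python
def aperture_plan(l_diff, magnification, step_fraction=0.9):
    """From the diffusion range L_diff: dark width at most 2 * L_diff, bright
    width equal to the dark width, and a scan step below L_diff
    (step_fraction < 1 is an assumed choice, not prescribed by the text).
    Widths on the diaphragm plane are scaled by 1 / magnification."""
    l_dark = 2.0 * l_diff               # L_dark <= 2 * L_diff (upper bound used)
    l_ill = l_dark                      # L_ill set equal to L_dark
    l_ill_step = step_fraction * l_diff # L_ill_step < L_diff
    d1040 = l_dark / magnification      # light-shielding width on the diaphragm
    d1020 = l_ill / magnification       # opening width on the diaphragm
    return {"L_dark": l_dark, "L_ill": l_ill, "L_ill_step": l_ill_step,
            "D1040": d1040, "D1020": d1020}
```

Choosing the step below L_diff ensures that consecutive diaphragm positions overlap enough that every point on the observation site falls into a dark region in at least one frame.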
As described above, according to the present embodiment, the diaphragm 1020 is composed of a plurality of movable diaphragms 1020a and 1020b, and the aperture control unit 314 controls the movable diaphragms 1020a and 1020b based on the profile crossing the bright and dark portions. This makes it possible to set the pixel values of the bright and dark portions of the image data captured by the camera 20, and their ratio, to predetermined values.
Note that the present technology can also have the following configurations.
(1) A biological observation system including:
an irradiation unit that irradiates an observation site of a living body with coherent light through a diaphragm having a light shielding portion that blocks light and an opening that transmits light;
an image acquisition unit that acquires pixel signals including the region of the observation site irradiated with the coherent light; and
an image generation unit that generates a first image based on the pixel signals of a dark region, within the pixel signals, that is not irradiated with the coherent light.
(2) The biological observation system according to (1), wherein the light shielding portion is made of a metal that does not transmit light.
(3) The biological observation system according to (2), wherein the pixel signals of the dark region are based on light in which part of the coherent light directly irradiated onto the living body is scattered by the blood flow under the tissue of the observation site.
(4) The biological observation system according to (2), further including a pixel range setting unit that sets a predetermined pixel value range based on a profile, which is a sequence of pixel values along a positional change on the image based on the pixel signals, wherein the image generation unit generates the first image based on the predetermined pixel value range.
(5) The biological observation system according to (4), further including a second image generation unit that generates a second image based on the pixel signals of a bright region in the image that is directly irradiated with the coherent light.
(6) The biological observation system according to (5), further including a speckle calculation unit that calculates speckle data based on at least one of the first image and the second image.
(7) The biological observation system according to (6), wherein the speckle data includes a speckle contrast, and the speckle calculation unit generates an observation image based on the speckle contrast.
(8) The biological observation system according to (7), wherein the pixel signal acquisition unit acquires a plurality of images in each of which the region of the observation site irradiated with the coherent light is changed, and the image generation unit generates the first image based on the plurality of images.
(9) The biological observation system according to (8), further including an irradiation control unit that controls the irradiation intensity of the coherent light based on the value of the speckle contrast in the observation image.
(10) The biological observation system according to (9), wherein the irradiation control unit controls the irradiation intensity of the coherent light based on the pixel values of at least one of the dark region and the bright region in the image.
(11) The biological observation system according to (10), wherein the diaphragm is a variable diaphragm, the system further including an aperture control unit that sets the width of the openings and the width between the openings based on the pixel values in the dark region and the bright region in the image.
(12) The biological observation system according to (11), wherein the aperture control unit sets the width between the openings such that the ratio of predetermined pixel values in the dark region and the bright region falls within a predetermined range.
(13) A blood flow meter device using the biological observation system according to any one of (1) to (12).
(14) A microscope device using the biological observation system according to any one of (1) to (12).
(15) An endoscope device using the biological observation system according to any one of (1) to (12).
(16) A biological observation method including:
an irradiation step of irradiating an observation site of a living body with coherent light through a diaphragm having a light shielding portion that blocks light and an opening that transmits light;
a pixel signal acquisition step of acquiring pixel signals including the region of the observation site irradiated with the coherent light; and
an image generation step of generating a first image based on the pixel signals of a dark region, within the pixel signals, that is not irradiated with the coherent light.
(17) An irradiation device including:
a light source that generates coherent light; and
a movable diaphragm having a light shielding portion that blocks the coherent light and an opening that transmits light,
wherein at least one of the width of the light shielding portion and the width of the opening is controlled based on the pixel signals of a dark region, not irradiated with the coherent light, of an observation site of a living body irradiated with the coherent light.
The aspects of the present disclosure are not limited to the individual embodiments described above and also include various modifications that those skilled in the art may conceive, and the effects of the present disclosure are not limited to the contents described above. That is, various additions, changes, and partial deletions are possible without departing from the conceptual idea and spirit of the present disclosure derived from the contents defined in the claims and their equivalents.
10: laser irradiation unit, 32: image acquisition unit, 100: biological observation system, 102: diaphragm, 102a: opening, 102b: light shielding portion, 366: first image generation unit, 368: second image generation unit, 370: speckle calculator, 1020: diaphragm.

Claims (17)

1. A biological observation system comprising:
an irradiation unit that irradiates an observation site of a living body with coherent light through a diaphragm having a light shielding portion that blocks light and an opening that transmits light;
a pixel signal acquisition unit that acquires pixel signals including the region of the observation site irradiated with the coherent light; and
an image generation unit that generates a first image based on the pixel signals of a dark region, within the pixel signals, that is not irradiated with the coherent light.
2. The biological observation system according to claim 1, wherein the light shielding portion is made of a metal that does not transmit light.
3. The biological observation system according to claim 2, wherein the pixel signals of the dark region are based on light in which part of the coherent light directly irradiated onto the living body is scattered by the blood flow under the tissue of the observation site.
4. The biological observation system according to claim 2, further comprising a pixel range setting unit that sets a predetermined pixel value range based on a profile, which is a sequence of pixel values along a positional change on the image based on the pixel signals, wherein the image generation unit generates the first image based on the predetermined pixel value range.
5. The biological observation system according to claim 4, further comprising an image generation unit that generates a second image based on the pixel signals of a bright region irradiated with the coherent light.
6. The biological observation system according to claim 5, further comprising a speckle calculation unit that calculates speckle data based on at least one of the first image and the second image.
7. The biological observation system according to claim 6, wherein the speckle data includes a speckle contrast, and the speckle calculation unit generates an observation image based on the speckle contrast.
8. The biological observation system according to claim 7, wherein the pixel signal acquisition unit acquires a plurality of pixel signals in each of which the region of the observation site irradiated with the coherent light is changed, and the image generation unit generates the first image based on the plurality of pixel signals.
9. The biological observation system according to claim 8, further comprising an irradiation control unit that controls the irradiation intensity of the coherent light based on the value of the speckle contrast in the observation image.
10. The biological observation system according to claim 9, wherein the irradiation control unit controls the irradiation intensity of the coherent light based on the pixel signals of at least one of the dark region and the bright region in the pixel signals.
11. The biological observation system according to claim 10, wherein the diaphragm is a variable diaphragm, the system further comprising an aperture control unit that sets the width of the openings and the width between the openings based on the pixel values in the dark region and the bright region in the pixel signals.
12. The biological observation system according to claim 11, wherein the aperture control unit sets the width between the openings such that the ratio of predetermined pixel values in the dark region and the bright region falls within a predetermined range.
  13.  A blood flow meter device using the biological observation system according to claim 1.
  14.  A microscope device using the biological observation system according to claim 1.
  15.  An endoscope device using the biological observation system according to claim 1.
  16.  A biological observation method comprising:
     an irradiation step of irradiating coherent light onto an observation site of a living body through an aperture having a light shielding portion that blocks light and an opening that transmits light;
     a pixel signal acquisition step of acquiring a pixel signal covering the region of the observation site irradiated with the coherent light; and
     an image generation step of generating a first image based on pixel signals of a dark region in the image that is not directly irradiated with the coherent light.
  17.  An irradiation device comprising:
     a light source that generates coherent light; and
     a movable aperture having a light shielding portion that blocks the coherent light and an opening that transmits light,
     wherein at least one of the width of the light shielding portion and the width of the opening is controlled based on pixel signals of a dark region, not irradiated with the coherent light, of an observation site of a living body irradiated with the coherent light.
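Claims 6 through 9 build the observation image from the speckle contrast of the captured pixel signals. As a hedged illustration only (not the patented implementation), the conventional spatial speckle-contrast statistic used in laser speckle imaging is K = σ/μ, the ratio of the standard deviation to the mean intensity over a small sliding window; the function name and the 7×7 window size below are illustrative assumptions:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def speckle_contrast_map(intensity: np.ndarray, window: int = 7) -> np.ndarray:
    """Per-pixel spatial speckle contrast K = sigma / mean over a sliding
    window. Lower K generally indicates more motion (e.g. blood flow)
    blurring the speckle; higher K indicates static tissue.

    Sketch only: window size and handling of zero-mean pixels are
    illustrative choices, not values taken from the patent.
    """
    windows = sliding_window_view(intensity, (window, window))
    mean = windows.mean(axis=(-1, -2))
    std = windows.std(axis=(-1, -2))
    # Avoid division by zero where a window is entirely dark.
    return np.divide(std, mean, out=np.zeros_like(std), where=mean > 0)
```

A perfectly uniform image yields K = 0 everywhere, while a fully developed speckle pattern yields K close to 1; an observation image as in claim 7 could then be formed by mapping K (or 1/K²) to pixel values.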
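Claims 11 and 12 describe a variable aperture whose spacing is adjusted so that the ratio of pixel values in the dark region to those in the bright region stays within a predetermined range. A minimal sketch of the feedback rule this implies is given below; the threshold values, step size, and function name are assumptions for illustration, not values from the patent:

```python
def adjust_gap_width(dark_mean: float, bright_mean: float, gap: float,
                     ratio_lo: float = 0.05, ratio_hi: float = 0.20,
                     step: float = 1.0) -> float:
    """One iteration of the aperture-spacing feedback implied by claims
    11-12: widen the gap between openings when too much light leaks into
    the dark region, narrow it when the dark region is underexposed, and
    leave it unchanged while the dark/bright ratio is in range.
    """
    ratio = dark_mean / bright_mean if bright_mean > 0 else 0.0
    if ratio > ratio_hi:
        return gap + step              # dark region too bright: widen gap
    if ratio < ratio_lo:
        return max(gap - step, 0.0)    # dark region too dark: narrow gap
    return gap                         # ratio within the predetermined range
```

In a running system this rule would be applied per frame, with `dark_mean` and `bright_mean` taken from the pixel-signal regions that the image generator already segments.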
PCT/JP2022/007134 2021-03-22 2022-02-22 Biological observation system, biological observation method, and irradiation device WO2022202051A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021047740A JP2024061695A (en) 2021-03-22 2021-03-22 Biological observation system, biological observation method, and irradiation device
JP2021-047740 2021-03-22

Publications (1)

Publication Number Publication Date
WO2022202051A1 true WO2022202051A1 (en) 2022-09-29

Family

ID=83397025

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/007134 WO2022202051A1 (en) 2021-03-22 2022-02-22 Biological observation system, biological observation method, and irradiation device

Country Status (2)

Country Link
JP (1) JP2024061695A (en)
WO (1) WO2022202051A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009122931A1 (en) * 2008-04-03 2009-10-08 国立大学法人九州工業大学 Authentication method and device using subcutaneous blood flow measurement
WO2018207471A1 (en) * 2017-05-09 2018-11-15 ソニー株式会社 Control device, control system, control method, and program
US20190274548A1 (en) * 2018-03-08 2019-09-12 Hi Llc Devices and methods to convert conventional imagers into lock-in cameras


Also Published As

Publication number Publication date
JP2024061695A (en) 2024-05-08

Similar Documents

Publication Publication Date Title
US10102646B2 (en) Optical image measuring apparatus
EP2581035B1 (en) Fundus observation apparatus
US9521330B2 (en) Endoscopic image processing device, information storage device and image processing method
RU2633168C2 (en) Image processing device and image processing method
JP5916110B2 (en) Image display device, image display method, and program
TWI467127B (en) Means, observation means and an image processing method for measuring the shape of
JP6276943B2 (en) Ophthalmic equipment
JP2018515759A (en) Device for optical 3D measurement of objects
JP4751689B2 (en) Eye surface analysis system
WO2017141524A1 (en) Imaging device, imaging method, and imaging system
WO2012057284A1 (en) Three-dimensional shape measurement device, three-dimensional shape measurement method, manufacturing method of structure, and structure manufacturing system
JP2017170064A (en) Image analysis apparatus and image analysis method
WO2018229832A1 (en) Endoscope system
US11179218B2 (en) Systems and methods for multi-modal sensing of depth in vision systems for automated surgical robots
JPWO2018211982A1 (en) Image processing apparatus and method, and image processing system
US11050931B2 (en) Control device and control method
WO2022202051A1 (en) Biological observation system, biological observation method, and irradiation device
JP2022044838A (en) Ophthalmologic apparatus and data collection method
JP2010240068A (en) Ophthalmological observation device
JP2019063047A (en) Device, method, and program for visualizing vascular network of skin
JP6564076B2 (en) Ophthalmic equipment
JP7044420B1 (en) OCT device control device and program
US20240249408A1 (en) Systems and methods for time of flight imaging
JP7059049B2 (en) Information processing equipment, information processing methods and programs
JP6527970B2 (en) Ophthalmic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22774851

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22774851

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP