WO2020137276A1 - Imaging device - Google Patents

Imaging device

Info

Publication number
WO2020137276A1
Authority
WO
WIPO (PCT)
Prior art keywords
signal
image
period
light
image sensor
Prior art date
Application number
PCT/JP2019/045582
Other languages
English (en)
Japanese (ja)
Inventor
Kenji Narumi
Takamasa Ando
Original Assignee
Panasonic IP Management Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Panasonic IP Management Co., Ltd.
Publication of WO2020137276A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 10/00 Other methods or instruments for diagnosis, e.g. instruments for taking a cell sample, for biopsy, for vaccination diagnosis; Sex determination; Ovulation-period determination; Throat striking implements
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B 5/026 Measuring blood flow
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/145 Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue
    • A61B 5/1455 Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue using optical sensors, e.g. spectral photometrical oximeters
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/17 Systems in which incident light is modified in accordance with the properties of the material investigated
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/70 SSIS architectures; Circuits associated therewith

Definitions

  • the present disclosure relates to an imaging device.
  • Patent Document 1 discloses an imaging device that acquires internal information of an object according to the depth in a non-contact manner.
  • the present disclosure provides a technology capable of non-contact and highly accurate acquisition of internal information of an object according to a depth.
  • An imaging device includes: a light source that emits a plurality of light pulses with which a target portion is irradiated; an image sensor that includes a plurality of pixels and detects a plurality of reflected light pulses that have returned from the target portion in response to the plurality of light pulses; and a control circuit that controls the light source and the image sensor.
  • The control circuit causes the light source to emit the plurality of light pulses toward the target portion, and causes the image sensor to detect, in each of n1 (n1 is an integer of 1 or more) reflected light pulses among the plurality of reflected light pulses, the component of a first period that includes at least part of the period from the start of the increase in the intensity of each of the n1 reflected light pulses to the start of its decrease.
  • The control circuit also causes the image sensor to detect, in each of n2 (n2 is an integer of 2 or more) reflected light pulses among the plurality of reflected light pulses, the component of a second period, the second period being the period from the start of the decrease in the intensity of each of the n2 reflected light pulses to the end of that decrease.
  • The control circuit further executes, n3 times (n3 is an integer of 2 or more), an operation of causing the image sensor to detect light during a period in which none of the plurality of light pulses is applied to the target portion. Here, n1 < n2 and n1 < n3.
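The gating scheme above can be sketched with a small numerical model. Everything in this sketch is an illustrative assumption, not a value from the disclosure: a trapezoidal pulse stands in for the reflected light (with a stretched fall to mimic delayed internal scattering), one shutter gate integrates the rising/plateau portion n1 times, another integrates the falling edge n2 times, and a background accumulation with the source off runs n3 times.

```python
import numpy as np

def reflected_pulse(t, t_on=10.0, t_off=20.0, rise=2.0, fall=4.0):
    """Illustrative trapezoidal reflected-pulse intensity (arbitrary units).
    The fall is stretched relative to the rise to mimic the time delay
    added by internal scattering."""
    up = np.clip((t - t_on) / rise, 0.0, 1.0)            # intensity increase
    down = np.clip((t_off + fall - t) / fall, 0.0, 1.0)  # intensity decrease
    return up * down

def accumulate(gate_start, gate_end, n_exposures, source_on=True,
               background=0.05, dt=0.01):
    """Signal accumulated over n_exposures identical shutter openings."""
    t = np.arange(0.0, 40.0, dt)
    gate = (t >= gate_start) & (t < gate_end)
    signal = reflected_pulse(t) if source_on else np.zeros_like(t)
    return n_exposures * np.sum((signal + background)[gate]) * dt

# Assumed counts with n1 < n2 and n1 < n3: a few exposures suffice for the
# bright first period; the dim falling edge and the background need many.
n1, n2, n3 = 4, 64, 64
first_period   = accumulate(10.0, 20.0, n1)                  # rise to plateau
second_period  = accumulate(20.0, 24.0, n2)                  # falling edge
background_sig = accumulate(20.0, 24.0, n3, source_on=False) # source off
print(first_period, second_period, background_sig)
```

Even with sixteen times more exposures, the falling-edge accumulation stays far smaller per exposure than the first-period one, which is the dynamic-range problem the differing counts address.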
  • FIG. 1A is a diagram schematically illustrating an imaging device according to an exemplary embodiment of the present disclosure.
  • FIG. 1B is a diagram showing an example of a temporal change in the intensity of light reaching the image sensor.
  • FIG. 1C is a diagram in which the width of the input light pulse is represented on the horizontal axis and the amount of light detected by the image sensor is represented on the vertical axis.
  • FIG. 1D is a diagram showing an example of a schematic configuration of one pixel of the image sensor.
  • FIG. 1E is a diagram illustrating an example of the configuration of the image sensor.
  • FIG. 2A is a diagram schematically illustrating an example of a timing chart when the image A is captured.
  • FIG. 2B is a diagram schematically illustrating an example of a timing chart when the image B is captured.
  • FIG. 2C is a diagram schematically illustrating an example of a timing chart when the image C is captured.
  • FIG. 2D is a diagram schematically illustrating an example of a timing chart when the image D is captured.
  • FIG. 2E is a flowchart showing the outline of the operation relating to the light source and the image sensor by the control circuit.
  • FIG. 3 is a diagram illustrating an example of arithmetic processing according to the exemplary embodiment.
  • FIG. 4 is a diagram illustrating another example of the timing chart when the image D is acquired from the image A in the exemplary embodiment.
  • FIG. 5 is a diagram showing another example of the timing chart in the case of acquiring the image D from the image A in the exemplary embodiment.
  • FIG. 6 is a diagram illustrating another example of the timing chart when the image D is acquired from the image A in the exemplary embodiment.
  • FIG. 7 is a diagram illustrating an example of a calculation process for acquiring the image E, the image F, and the image G in the exemplary embodiment.
  • FIG. 8 is a diagram showing an example of a timing chart when acquiring the images A, C, and D and an example of a calculation process for acquiring the images E and G in the exemplary embodiment.
  • FIG. 9 is a diagram showing an example of a timing chart in the case of acquiring the image B, the image C, and the image D and an example of a calculation process for generating the image F and the image G in the exemplary embodiment.
  • FIG. 10 is a diagram showing a measurement result showing the relationship between the number of exposures and the average luminance value of the central portion of the background image.
  • FIG. 11 is a diagram showing a time chart in the second embodiment.
  • FIG. 12 is a diagram showing a result of measuring the relationship between the average luminance value of the image B and the average luminance value of the image C.
  • FIG. 13A is a diagram schematically illustrating an example in which the imaging device according to the exemplary embodiment is applied to a head mounted display.
  • FIG. 13B is a diagram schematically illustrating an example in which the imaging device according to the exemplary embodiment is applied to a smartphone.
  • FIG. 14 is a flowchart of the operation of the control circuit and the signal processing circuit in the example shown in FIGS. 13A and 13B.
  • FIG. 15 is a diagram showing another example of the timing chart when the image D is acquired from the image A in the exemplary embodiment.
  • FIG. 16 is a diagram illustrating another example of the timing chart when the image D is acquired from the image A in the exemplary embodiment.
  • Patent Document 1 discloses an imaging device that images a light component in a limited time range of a reflected light pulse by adjusting the time difference between the emission timing of a light pulse of a light source and the shutter timing of an image sensor.
  • In the present specification, such imaging is referred to as “time-resolved imaging”.
  • the detected light amount depends on the depth. The reason for this is as follows.
  • When the scatterer is illuminated with light and the return light is detected, the component reflected at the surface makes up the largest part of the return light.
  • the component of the light returning via the deep portion of the scatterer is smaller than the component of the light reflected by the surface.
  • the amount of light reflected back from the skin surface is the largest.
  • the amount of light scattered back in the scalp is smaller than the amount of light reflected back on the skin surface.
  • the amount of light scattered back in the brain is extremely small.
  • the brightness of the image for obtaining the appearance information of the face is the highest.
  • the brightness of the image for obtaining the skin blood flow information is lower than the brightness of the image for obtaining the appearance information of the face.
  • the brightness of the image for acquiring cerebral blood flow information is extremely low.
  • The dynamic range of the brightness of the images for acquiring these pieces of information is larger than the dynamic range of the image sensor. Therefore, if all the images are captured with the same exposure time, information in the high-brightness image is lost due to brightness saturation.
  • a practical method of preventing the saturation of brightness is to change the exposure time of the image according to the acquired information.
  • In time-resolved imaging, a light pulse is emitted from the light source one or more times, and the shutter of the image sensor is operated at the same timing and with the same exposure time for each pulse.
  • However, the shutter exposure time per light pulse cannot be changed in order to increase or decrease the acquired light amount. This is because changing the exposure time per light pulse changes the relationship between the timing of the light pulse and the timing of opening or closing the shutter, and therefore changes the depth from which information is acquired.
  • the integrated light amount during the exposure time of an image can be increased or decreased by changing the number of times light pulses are emitted and the number of times shutters are used.
  • the “total exposure time of an image” means the time obtained by multiplying the exposure time per light pulse by the number of exposures.
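As a concrete reading of this definition (all numbers are invented for the example), the integrated light amount of an image is adjusted through the exposure count while the per-pulse gate stays fixed:

```python
def total_exposure_time_ns(exposure_per_pulse_ns: float, num_exposures: int) -> float:
    """Total exposure time of an image: per-pulse shutter time x exposure count.
    The per-pulse time is fixed by the gating scheme, so the integrated light
    amount is adjusted by changing num_exposures."""
    return exposure_per_pulse_ns * num_exposures

# Illustrative numbers: a fixed 10 ns gate, light doubled by doubling the count.
print(total_exposure_time_ns(10, 500))   # 5000
print(total_exposure_time_ns(10, 1000))  # 10000
```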
  • Background light is imaged to suppress these errors.
  • Specifically, with the emission of the light pulse stopped so that the light amount of the reflected light pulse is zero, the shutter is operated in the same manner as when the light pulse is emitted.
  • an image of background light is acquired.
  • this image is referred to as a “background image”.
  • An image for acquiring desired information is acquired by subtracting the background image from the image acquired by emitting the light pulse.
  • the image acquired by emitting the light pulse will be referred to as an “original image”, and the image for acquiring desired information will be referred to as a “difference image”.
  • the difference between the shooting conditions when acquiring the original image and the shooting conditions when acquiring the background image is only the emission state of the light pulse. That is, the shutter condition such as the exposure time and the condition of the optical system such as the diaphragm or the ND filter are the same. This is because it is considered that if only the condition of the illumination light pulse is changed, only the action of the illumination light pulse on the object can be extracted by the above-mentioned subtraction.
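A minimal sketch of this subtraction, assuming plain array image buffers (the function name and the sample pixel values are ours, not from the disclosure):

```python
import numpy as np

def difference_image(original: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Subtract the background image (same shutter and optics settings,
    light source off) from the original image (light source on)."""
    # Work in a signed type so the subtraction cannot wrap around.
    diff = original.astype(np.int32) - background.astype(np.int32)
    return np.clip(diff, 0, None)  # negative values are noise; clamp to 0

# Illustrative 2x2 sensor readouts (arbitrary digital numbers).
original   = np.array([[120,  80], [ 90, 200]], dtype=np.uint16)
background = np.array([[ 20,  25], [ 30,  15]], dtype=np.uint16)
print(difference_image(original, background))
```

Because only the illumination state differs between the two captures, what remains after the subtraction is attributable to the action of the illumination pulse on the object.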
  • the measurement rate means the speed at which each piece of information is acquired.
  • the increase in the number of times of image acquisition causes an increase in calculation cost associated with the control of the image sensor.
  • the present inventors have come up with the imaging device described in the following items based on the above examination.
  • The imaging device includes: a light source that emits a plurality of light pulses with which a target portion is irradiated; an image sensor that includes a plurality of pixels and detects a plurality of reflected light pulses that have returned from the target portion in response to the plurality of light pulses; and a control circuit that controls the light source and the image sensor.
  • The control circuit causes the light source to emit the plurality of light pulses toward the target portion, and causes the image sensor to detect, in each of n1 (n1 is an integer of 1 or more) reflected light pulses among the plurality of reflected light pulses, the component of a first period that includes at least part of the period from the start of the increase in the intensity of each of the n1 reflected light pulses to the start of its decrease.
  • The control circuit also causes the image sensor to detect, in each of n2 (n2 is an integer of 2 or more) reflected light pulses among the plurality of reflected light pulses, the component of a second period, the second period being the period from the start of the decrease in the intensity of each of the n2 reflected light pulses to the end of that decrease.
  • The control circuit further executes, n3 times (n3 is an integer of 2 or more), an operation of causing the image sensor to detect light during a period in which none of the plurality of light pulses is applied to the target portion. Here, n1 < n2 and n1 < n3.
  • The control circuit may change the timing at which the image sensor detects the plurality of reflected light pulses in each certain cycle, and within the cycle, n1 + n2 > n3 may hold.
  • The imaging device may further include a signal processing circuit. The signal processing circuit acquires a first signal obtained by detecting the n1 reflected light pulses, a second signal obtained by detecting the n2 reflected light pulses, and a third signal obtained by the operation of detecting light n3 times, and may generate image data in which the influence of background light is reduced, based on a signal obtained from the first signal and the third signal and on a signal obtained from the second signal and the third signal.
  • The target portion may be the user's head, and the signal processing circuit may further generate, based on the first signal, the second signal, and the third signal, information indicating a change in the appearance of the user's face and a change in the user's cerebral blood flow.
  • n3 may be equal to n2.
  • The control circuit may further cause the image sensor to detect, in each of n4 (n4 is an integer of 1 or more) reflected light pulses among the plurality of reflected light pulses, the component of a fourth period.
  • The fourth period may include at least part of a rising period, that is, the period from when the intensity of each of the n4 reflected light pulses starts to increase until the increase ends. Here, n4 < n2 may hold.
  • the imaging device may further include a signal processing circuit.
  • The signal processing circuit acquires the first signal obtained by detecting the n1 reflected light pulses, the second signal obtained by detecting the n2 reflected light pulses, the third signal obtained by the operation of detecting light n3 times, and a sixth signal obtained by detecting the n4 reflected light pulses.
  • The signal processing circuit may then perform at least one of: generating image data in which the influence of the background light is reduced, based on a signal obtained from the first signal and the third signal; generating second image data in which the influence of the background light is reduced, based on a signal obtained from the second signal and the third signal; and generating third image data in which the influence of the background light is reduced, based on the seventh signal obtained from the sixth signal and the third signal.
  • The target portion may be the user's head, and the signal processing circuit may further generate, based on the first signal, the second signal, the third signal, and the sixth signal, information indicating changes in the appearance of the user's face, changes in the user's skin blood flow, and changes in the user's cerebral blood flow.
  • An imaging device according to another aspect includes: a light source that emits a plurality of light pulses with which a target portion is irradiated; an image sensor that includes a plurality of pixels and detects a plurality of reflected light pulses that have returned from the target portion in response to the plurality of light pulses; and a control circuit that controls the light source and the image sensor.
  • The control circuit causes the light source to emit the plurality of light pulses toward the target portion, and causes the image sensor to detect, in each of n1 (n1 is an integer of 1 or more) reflected light pulses among the plurality of reflected light pulses, the component of a first period that includes at least part of the period from the start of the increase in the intensity of each of the n1 reflected light pulses to the start of its decrease.
  • The control circuit also causes the image sensor to detect, in each of n2 (n2 is an integer of 2 or more) reflected light pulses among the plurality of reflected light pulses, the component of a second period, the second period being the period from the start of the decrease in the intensity of each of the n2 reflected light pulses to the end of that decrease.
  • The control circuit further executes, n3 times (n3 is an integer of 2 or more), an operation of causing the image sensor to detect light during a period in which none of the plurality of light pulses is applied to the target portion. Here, n1 < n2 and n3 < n2.
  • All or part of a circuit, unit, device, member, or part, or all or part of a functional block in a block diagram, can be implemented by, for example, one or more electronic circuits including a semiconductor device, a semiconductor integrated circuit (IC), or an LSI (large-scale integration).
  • the LSI or IC may be integrated on one chip, or may be configured by combining a plurality of chips.
  • the functional blocks other than the memory element may be integrated on one chip.
  • the name may be changed depending on the degree of integration, and may be referred to as a system LSI, VLSI (very large scale integration), or ULSI (ultra large scale integration).
  • A field-programmable gate array (FPGA), which is programmed after the LSI is manufactured, or a reconfigurable logic device in which the connection relationships inside the LSI can be reconfigured or the circuit sections inside the LSI can be set up, can be used for the same purpose.
  • the functions or operations of all or some of the circuits, units, devices, members or parts can be executed by software processing.
  • The software is recorded on one or more non-transitory recording media such as ROMs, optical discs, or hard disk drives, and when the software is executed by a processor, the functions specified by the software are performed by the processor and peripheral devices.
  • the system or apparatus may include one or more non-transitory storage media having software recorded on it, a processor, and any required hardware devices, such as interfaces.
  • the object to be imaged is a light scatterer such as a living body. More specifically, the imaging apparatus according to the present embodiment detects, for example, the appearance of the user, the distribution of skin blood flow and the distribution of blood flow in the brain, and changes with time. Accordingly, it is possible to generate a two-dimensional still image or a two-dimensional moving image showing the appearance of the user, the distribution of the skin blood flow, and the distribution of the blood flow in the brain. By using the information of the image, for example, the state of the user can be estimated. The state of the user is, for example, the degree of concentration or emotion.
  • the imaging device according to the present embodiment can improve the measurement rate of biometric information as compared with the related art. The configuration and operation of the image pickup apparatus according to this embodiment that enables such detection will be described below.
  • FIG. 1A is a diagram schematically illustrating an imaging device 100 according to an exemplary embodiment of the present disclosure.
  • the image pickup apparatus 100 includes a light source 101, an image sensor 102, a control circuit 105, and a signal processing circuit 106.
  • the image sensor 102 includes a photoelectric conversion unit 103 and a charge storage unit 104.
  • the signal processing circuit 106 includes an image generation unit 107 and a measurement unit 108.
  • the light source 101 is, for example, a laser light source that emits a narrow band optical pulse having a center wavelength of 750 nm.
  • the image sensor 102 detects the reflected light from the user 10 as an image.
  • the control circuit 105 is connected to the light source 101 and the image sensor 102, and controls the operations of these. More specifically, the control circuit 105 controls the emission timing of the light source 101 and the timing of signal accumulation and signal emission in each pixel of the image sensor 102 in synchronization.
  • the signal processing circuit 106 receives the image signal output from the image sensor 102 in the image generation unit 107, calculates the image signal, and outputs image data used for measuring the information of the user 10.
  • the information of the user 10 is, for example, at least one of appearance information of the face of the user 10, skin blood flow information, and brain blood flow information.
  • the image signal used for the calculation may be a plurality of different types of image signals.
  • the signal processing circuit 106 estimates the state of the user 10 based on the image data output from the image generation unit 107, and outputs information indicating the state of the user 10.
  • The output information may be image information, numerical information, a code expressed in symbols, or a determination result such as normal or abnormal.
  • biometric information means a measurable amount of a living body.
  • Biometric information includes various quantities, for example blood flow, blood pressure, heart rate, pulse rate, respiratory rate, body temperature, EEG, oxygenated hemoglobin concentration in blood, deoxygenated hemoglobin concentration in blood, blood oxygen saturation, and skin reflectance spectra. Part of the biometric information may be called vital signs. Each component of the imaging device 100 is described below.
  • the light source 101 emits light toward the target portion of the user 10.
  • the target part may be, for example, the head of the user 10, and more specifically, the forehead of the user 10.
  • the target part of the user 10 may be, for example, an arm, a torso, or a foot.
  • the light emitted from the light source 101 and reaching the user 10 is divided into a surface reflection component A1 reflected on the surface of the user 10 and an internal scattering component A2 scattered inside the user 10.
  • The internal scattering component A2 is a component that is reflected or scattered once, or multiply scattered, inside the living body.
  • the surface reflection component A1 includes three components, a direct reflection component, a diffuse reflection component, and a scattering reflection component.
  • the direct reflection component is a reflection component having the same incident angle and reflection angle.
  • the diffuse reflection component is a component that is diffused and reflected by the uneven shape of the surface.
  • the scattered reflection component is a component that is scattered and reflected by the internal tissue near the surface. When light is emitted toward the head of the user 10, the scattered reflection component is a component that is scattered and reflected inside the epidermis.
  • the surface reflection component A1 reflected by the surface of the user 10 may include these three components. The traveling directions of the surface reflection component A1 and the internal scattering component A2 change due to reflection or scattering, and part of them reaches the image sensor 102.
  • the wavelength of the light emitted from the light source 101 may be any wavelength included in the wavelength range of 650 nm or more and 950 nm or less, for example. This wavelength range is included in the wavelength range from red to near infrared.
  • the term "light” is used not only for visible light but also for infrared light.
  • The above wavelength range is called the “biological window” and has the property of being absorbed relatively little by water and skin in the living body. When a living body is to be detected, the detection sensitivity can be increased by using light in this wavelength range.
  • Inside the body, the light used is considered to be absorbed mainly by oxygenated hemoglobin (HbO2) and deoxygenated hemoglobin (Hb). Oxygenated hemoglobin and deoxygenated hemoglobin differ in the wavelength dependence of their light absorption. Generally, when blood flow changes, the concentrations of oxygenated hemoglobin and deoxygenated hemoglobin change, and the degree of light absorption changes along with them. Therefore, when the blood flow changes, the detected light amount also changes with time.
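This wavelength dependence is what makes two-wavelength measurement useful. As a hedged sketch (the extinction coefficients, path length, and absorbance changes below are placeholder numbers, not values from the disclosure), changes in Hb and HbO2 concentration can be estimated from absorbance changes at two wavelengths by solving a 2x2 linear system, in the spirit of the modified Beer-Lambert approach:

```python
import numpy as np

# Placeholder molar extinction coefficients [1/(mM*cm)] at two wavelengths;
# a real analysis would use tabulated values for the chosen wavelengths.
E = np.array([[1.4, 0.6],    # wavelength 1: [eps_Hb, eps_HbO2]
              [0.8, 1.1]])   # wavelength 2: [eps_Hb, eps_HbO2]
path_length_cm = 1.0         # assumed effective optical path length

def hemoglobin_changes(dA: np.ndarray) -> np.ndarray:
    """Solve (E * L) @ [dHb, dHbO2] = dA for the concentration changes."""
    return np.linalg.solve(E * path_length_cm, dA)

dA = np.array([0.02, 0.05])          # measured absorbance changes (invented)
d_hb, d_hbo2 = hemoglobin_changes(dA)
print(d_hb, d_hbo2)
```

The same arithmetic extends pixel-by-pixel to images, which is how blood-flow distribution maps can be derived from two-wavelength image pairs.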
  • the light source 101 may emit light of two or more wavelengths included in the above wavelength range. Such light having a plurality of wavelengths may be emitted from each of a plurality of light sources.
  • In that case, the optical path lengths of the light beams of the two wavelengths that return to the image sensor 102 via the target portion of the user 10 may be designed to be substantially equal. In such a design, for example, the distance between the image sensor 102 and one light source is made equal to the distance between the image sensor 102 and the other light source, and the two light sources can be placed rotationally symmetrically about the image sensor 102.
  • the user 10 is measured in a non-contact manner, so that the light source 101 designed in consideration of the influence on the retina can be used.
  • the light source 101 that satisfies Class 1 of the laser safety standard established in each country may be used.
  • When Class 1 is satisfied, the user 10 is irradiated with light of such low illuminance that the accessible emission limit (AEL) is less than 1 mW.
  • the light source 101 itself may not satisfy Class 1.
  • a laser safety standard class 1 may be met by installing a diffuser or ND filter in front of the light source 101 to diffuse or attenuate the light.
  • The light amount of the internal scattering component A2 can be a very small value, from one several-thousandth to one several-ten-thousandth of that of the surface reflection component A1. Furthermore, considering laser safety standards, the amount of light that can be emitted is extremely small, which makes the internal scattering component A2 very difficult to detect. Even so, if the light source 101 emits a light pulse having a relatively large pulse width, the integrated amount of the time-delayed internal scattering component A2 can be increased. This increases the detected light amount and improves the S/N.
  • the light source 101 emits an optical pulse having a pulse width of 3 ns or more, for example.
  • the temporal spread of light scattered in a living tissue such as the brain is about 4 ns.
  • FIG. 1B is a diagram showing an example of a temporal change in the intensity of light reaching the image sensor 102.
  • FIG. 1B shows an example of three cases in which the width of the input light pulse emitted from the light source 101 is 0 ns, 3 ns, and 10 ns. As shown in FIG. 1B, as the width of the light pulse from the light source 101 is increased, the amount of light of the internal scattered component A2 that appears at the rear end of the light pulse returned from the user 10 increases.
  • FIG. 1C is a diagram in which the horizontal axis represents the width of the input light pulse, and the vertical axis represents the amount of light detected by the image sensor 102.
  • the image sensor 102 includes an electronic shutter.
  • the result of FIG. 1C was obtained under the condition that the electronic shutter was opened 1 ns after the time when the rear end of the light pulse was reflected by the surface of the user 10 and reached the image sensor 102.
  • the reason for selecting this condition is that the ratio of the surface reflection component A1 is high immediately after the rear end of the light pulse arrives, as compared with the internal scattering component A2.
  • When the pulse width of the light pulse emitted from the light source 101 is 3 ns or more, the detected light amount is maximized.
  • The light source 101 may emit a light pulse having a pulse width of 5 ns or more, or even 10 ns or more. On the other hand, if the pulse width is too large, the amount of unused light increases, which is wasteful. Therefore, the light source 101 emits a light pulse having a pulse width of, for example, 50 ns or less. Alternatively, the light source 101 may emit a light pulse having a pulse width of 30 ns or less, or even 20 ns or less.
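The saturation trend of FIG. 1C can be reproduced with a toy model; the exponential impulse response and all numbers here are assumptions for illustration only. The internally scattered return is modeled as a square input pulse convolved with an exponential impulse response of a few nanoseconds, and the shutter integrates the tail starting 1 ns after the trailing edge:

```python
import numpy as np

def detected_tail_light(pulse_width_ns: float, tau_ns: float = 4.0,
                        gate_delay_ns: float = 1.0, dt: float = 0.05) -> float:
    """Light detected after the trailing edge of the input pulse.

    Toy model: the internally scattered return is the square input pulse
    convolved with an exponential impulse response exp(-t/tau), and the
    shutter opens gate_delay_ns after the trailing edge arrives.
    """
    t = np.arange(0.0, pulse_width_ns + 60.0, dt)
    pulse = (t < pulse_width_ns).astype(float)
    impulse = np.exp(-t / tau_ns)
    scattered = np.convolve(pulse, impulse)[: len(t)] * dt
    gate = t >= pulse_width_ns + gate_delay_ns
    return float(np.sum(scattered[gate]) * dt)

# The detected amount grows with pulse width and levels off once the pulse
# is a few times longer than tau, matching the qualitative trend of FIG. 1C.
for w in (1, 3, 10, 30):
    print(w, detected_tail_light(w))
```

With tau around 4 ns the curve flattens beyond roughly 10 ns, which is consistent with the text's observation that widths above about 3 ns capture most of the available internal-scatter light while very wide pulses mostly add unused light.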
  • the irradiation pattern of the light source 101 may be, for example, a pattern having a uniform intensity distribution within the irradiation area.
  • In this respect, the present embodiment differs from the conventional biometric device disclosed in, for example, Japanese Patent Laid-Open No. 11-164826.
  • In that conventional device, the detector and the light source are separated by about 3 cm so that the surface reflection component is spatially separated from the internal scattering component; therefore, discrete light irradiation must be used.
  • the imaging device 100 according to the present embodiment can reduce the surface reflection component A1 by temporally separating it from the internal scattering component A2. Therefore, the light source 101 having an irradiation pattern having a uniform intensity distribution can be used.
  • the irradiation pattern having a uniform intensity distribution may be formed by diffusing the light emitted from the light source 101 with a diffusion plate.
  • the internal scattered component A2 can be detected even just below the irradiation point of the user 10.
  • the measurement resolution can be increased.
  • the image sensor 102 outputs a signal indicating the light amount of at least a part of the light emitted from the light source 101 and returned from the target portion of the user 10.
  • The signal is, for example, a signal corresponding to the intensity of the component included in at least part of the rising period of the reflected light pulse, or a signal corresponding to the intensity of the component included in at least part of the falling period.
  • the image sensor 102 may include a plurality of photoelectric conversion elements and a plurality of charge storage units. Specifically, the image sensor 102 may include a plurality of photodetection cells arranged two-dimensionally. Such an image sensor 102 can acquire the two-dimensional information of the user 10 at one time. In the present specification, the photodetection cell is also referred to as “pixel”.
  • the image sensor 102 can be, for example, any image sensor such as a CCD image sensor or a CMOS image sensor. More generally, a photodetector including at least one photoelectric conversion element and at least one charge storage section may be used.
  • the image sensor 102 may include an electronic shutter.
  • the electronic shutter is a circuit that controls the timing of image capturing.
• Alternatively, the control circuit 105 may have the function of an electronic shutter.
  • the electronic shutter controls a single signal accumulation period in which the received light is converted into an effective electric signal and accumulated, and a period in which the signal accumulation is stopped.
  • the single signal accumulation period can also be referred to as an “exposure period”.
  • the time length of the “exposure period” corresponds to the above-mentioned “shutter exposure time per pulse”.
  • the width of the exposure period may be referred to as the “shutter width”.
  • the time from the end of one exposure period to the start of the next exposure period may be referred to as the "non-exposure period”.
  • the exposure state may be referred to as “OPEN”, and the exposure stop state may be referred to as “CLOSE”.
  • the image sensor 102 can adjust the exposure period and the non-exposure period by sub-nanoseconds, for example, in the range of 30 ps to 1 ns, using an electronic shutter.
• A conventional TOF camera, whose purpose is to measure distance, detects all of the light that is emitted from the light source 101, reflected by the subject, and returned.
• In such a camera, the shutter width therefore needs to be larger than the pulse width of the light.
• In contrast, the shutter width can be set to a value of, for example, 1 ns or more and 30 ns or less. According to the imaging device 100 of the present embodiment, the shutter width can be reduced, so that the influence of dark current included in the detection signal can be reduced.
• The light attenuation rate inside the body is very large.
• For example, the emitted light may be attenuated to about one millionth of the incident light.
• Therefore, irradiating only one pulse may not provide a sufficient amount of light.
• For the internal scattering component A2 in particular, the light amount is weak.
  • the light source 101 emits a light pulse a plurality of times, and the image sensor 102 is also exposed a plurality of times by the electronic shutter in response thereto, whereby the detection signals can be integrated to improve the sensitivity.
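The benefit of integrating the detection signal over many pulse emissions can be sketched numerically. In the following Python toy model (all numbers are illustrative, not from the specification), each exposure contributes a tiny signal plus independent noise; summing n exposures grows the signal linearly while the noise grows only as the square root of n, improving the signal-to-noise ratio.

```python
import random

def integrate_pulses(n_pulses, signal_per_pulse, noise_sigma, seed=0):
    """Accumulate a weak per-pulse signal over n_pulses exposures,
    adding independent Gaussian noise on each exposure (toy model)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_pulses):
        total += signal_per_pulse + rng.gauss(0.0, noise_sigma)
    return total

# One exposure is buried in noise; 100,000 integrated exposures recover
# a clear signal: expected sum 100, noise standard deviation ~3.2.
single = integrate_pulses(1, signal_per_pulse=0.001, noise_sigma=0.01)
summed = integrate_pulses(100_000, signal_per_pulse=0.001, noise_sigma=0.01)
print(single, summed)
```

This is why tens of thousands to hundreds of millions of repetitions per frame, as described below, can make the very weak internal scattering component measurable.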
  • the image sensor 102 may include a plurality of pixels arranged two-dimensionally on the imaging surface.
  • Each pixel may include a photoelectric conversion element such as a photodiode and one or more charge storage units.
• An example will be described in which each pixel has a photoelectric conversion element that generates a signal charge according to the amount of received light by photoelectric conversion, a charge storage unit that stores the signal charge generated by the surface reflection component A1 of the light pulse, and a charge storage unit that stores the signal charge generated by the internal scattering component A2 of the light pulse.
  • the control circuit 105 causes the image sensor 102 to detect the surface reflection component A1 by detecting the portion of the light pulse returning from the head of the user 10 before the start of the fall.
  • the control circuit 105 also causes the image sensor 102 to detect the internal scattered component A2 by detecting the portion of the light pulse returned from the head of the user 10 after the start of the fall.
  • the light source 101 in this example emits light of two types of wavelengths.
  • FIG. 1D is a diagram showing an example of a schematic configuration of one pixel 201 of the image sensor 102. Note that FIG. 1D schematically illustrates the structure of one pixel 201, and does not necessarily reflect the actual structure.
  • the pixel 201 in this example includes a photodiode 203 that performs photoelectric conversion, a first floating diffusion layer (Floating Diffusion) 204 that is a charge storage unit, a second floating diffusion layer 205, a third floating diffusion layer 206, and It includes a fourth floating diffusion layer 207 and a drain 202 that drains signal charges.
  • Photons that have entered the pixel 201 due to one emission of a light pulse are converted into signal electrons, which are signal charges, by the photodiode 203.
  • the converted signal electrons are discharged to the drain 202 or distributed to the first floating diffusion layer 204 to the fourth floating diffusion layer 207 according to the control signal input from the control circuit 105.
• The distribution of the signal electrons to the first floating diffusion layer 204 to the fourth floating diffusion layer 207 and the discharging of the signal charges to the drain 202 are repeated in this order.
  • This repetitive operation is fast, and can be repeated tens of thousands to hundreds of millions of times within one frame of a moving image, for example.
  • the time for one frame is, for example, about 1/30 second.
  • the pixel 201 finally generates and outputs four image signals based on the signal charges accumulated in the first floating diffusion layer 204 to the fourth floating diffusion layer 207.
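The per-pulse distribution of signal electrons among the four floating diffusion layers and the drain can be sketched as time-gated routing of photoelectrons. The gate windows and photon arrival times below are hypothetical values chosen only to illustrate the mechanism:

```python
def distribute_charge(arrival_times_ns, gates):
    """Route photoelectrons to floating diffusions by arrival time.

    gates: list of (name, t_open_ns, t_close_ns) windows; anything
    outside all windows is discharged to the drain. Timings are
    hypothetical, not the patent's.
    """
    bins = {name: 0 for name, _, _ in gates}
    bins["drain"] = 0
    for t in arrival_times_ns:
        for name, t_open, t_close in gates:
            if t_open <= t < t_close:
                bins[name] += 1
                break
        else:
            bins["drain"] += 1  # outside every gate window
    return bins

gates = [("FD1", 0, 10), ("FD2", 10, 20), ("FD3", 50, 60), ("FD4", 60, 70)]
counts = distribute_charge([2, 5, 12, 55, 64, 90], gates)
print(counts)  # FD1: 2, FD2: 1, FD3: 1, FD4: 1, drain: 1
```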
  • the control circuit 105 in this example causes the light source 101 to repeatedly emit the first light pulse having the first wavelength and the second light pulse having the second wavelength in order.
• By using light of these two wavelengths, the state of the user 10 can be analyzed. For example, a wavelength longer than 805 nm may be selected as the first wavelength and a wavelength shorter than 805 nm may be selected as the second wavelength. This makes it possible to detect changes in the oxygenated hemoglobin concentration and the deoxygenated hemoglobin concentration in the blood of the user 10.
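As a sketch of why two wavelengths on either side of 805 nm (near an isosbestic point of hemoglobin) are useful, the snippet below solves the two-wavelength modified Beer-Lambert system for the oxygenated and deoxygenated hemoglobin concentration changes. The extinction coefficients are rough illustrative placeholders, not values from the specification:

```python
def hemoglobin_changes(d_od_short, d_od_long):
    """Solve dOD(l) = eps_HbO2(l)*dC_HbO2 + eps_Hb(l)*dC_Hb for the two
    wavelengths (short < 805 nm, long > 805 nm). The coefficients are
    illustrative, arbitrary-unit placeholders."""
    a, b = 0.6, 1.4   # (eps_HbO2, eps_Hb) at the shorter wavelength
    c, d = 1.1, 0.8   # (eps_HbO2, eps_Hb) at the longer wavelength
    det = a * d - b * c
    d_hbo2 = (d * d_od_short - b * d_od_long) / det
    d_hb = (a * d_od_long - c * d_od_short) / det
    return d_hbo2, d_hb

# Round trip: concentration changes (1.0, 2.0) produce optical-density
# changes that solve back to approximately (1.0, 2.0).
d_od_short = 0.6 * 1.0 + 1.4 * 2.0   # 3.4
d_od_long = 1.1 * 1.0 + 0.8 * 2.0    # 2.7
print(hemoglobin_changes(d_od_short, d_od_long))
```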
  • the control circuit 105 first causes the light source 101 to emit a first light pulse.
  • the control circuit 105 accumulates signal charges in the first floating diffusion layer 204 during the first period in which the surface reflection component A1 of the first light pulse is incident on the photodiode 203.
  • the control circuit 105 accumulates signal charges in the second floating diffusion layer 205 during the second period in which the internal scattering component A2 of the first light pulse is incident on the photodiode 203.
  • the control circuit 105 causes the light source 101 to emit the second light pulse.
  • the control circuit 105 accumulates signal charges in the third floating diffusion layer 206 during the third period in which the surface reflection component A1 of the second light pulse is incident on the photodiode 203.
  • the control circuit 105 accumulates signal charges in the fourth floating diffusion layer 207 during the fourth period in which the internal scattering component A2 of the second light pulse is incident on the photodiode 203.
• In this way, after starting the emission of the first light pulse, the control circuit 105 sequentially accumulates the signal charges from the photodiode 203 in the first floating diffusion layer 204 and the second floating diffusion layer 205 with a predetermined time difference.
• Similarly, after starting the emission of the second light pulse, the control circuit 105 sequentially accumulates the signal charges from the photodiode 203 in the third floating diffusion layer 206 and the fourth floating diffusion layer 207 with a predetermined time difference. The above operation is repeated a plurality of times.
• In order to estimate ambient light and background light components, a period during which signal charges are accumulated in another floating diffusion layer may be provided with the light source 101 turned off. A signal from which the ambient light and background light components have been removed can be obtained by subtracting the signal charge amount of that floating diffusion layer from the signal charge amounts of the first floating diffusion layer 204 to the fourth floating diffusion layer 207.
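The subtraction just described can be sketched in a few lines. Here `fd_dark` stands for the charge accumulated in the extra floating diffusion layer with the light source off over the same number of exposure periods (names and values are hypothetical):

```python
def remove_background(fd_signals, fd_dark):
    """Subtract the dark (ambient/background light) accumulation from
    each signal accumulation, clamping at zero charge."""
    return [max(s - fd_dark, 0) for s in fd_signals]

# FD1..FD4 accumulations minus the dark-frame accumulation:
print(remove_background([120, 80, 95, 60], 15))  # -> [105, 65, 80, 45]
```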
  • the number of charge storage units is four, but it may be designed to be two or more depending on the purpose. For example, when only one type of wavelength is used, the number of charge storage units may be two. Further, in the application in which only one type of wavelength is used and the surface reflection component A1 is not detected, the number of charge storage units for each pixel may be one. Further, even when two or more types of wavelengths are used, the number of charge storage units may be one if the imaging using each wavelength is performed in another frame. Further, as will be described later, if the detection of the surface reflection component A1 and the detection of the internal scattering component A2 are performed in different frames, the number of charge storage units may be one.
  • FIG. 1E is a diagram showing an example of the configuration of the image sensor 102.
  • a region surrounded by a two-dot chain line frame corresponds to one pixel 201.
  • the pixel 201 includes one photodiode.
  • FIG. 1E shows only four pixels arranged in two rows and two columns, a larger number of pixels may be actually arranged.
  • the pixel 201 includes a first floating diffusion layer 204 to a fourth floating diffusion layer 207.
  • the signals accumulated in the first floating diffusion layer 204 to the fourth floating diffusion layer 207 are handled as if they were signals of four pixels of a general CMOS image sensor, and are output from the image sensor 102.
  • Each pixel 201 has four signal detection circuits.
  • Each signal detection circuit includes a source follower transistor 309, a row selection transistor 308, and a reset transistor 310.
  • the reset transistor 310 corresponds to the drain 202 shown in FIG. 1D
  • the pulse input to the gate of the reset transistor 310 corresponds to the drain discharge pulse.
  • Each transistor is, for example, a field effect transistor formed on a semiconductor substrate, but is not limited to this.
  • one of the input terminal and the output terminal of the source follower transistor 309 is connected to one of the input terminal and the output terminal of the row selection transistor 308.
  • the one of the input terminal and the output terminal of the source follower transistor 309 is typically the source.
  • the one of the input terminal and the output terminal of the row selection transistor 308 is typically the drain.
• The gate, which is the control terminal of the source follower transistor 309, is connected to the photodiode 203.
  • the signal charge of holes or electrons generated by the photodiode 203 is stored in the floating diffusion layer which is a charge storage unit between the photodiode 203 and the source follower transistor 309.
  • the first floating diffusion layer 204 to the fourth floating diffusion layer 207 are connected to the photodiode 203.
  • a switch may be provided between the photodiode 203 and each of the first floating diffusion layer 204 to the fourth floating diffusion layer 207. This switch switches the conduction state between the photodiode 203 and each of the first floating diffusion layer 204 to the fourth floating diffusion layer 207 according to the signal accumulation pulse from the control circuit 105. This controls the start and stop of the accumulation of the signal charges from the first floating diffusion layer 204 to each of the fourth floating diffusion layers 207.
  • the electronic shutter in this embodiment has a mechanism for such exposure control.
  • the signal charge accumulated in the first floating diffusion layer 204 to the fourth floating diffusion layer 207 is read out by turning on the gate of the row selection transistor 308 by the row selection circuit 302. At this time, the current flowing from the source follower power source 305 to the source follower transistor 309 and the source follower load 306 is amplified according to the signal potentials of the first floating diffusion layer 204 to the fourth floating diffusion layer 207.
  • the analog signal based on this current read from the vertical signal line 304 is converted into digital signal data by the analog-digital (AD) conversion circuit 307 connected for each column. This digital signal data is read out for each column by the column selection circuit 303 and output from the image sensor 102.
• After reading one row, the row selection circuit 302 and the column selection circuit 303 read the next row, and in the same manner read the signal charge information of the floating diffusion layers of all the rows.
  • the control circuit 105 resets all floating diffusion layers by turning on the gate of the reset transistor 310 after reading all the signal charges. This completes the imaging of one frame. Similarly, by repeating high-speed image capturing of frames, the image sensor 102 completes capturing a series of frames.
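The row-by-row readout with per-column AD conversion described above can be sketched as two nested loops. The array contents and gain are illustrative stand-ins for floating-diffusion charges and the analog chain:

```python
def read_out_frame(pixel_fd_charges, gain=1.0):
    """Read accumulated floating-diffusion charges row by row and
    digitize each column value, mimicking the row selection circuit,
    column selection circuit, and AD conversion circuit."""
    frame = []
    for row in pixel_fd_charges:                    # row selection
        digitized_row = []
        for charge in row:                          # column selection
            digitized_row.append(round(charge * gain))  # AD conversion
        frame.append(digitized_row)
    return frame

print(read_out_frame([[10.2, 11.7], [9.9, 12.4]]))  # -> [[10, 12], [10, 12]]
```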
  • the image sensor 102 may be another type of image sensor.
  • the image sensor 102 may be, for example, a CCD type, a single photon counting type element, or an amplification type image sensor such as an EMCCD or ICCD.
  • the control circuit 105 adjusts the time difference between the emission timing of the light pulse of the light source 101 and the shutter timing of the image sensor 102.
  • the “emission timing” of the light source 101 is the timing at which the light pulse emitted from the light source 101 starts rising.
  • “Shutter timing” is the timing at which exposure is started.
  • the control circuit 105 may change the emission timing to adjust the time difference, or may change the shutter timing to adjust the time difference.
  • the control circuit 105 may be configured to remove the offset component from the signal detected by each pixel of the image sensor 102.
• The offset component is a signal component due to ambient light, such as sunlight or fluorescent light, or other disturbance light.
• The offset component due to ambient light or disturbance light can be estimated by detecting a signal with the image sensor 102 while the light source 101 is turned off and emits no light.
  • the control circuit 105 may be, for example, a combination of a processor and a memory, or an integrated circuit such as a microcontroller including a processor and a memory.
• The control circuit 105 adjusts the emission timing and the shutter timing, for example, by having the processor execute a program recorded in the memory.
  • the signal processing circuit 106 is a circuit that processes an image signal output from the image sensor 102.
  • the signal processing circuit 106 performs arithmetic processing such as image processing.
• The signal processing circuit 106 can be realized by, for example, a combination of a computer program and a digital signal processor (DSP), a programmable logic device (PLD) such as a field programmable gate array (FPGA), a central processing unit (CPU), or a graphics processing unit (GPU).
  • the control circuit 105 and the signal processing circuit 106 may be one integrated circuit or may be separate and independent circuits.
  • the signal processing circuit 106 may be a component of an external device such as a server provided in a remote place. In this case, an external device such as a server exchanges data with the light source 101, the image sensor 102, and the control circuit 105 by wireless communication or wired communication.
  • the signal processing circuit 106 can generate moving image data showing the temporal changes in the appearance of the face, the skin blood flow, and the cerebral blood flow based on the signal output from the image sensor 102.
  • the signal processing circuit 106 is not limited to such moving image data, and may generate other information.
• Biological information such as cerebral blood flow, blood pressure, blood oxygen saturation, or heart rate may be generated in synchronization with other devices.
  • the signal processing circuit 106 may estimate an offset component due to ambient light and remove the offset component.
  • the appearance of the face is, for example, the expression, the line of sight, or the state of the pupil.
  • the signal processing circuit 106 may cause the measuring unit 108 to estimate the facial expression, the line of sight, or the state of the pupil of the user 10 based on the change in the appearance of the face, and output information indicating the estimated state.
• The skin blood flow in the head is adjusted by changes in flow velocity due to the expansion and contraction of blood vessels in the skin. The expansion and contraction of blood vessels are controlled by the autonomic nerves, mainly the sympathetic nerves. The sympathetic nerves are activated during emotional responses, drowsiness, or stress responses and regulate facial blood flow. The activity of the autonomic nerves is governed by physical and mental conditions such as human emotions, stress, and physical condition. Therefore, the skin blood flow is considered to reflect the physical and mental condition of a human.
• The signal processing circuit 106 may cause the measuring unit 108 to estimate the physical and mental state of the user 10 based on the change in the skin blood flow, and output information indicating the estimated state.
  • the psychological state of the user 10 means, for example, mood, emotion, health, or temperature sensation. Mood may include moods such as pleasant or unpleasant. Emotions may include, for example, feelings of security, anxiety, sadness, or resentment.
  • the health condition can include, for example, a condition such as healthy or tired.
  • the temperature sensation may include, for example, a sensation of being hot, cold, or sultry.
• The psychological state may also include derived indexes indicating the degree of brain activity, such as skill level, proficiency level, and concentration level.
  • the signal processing circuit 106 may cause the measurement unit 108 to estimate a psychological state such as the degree of concentration of the user 10 based on the change in the cerebral blood flow, and output a signal indicating the estimated state.
  • the imaging device 100 may include an imaging optical system that forms a two-dimensional image of the user 10 on the light receiving surface of the image sensor 102.
  • the optical axis of the imaging optical system is substantially orthogonal to the light receiving surface of the image sensor 102.
  • the imaging optical system may include a zoom lens. When the position of the zoom lens changes, the magnification of the two-dimensional image of the user 10 changes and the resolution of the two-dimensional image on the image sensor 102 changes. Therefore, even if the distance to the user 10 is long, it is possible to enlarge a desired measurement region and observe it in detail.
• The imaging device 100 may include, between the user 10 and the image sensor 102, a bandpass filter that passes only light in the wavelength band emitted from the light source 101 or light in its vicinity. As a result, the influence of disturbance components such as ambient light can be reduced.
  • the bandpass filter can be constituted by, for example, a multilayer filter or an absorption filter.
• The bandwidth of the bandpass filter may be about 20 to 100 nm in consideration of the band shift caused by the temperature change of the light source 101 and by oblique incidence on the filter.
  • FIG. 2A is a diagram schematically illustrating an example of a timing chart when the image A is captured.
  • the control circuit 105 causes the light source 101 to emit the first light pulse toward the target portion of the user 10.
  • the control circuit 105 causes the image sensor 102 to open the shutter in a period including at least a part of a period from when the light intensity of the first reflected light pulse from the target portion starts to increase to when the light intensity starts to decrease.
  • a signal mainly containing the surface reflection component A1 can be acquired.
  • the signal indicates the appearance information of the face.
  • the period in which the shutter is open is referred to as “period A”.
• The time length of the period A is T_A.
• As shown in FIG. 2A, the period A may include the entire period of the first reflected light pulse. This is because the surface reflection component A1 is sufficiently larger than the internal scattering component A2, as described above.
  • the shutter timing may be set so that as many parts of the first reflected light pulse as possible can be acquired. As a result, the number of exposures can be reduced and the frame rate can be increased.
• The control circuit 105 repeats this exposure n_A times before switching the charge storage unit. n_A is an integer of 2 or more.
• That is, the control circuit 105 causes the light source 101 to emit the first light pulse toward the target portion of the user 10, and causes the image sensor 102 to execute, n_A times, the operation of accumulating the signal charge Q_A corresponding to the light quantity of the component of the period A of the first reflected light pulse in any one of the one or more charge accumulating portions of each pixel.
• In other words, the control circuit 105 causes the light source 101 to emit a plurality of light pulses toward the target portion of the user 10, and causes the image sensor 102 to detect the component of the period A in each of n_A reflected light pulses among the plurality of reflected light pulses resulting from the plurality of light pulses.
  • FIG. 2B is a diagram schematically illustrating an example of a timing chart when the image B is captured.
  • the control circuit 105 causes the light source 101 to emit the second light pulse toward the target portion of the user 10.
• The control circuit 105 causes the image sensor 102 to open the shutter in a period including at least a part of the rising period, which is the period from when the light intensity of the second reflected light pulse from the target portion starts to increase to when the increase ends.
• As a result, it is possible to obtain a signal of light including the component that has reached the skin, that is, the inside of the scalp, and has been emitted from the forehead surface.
• The signal indicates skin blood flow information.
  • the period in which the shutter is open is referred to as “period B”.
• The time length of the period B is T_B.
• The control circuit 105 repeats this exposure n_B times before switching the charge storage unit.
• That is, the control circuit 105 causes the light source 101 to emit the second light pulse toward the target portion of the user 10, and causes the image sensor 102 to execute, n_B times, the operation of accumulating the signal charge Q_B corresponding to the light quantity of the component of the period B of the second reflected light pulse in any one of the one or more charge accumulating portions of each pixel.
• In other words, the control circuit 105 causes the light source 101 to emit a plurality of light pulses toward the target portion of the user 10, and causes the image sensor 102 to detect the component of the period B in each of n_B reflected light pulses among the plurality of reflected light pulses resulting from the plurality of light pulses.
  • FIG. 2C is a diagram schematically illustrating an example of a timing chart when the image C is captured.
  • the control circuit 105 causes the light source 101 to emit the third light pulse toward the target portion of the user 10.
• The control circuit 105 causes the image sensor 102 to open the shutter in a period that starts after the beginning of the falling period, which is the period from when the light intensity of the third reflected light pulse from the target portion starts to decrease to when the decrease ends, and that includes a part of the falling period. This makes it possible to remove the surface reflection component A1 and obtain a signal of light including the internal scattering component A2 that has reached the brain and has been emitted from the forehead surface.
  • the signal indicates cerebral blood flow information.
  • the period in which the shutter is open is referred to as “period C”.
• The time length of the period C is T_C.
• The control circuit 105 repeats this exposure n_C times before switching the charge storage unit.
• n_C is an integer of 1 or more.
• That is, the control circuit 105 causes the light source 101 to emit the third light pulse toward the target portion of the user 10, and causes the image sensor 102 to execute, n_C times, the operation of accumulating the signal charge Q_C corresponding to the light quantity of the component of the period C of the third reflected light pulse in any one of the one or more charge accumulating portions of each pixel.
• In other words, the control circuit 105 causes the light source 101 to emit a plurality of light pulses toward the target portion of the user 10, and causes the image sensor 102 to detect the component of the period C in each of n_C reflected light pulses among the plurality of reflected light pulses resulting from the plurality of light pulses.
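The relationship between the three shutter windows (periods A, B, and C) and the reflected pulse can be sketched with a toy trapezoidal pulse whose slow tail stands in for the internally scattered light. All timings and shapes below are made-up illustrative values, not the patent's:

```python
def pulse_intensity(t, t0=0.0, rise=1.0, width=10.0, fall=4.0):
    """Toy reflected-pulse model (times in ns): linear rise, flat top,
    and a slow linear fall representing the scattered tail."""
    if t < t0 or t > t0 + rise + width + fall:
        return 0.0
    if t < t0 + rise:
        return (t - t0) / rise
    if t < t0 + rise + width:
        return 1.0
    return 1.0 - (t - (t0 + rise + width)) / fall

def integrate(t_open, t_close, dt=0.01):
    """Left Riemann sum of the pulse intensity over a shutter window."""
    n = round((t_close - t_open) / dt)
    return sum(pulse_intensity(t_open + i * dt) for i in range(n)) * dt

q_a = integrate(0.0, 15.0)   # period A: whole pulse (surface + internal)
q_b = integrate(0.0, 1.0)    # period B: rising part (shallow, skin, light)
q_c = integrate(12.0, 15.0)  # period C: after the fall starts (deep light)
print(q_a, q_b, q_c)
```

The numbers illustrate the text's point: the period-C charge is a small fraction of the full pulse, which is why n_C must be large.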
  • FIG. 2D is a diagram schematically illustrating an example of a timing chart when the image D is captured.
  • Image D represents background light.
  • the control circuit 105 causes the image sensor 102 to open the shutter during the period when the intensity of the reflected light pulse is zero. Thereby, it is possible to acquire a signal including ambient light and/or dark current noise.
  • the period in which the shutter is open is referred to as “period D”.
• The time length of the period D is T_D.
  • the control circuit 105 may open the shutter with the emission of the light source 101 stopped as shown in FIG. 2D.
• Alternatively, the control circuit 105 may open the shutter while the light source 101 is emitting light, during a period in which the reflected light pulse does not reach the image sensor 102.
• The control circuit 105 repeats this exposure n_D times before switching the charge storage unit.
• n_D is an integer of 2 or more.
• That is, the control circuit 105 causes the image sensor 102 to execute, n_D times, the operation of accumulating, in any one of the one or more charge accumulating portions of each pixel, the signal charge Q_D corresponding to the amount of background light in a period D different from the period A, the period B, and the period C.
• In other words, the control circuit 105 causes the image sensor 102 to execute the operation of detecting the background light in the period D n_D times during the non-irradiation periods of the plurality of light pulses.
• The control circuit 105 causes the image sensor 102 to output a signal S_A indicating the sum of the accumulated signal charges Q_A, a signal S_B indicating the sum of the accumulated signal charges Q_B, a signal S_C indicating the sum of the accumulated signal charges Q_C, and a signal S_D indicating the sum of the accumulated signal charges Q_D.
• The signal S_A is a signal obtained by detecting n_A reflected light pulses.
• The signal S_B is a signal obtained by detecting n_B reflected light pulses.
• The signal S_C can also be said to be a signal obtained by detecting n_C reflected light pulses.
• The signal S_D is a signal obtained by detecting the background light n_D times in the period D.
• The magnitude relationship between the time length T_A and the time length T_D can be set arbitrarily.
  • the period A includes the entire period of the reflected light pulse.
  • the period B and the period C each include a partial period of the reflected light pulse.
• The time length T_D of the period D may match the time length of at least one of the period A, the period B, and the period C. Therefore, the time length T_A may be longer than or the same as each of the time length T_B to the time length T_D.
  • FIG. 2E is a flowchart showing an outline of the operation of the control circuit 105 regarding the light source 101 and the image sensor 102.
  • the control circuit 105 generally performs the operations shown in FIGS. 2A to 2D.
• In step S101, the control circuit 105 first causes the light source 101 to emit a light pulse, as shown in FIGS. 2A to 2C. At this time, the electronic shutter of the image sensor 102 is in a state where exposure is stopped; the control circuit 105 causes the electronic shutter to stop the exposure. Note that step S101 is omitted in the example shown in FIG. 2D.
• In step S102, the control circuit 105 causes the electronic shutter to start exposure at a predetermined timing, as shown in FIGS. 2A to 2D. After the elapse of a predetermined period, in step S103, the control circuit 105 causes the electronic shutter to stop the exposure, as shown in FIGS. 2A to 2D.
• In step S104, the control circuit 105 determines whether or not the number of times the above signal accumulation has been executed has reached a predetermined number. When the determination in step S104 is No, steps S101 to S103 are repeated until the determination becomes Yes. When the determination in step S104 is Yes, in step S105 the control circuit 105 causes the image sensor 102 to generate and output a signal indicating an image based on the signal charges accumulated in each floating diffusion layer.
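Steps S101 to S105 can be sketched as a simple control loop. The callback names are hypothetical stand-ins for the hardware operations of the light source and electronic shutter:

```python
def capture_image(emit_pulse, open_shutter, close_shutter, read_out, n_required):
    """Sketch of the S101-S105 loop: emit, expose, and repeat until the
    required number of accumulations is reached, then read the image."""
    count = 0
    while count < n_required:   # S104: repeat until the count is reached
        emit_pulse()            # S101 (skipped for the dark image D)
        open_shutter()          # S102
        close_shutter()         # S103
        count += 1
    return read_out()           # S105

events = []
img = capture_image(lambda: events.append("emit"),
                    lambda: events.append("open"),
                    lambda: events.append("close"),
                    lambda: "image", n_required=2)
print(img, events)
```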
  • the inventors examined the imaging operation of each of the images A to D described above and newly focused on the following two points.
• The first point is the regularity of the image acquired by capturing the background light. That is, when acquiring internal information of the object according to depth, the time difference between the emission timing of the illumination light pulse and the shutter timing differs for each image. However, the brightness of each pixel in the background image that is subtracted from each image depends almost entirely on the exposure time and not on the shutter timing.
• The second point is that the noise immunity of the images used for estimating the state of the user is not necessarily equal. That is, the image for acquiring the appearance information of the face is more resistant to noise, that is, more robust to noise, than the image for acquiring the cerebral blood flow information.
• FIG. 3 is a diagram showing an example of a timing chart when the image A to the image D are acquired in one frame, and an example of a calculation process for acquiring the image E, the image F, and the image G from the image A to the image D in the exemplary embodiment.
  • the image E is an image for acquiring the appearance information of the face.
  • the image F is an image for acquiring skin blood flow information.
  • the image G is an image for acquiring cerebral blood flow information.
• The operation of the signal processing circuit 106 for acquiring the image A to the image D is as follows.
• The signal processing circuit 106 causes the image generator 107 to generate image data A to image data D based on the signals S_A to S_D, respectively.
• The image data A to the image data D indicate the image A to the image D, respectively.
  • the number of times of accumulation that is, the number of times of exposure when acquiring the images A, B, and C is set as follows. n A ⁇ n B ⁇ n C
  • the total exposure time length T A ⁇ n A of the image A is as follows. Is set as follows. T A ⁇ n A ⁇ T B ⁇ n B ⁇ T C ⁇ n C
  • T A × n A is shorter than T B × n B.
  • T B × n B is shorter than T C × n C.
  • T C × n C is longer than T A × n A.
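The accumulation-count and total-exposure-time orderings above are simple inequalities, so they can be expressed as a small validation helper. The sketch below is illustrative only; the function name and the numeric values are our assumptions, not part of the original disclosure.

```python
def exposure_constraints_ok(n_a, n_b, n_c, t_a, t_b, t_c):
    """Check the orderings described above:
    n_A < n_B < n_C and T_A*n_A < T_B*n_B < T_C*n_C."""
    counts_ordered = n_a < n_b < n_c
    total_exposure_ordered = t_a * n_a < t_b * n_b < t_c * n_c
    return counts_ordered and total_exposure_ordered

# Illustrative values: equal 10 ns shutter widths, more accumulations for image C.
print(exposure_constraints_ok(10, 100, 1000, 10e-9, 10e-9, 10e-9))  # True
```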
  • in image C, the amount of light obtained with the same exposure time is smaller than in images A and B. In other words, the S/N ratio of image C is low. For this reason, image C is the most susceptible to errors caused by ambient light and/or dark current noise.
  • two values being “substantially equal” means that one of the two values is 0.8 times or more and 1.2 times or less of the other.
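The 0.8-to-1.2 criterion for "substantially equal" stated above can be written down directly. A minimal Python sketch (the function name is ours):

```python
def substantially_equal(x, y):
    """True when one value is 0.8 times or more and 1.2 times or less
    of the other, the definition of "substantially equal" given above."""
    return (0.8 * y <= x <= 1.2 * y) or (0.8 * x <= y <= 1.2 * x)

print(substantially_equal(1.0, 1.1))  # True
print(substantially_equal(1.0, 1.5))  # False
```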
  • the influence of the ambient light and/or the dark current noise can be removed simply by subtracting the image D from the image C.
  • the image G can be acquired most accurately.
  • the background light corresponding to the images A and B is not imaged. Instead, the background images corresponding to the images A and B are estimated by calculation from the image D using the formulas described below. Thereby, in the images E and F, the influence of ambient light and/or dark current noise is removed.
  • one frame shown in FIG. 3 is repeated. That is, one frame shown in FIG. 3 is one cycle for changing the detection timing of the image sensor 102 for a plurality of reflected pulsed lights.
  • the number of times of accumulation when acquiring the image A, the image B, the image C, and the background image D in the one cycle may be set as follows. n A +n B +n C >n D
  • the number of times of storage may be set as follows. n A +n C >n D or n B +n C >n D
  • the operation of the signal processing circuit 106 for acquiring the image E, the image F, and the image G is as follows.
  • the signal processing circuit 106 causes the image generation unit 107 to generate image data E, indicating an image E in which the influence of background light is reduced compared to the image A, based on the signal S E obtained by computing the signal S A and the signal S D.
  • the signal processing circuit 106 causes the image generation unit 107 to generate image data F, indicating an image F in which the influence of background light is reduced compared to the image B, based on the signal S F obtained by computing the signal S B and the signal S D.
  • the signal processing circuit 106 causes the image generation unit 107 to generate image data G, indicating an image G in which the influence of background light is reduced compared to the image C, based on the signal S G obtained by computing the signal S C and the signal S D.
  • the intensity values of the signals S A to S G are I A to I G , respectively, and a 1 , a 2 , and b 1 are constants.
  • the intensity values I A to I G correspond to the luminance values in the images A to G, respectively.
  • a signal intensity value and an image luminance value may be used interchangeably.
  • the signal processing circuit 106 causes the image generation unit 107 to generate the image data E indicating the appearance information of the face by the following calculation.
  • I E = I A − (I D × a 1 + b 1 )
  • the signal processing circuit 106 causes the image generation unit 107 to generate image data F indicating skin blood flow information by the following calculation.
  • I F = I B − (I D × a 2 + b 1 )
  • the signal processing circuit 106 causes the image generation unit 107 to generate image data G indicating cerebral blood flow information by the following calculation.
  • I G = I C − I D
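The three calculations above are per-pixel linear operations, so they can be applied to whole images at once. A minimal NumPy sketch, assuming the images are float arrays of equal shape and that the constants a1, a2, b1 have already been determined (the shapes and values below are placeholders, not from the patent):

```python
import numpy as np

def remove_background(i_a, i_b, i_c, i_d, a1, a2, b1):
    """Per-pixel background removal as in the calculations above:
    I_E = I_A - (I_D * a1 + b1), I_F = I_B - (I_D * a2 + b1), I_G = I_C - I_D."""
    i_e = i_a - (i_d * a1 + b1)
    i_f = i_b - (i_d * a2 + b1)
    i_g = i_c - i_d
    return i_e, i_f, i_g

# Placeholder 2x2 "images" with flat luminance, for illustration only.
i_a = np.full((2, 2), 100.0)
i_b = np.full((2, 2), 80.0)
i_c = np.full((2, 2), 20.0)
i_d = np.full((2, 2), 10.0)
i_e, i_f, i_g = remove_background(i_a, i_b, i_c, i_d, a1=0.5, a2=0.5, b1=1.0)
print(float(i_g[0, 0]))  # 10.0
```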
  • image E and image F are acquired using the background image estimated by calculation. Therefore, compared with the image G, the error in removing the influence of ambient light and/or dark current noise is larger. However, in image A and image B, a large amount of light is obtained with the same exposure time; in other words, the S/N ratios of image A and image B are high. Therefore, the influence of the error is small enough to be ignored in obtaining the appearance information of the face and the skin blood flow information.
  • the appearance information of the face, the skin blood flow information, and the cerebral blood flow information can be accurately acquired without capturing the background light corresponding to the images A and B.
  • the “period A” corresponds to the above-mentioned “first period”
  • the “period B” corresponds to the above-mentioned “fourth period”
  • the “period C” corresponds to the above-mentioned “second period”.
  • the “period D” corresponds to the above-mentioned “third period”.
  • n A corresponds to the above-mentioned “n 1 ”
  • n B corresponds to the above-mentioned “n 4 ”
  • n C corresponds to the above-mentioned “n 2 ”
  • n D corresponds to the above-mentioned “n 3 ”.
  • the “signal S A ” corresponds to the above-mentioned “first signal”
  • the “signal S B ” corresponds to the above-mentioned “sixth signal”
  • the “signal S C ” corresponds to the above-mentioned “second signal”.
  • the “signal S D ” corresponds to the “second signal”
  • the “signal S D ” corresponds to the aforementioned “third signal”
  • the “signal S E ” corresponds to the aforementioned “fourth signal”
  • “signal S G ” corresponds to the above “fifth signal”.
  • the image data E corresponds to the above-mentioned “first image data”
  • the image data F corresponds to the above-mentioned “third image data”
  • the image data G corresponds to the above-mentioned “second image data”.
  • the image D, which is a common background image, is used to acquire the appearance information of the face, the skin blood flow information, and the cerebral blood flow information. Therefore, the time required to capture one frame can be reduced. That is, it is possible to improve the measurement rates of facial appearance information, skin blood flow information, and cerebral blood flow information. Further, the calculation cost associated with the control of the image sensor 102 can be reduced.
  • the signal charges corresponding to the captured image are accumulated in the four charge accumulating units in each pixel.
  • the image pickup apparatus 100 according to the present embodiment has a configuration in which the number of charge storage units in each pixel is relatively small, yet a large amount of information necessary for the calculation can be obtained.
  • the imaging device 100 can improve the measurement rates of facial appearance information, skin blood flow information, and cerebral blood flow information. Further, the calculation cost associated with the control of the image sensor can be reduced. Moreover, the circuit configuration of the image sensor can be simplified.
  • the control circuit 105 causes the image sensor 102, in each pixel including the charge accumulation units ACC A to ACC D, to repeat the operation of accumulating the signal charge Q A in the charge accumulation unit ACC A n A times, to repeat the operation of accumulating the signal charge Q B in the charge accumulation unit ACC B n B times, to repeat the operation of accumulating the signal charge Q C in the charge accumulation unit ACC C n C times, and to repeat the operation of accumulating the signal charge Q D in the charge accumulation unit ACC D n D times.
  • the images A and B are likely to have higher brightness than the images C and D.
  • a part of the signal charge Q A and/or a part of the signal charge Q B may leak to the charge storage unit ACC C and/or the charge storage unit ACC D.
  • the leaked part of the signal charge Q A and/or the signal charge Q B can become a noise component in the image C and/or the image D.
  • the noise component in the image C and/or the image D can be suppressed as follows.
  • FIGS. 4 to 6 are diagrams showing other examples of timing charts in the case of acquiring the images A to D in the exemplary embodiment.
  • the image A and the image B may be captured in a frame different from that of the image C and the image D.
  • a step (not shown) for discharging the signal charges accumulated in the charge accumulation units and generating an image signal is inserted. Therefore, part of the signal charge Q A and/or part of the signal charge Q B does not leak into the charge accumulation units in which the signal charge Q C and/or the signal charge Q D is to be accumulated. As a result, the S/N ratios of the images C and D can be increased, and the cerebral blood flow information can be acquired more accurately.
  • in the example shown in FIG. 4, the sum of the total exposure time T A × n A of the image A and the total exposure time T B × n B of the image B is shorter than the sum of the total exposure time T C × n C of the image C and the total exposure time T D × n D of the image D.
  • the two frames shown are one cycle in which the detection timing of the image sensor 102 for a plurality of reflected pulsed lights is changed.
  • the number of times of accumulation when acquiring the image A, the image B, the image C, and the background image D in the one cycle may be set as follows. n A +n B +n C >n D
  • the number of times of storage may be set as follows. n A +n C >n D or n B +n C >n D
  • the image A, the image B, the image C, and the image D may be captured in different frames.
  • the total exposure time length T A × n A of the image A is shorter than the sum of the total exposure time length T C × n C of the image C and the total exposure time length T D × n D of the image D.
  • the total exposure time length T B × n B of the image B is shorter than the sum of the total exposure time length T C × n C of the image C and the total exposure time length T D × n D of the image D.
  • the three frames shown are one cycle in which the detection timing of the image sensor 102 for a plurality of reflected pulse lights is changed.
  • the number of times of accumulation when acquiring the image A, the image B, the image C, and the background image D in the one cycle may be set as follows. n A +n B +n C >n D
  • the control circuit 105 causes the image sensor 102 to accumulate the signal charge Q A n A times and/or the signal charge Q B n B times, and to output the signal S A and/or the signal S B. Thereafter, the control circuit 105 causes the image sensor 102 to accumulate the signal charge Q C n C times and/or the signal charge Q D n D times, and to output the signal S C and/or the signal S D.
  • the frame rate for capturing the image A may be set higher than the frame rate for capturing the images B, C, and D.
  • the image A for acquiring the appearance information of the face may be captured multiple times within the time of one frame for capturing the image B, the image C, and the image D. This makes it possible to more accurately obtain, for example, changes in the line of sight and/or facial expression from the plurality of images A.
  • the plurality of frames illustrated is one cycle in which the detection timing of the image sensor 102 with respect to the plurality of reflected pulsed lights is changed.
  • the number of times of accumulation when acquiring the image A, the image B, the image C, and the background image D in the one cycle may be set as follows, where the number of frames of the image A in one cycle is i (i is an integer of 2 or more).
  • the control circuit 105 can cause the image sensor 102 to accumulate the signal charges Q A to Q D in the respective four charge accumulation units in each pixel including four charge accumulation units, and to output the signals S A to S D.
  • the control circuit 105 can cause the image sensor 102 to accumulate the signal charge Q A and the signal charge Q B in the respective two charge accumulation units in each pixel including two charge accumulation units and to output the signal S A and the signal S B, and then to accumulate the signal charge Q C and the signal charge Q D in the respective two charge accumulation units and to output the signal S C and the signal S D.
  • the control circuit 105 can cause the image sensor 102 to accumulate the signal charge Q A in one of the two charge accumulation units in each pixel including two charge accumulation units and to output the signal S A, then to accumulate the signal charge Q B in either of the two charge accumulation units and to output the signal S B, and then to accumulate the signal charge Q C and the signal charge Q D in the respective two charge accumulation units and to output the signals S C and S D.
  • the control circuit 105 can cause the image sensor 102 to repeat, multiple times, the operation of accumulating the signal charge Q A in any one of the three charge accumulation units in each pixel including three charge accumulation units and outputting the signal S A. Thereafter, the control circuit 105 can cause the image sensor 102, in each pixel, to accumulate the signal charge Q B, the signal charge Q C, and the signal charge Q D in the respective three charge accumulation units, and to output the signal S B, the signal S C, and the signal S D.
  • the control circuit 105 can cause the image sensor 102, in each pixel having one charge accumulation unit, to accumulate the signal charge Q A in the charge accumulation unit and to output the signal S A. Similarly, the control circuit 105 can execute the accumulation of the signal charge Q B and the output of the signal S B, the accumulation of the signal charge Q C and the output of the signal S C, and the accumulation of the signal charge Q D and the output of the signal S D, respectively.
  • the image E is acquired from the images A and D
  • the image F is acquired from the images B and D.
  • FIG. 7 is a diagram illustrating an example of a calculation process for acquiring the image E, the image F, and the image G in the exemplary embodiment.
  • the image E may be the same image as the image A, and the image F may be the same image as the image B. That is, the background image is not used for the calculation.
  • I E = I A − (I D × a 1 + b 1 )
  • I F = I B − (I D × a 2 + b 1 )
  • the brightness tends to be high in the images A and B. That is, the S/N ratios of the image A and the image B are high. This makes it possible to ignore the influence of ambient light and/or dark current noise in obtaining the appearance information of the face and the skin blood flow information.
  • FIG. 8 is a diagram showing an example of a timing chart when acquiring the images A, C, and D and an example of a calculation process for acquiring the images E and G in the exemplary embodiment.
  • the image B may not be captured, but the images A, C, and D may be captured within one frame, and only the images E and G may be acquired.
  • FIG. 9 is a diagram showing an example of a timing chart when acquiring the images B, C, and D and an example of a calculation process for acquiring the images F and G in the exemplary embodiment.
  • the image A may not be captured, but the images B, C, and D may be captured within one frame, and only the images F and G may be acquired.
  • the imaging device 100 shown in FIG. 1A in the above-described embodiment was installed so as to face the measurement target.
  • in place of the user 10, a material that simulates the scattering characteristics of a living body was used. This material is referred to as a "living body phantom".
  • the material of the living body phantom was polyacetal resin, and the size was 10 cm ⁇ 10 cm ⁇ thickness 5 cm.
  • the control circuit 105 causes the image sensor 102 to repeatedly open the shutter with the light source 101 turned off.
  • the width of the shutter pulse was 10 ns.
  • the wavelength of the light source 101 was 750 nm.
  • Image D which is a background image, was obtained by changing the number of exposures n D as a parameter.
  • the image resolution was 320 pixels x 240 pixels.
  • the average luminance value was calculated by performing the spatial averaging process using the range of 50 pixels ⁇ 50 pixels at the center of the image as the target area.
  • the dynamic range of the brightness of the image sensor 102 is 12 bits.
  • FIG. 10 is a diagram showing a measurement result of the relationship between the number of exposures and the average luminance value of the central portion of the background image. As shown in FIG. 10, the linearity is sufficiently maintained in the range of 10 to 4000 exposures. Thereby, the background images corresponding to the image A and the image B can be estimated from an image obtained by performing a linear operation on the image D.
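Because the luminance is linear in the number of exposures over this range, a background image measured at one accumulation count can be rescaled to another count around the zero-exposure offset. A hedged sketch with synthetic numbers (the slope, offset, and counts are invented for illustration and are not the measured data of FIG. 10):

```python
import numpy as np

# Synthetic linear measurement: luminance = slope * n_exposures + offset.
n_measured = np.array([10.0, 100.0, 1000.0, 4000.0])
lum_measured = 0.25 * n_measured + 2.0  # hypothetical slope and dark offset

# Fit a line to the (exposures, luminance) pairs, as the linearity allows.
slope, offset = np.polyfit(n_measured, lum_measured, 1)

def estimate_background(lum_at_n, n_from, n_to):
    """Estimate background luminance at n_to exposures from a measurement
    taken at n_from exposures, scaling around the zero-exposure offset."""
    return (lum_at_n - offset) * (n_to / n_from) + offset

est = estimate_background(lum_measured[2], n_from=1000.0, n_to=100.0)
print(round(float(est), 3))  # 27.0 on this synthetic data
```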
  • when the image D is captured as a background image according to the embodiment shown in FIG. 3, the distribution within the image at zero exposures may be negligibly small or sufficiently random.
  • the brightness value I E of the image E can be estimated by the following calculation.
  • n A /n C < 1 in general, so {1 − (n A /n C )} > 0.
  • the brightness value I F of the image F can be estimated by the following calculation.
  • n B /n C < {1 − (n A /n C )}.
  • FIG. 11 is a diagram showing a time chart in the second embodiment.
  • the imaging device 100 shown in FIG. 1A was installed so as to face the living body phantom.
  • the control circuit 105 caused the image sensor 102 to accumulate the signal charge Q B in one of two adjacent charge accumulation units in each pixel during the period B, and to accumulate the signal charge Q C in the other charge accumulation unit during the period C. As a result, the image B and the image C were acquired.
  • the time length T B of the period B was changed stepwise, and the average luminance value of the image B and the average luminance value of the image C were measured.
  • the total exposure time of image B is T B ⁇ n B
  • the total exposure time of image C is T C ⁇ n C.
  • the spatial averaging process is performed with the range of 50 pixels ⁇ 50 pixels in the central portion of the image as the target area, as in the first embodiment.
  • FIG. 12 is a diagram showing a result of measuring the relationship between the average luminance value of the image B and the average luminance value of the image C.
  • the time length T C of the period C is fixed. Nevertheless, it can be seen that as the average brightness value of the image B increases, the average brightness value of the image C also increases slightly. This is considered to be because, when the image B and the image C are captured in the same frame, part of the signal charge Q B accumulated during the period B in one charge accumulation unit leaked into the adjacent charge accumulation unit in which the signal charge Q C accumulated during the period C is stored.
  • the S/N is low when the images C and D are picked up because the amount of light is small. Therefore, even a slight amount of mixed signal charge may become a noise component that cannot be ignored.
  • the signal charges tend to leak to other charge storage portions. Therefore, by capturing the image C and/or the image D in a frame different from that of the image A and/or the image B, it is possible to prevent the signal charge from being mixed into the image C and/or the image D. Thereby, the image C or the image D having a small noise component can be acquired. As a result, more accurate cerebral blood flow information can be acquired.
  • FIG. 13A is a diagram schematically showing an example in which the image pickup apparatus 100 according to the exemplary embodiment is applied to a head mounted display 400A.
  • FIG. 13B is a diagram schematically illustrating an example in which the imaging device 100 according to the exemplary embodiment is applied to the smartphone 400B.
  • with the imaging device 100, it is possible to know what the user 10 is looking at on the display of the head mounted display 400A or the smartphone 400B from changes in the line of sight, pupil diameter, blinking, or facial expression of the user 10.
  • the imaging device 100 according to the present embodiment can also estimate whether the user 10 feels stress based on a change in facial skin blood flow, and/or whether the user 10 is concentrating based on a change in cerebral blood flow. As a result, it is possible to estimate what the user 10 is looking at when feeling stressed, or what the user 10 is looking at when concentrating.
  • Information indicating the estimated state of the user 10 is displayed on the display. If the user 10 is feeling stressed, advice for relieving stress may be displayed on the display. If the user 10 is not concentrated, advice that encourages concentration may be displayed on the display.
  • the operation of the control circuit 105 and the signal processing circuit 106 will be described below.
  • FIG. 14 is a flowchart of the operation of the control circuit 105 and the signal processing circuit 106 in the example shown in FIGS. 13A and 13B.
  • in step S201, the control circuit 105 causes the image sensor 102 to output the signals S A to S D for each frame.
  • the signal processing circuit 106 causes the image generation unit 107 to generate image data E, image data F, and image data G for each frame.
  • in steps S201 and S202, a series of operations of outputting the signals S A to S D and then generating the image data E, the image data F, and the image data G may be executed multiple times for each frame. Alternatively, the image data E, the image data F, and the image data G in each frame may be generated after the operation of outputting the signals S A to S D is performed multiple times for each frame.
  • in step S203, the signal processing circuit 106 causes the measuring unit 108 to detect a change in the appearance of the face of the user 10 based on the change in the image data E generated for each frame.
  • in step S204, the signal processing circuit 106 causes the measuring unit 108 to detect a change in the skin blood flow of the user 10 based on the change in the image data F generated for each frame.
  • in step S205, the signal processing circuit 106 causes the measurement unit 108 to detect a change in the cerebral blood flow of the user 10 based on the change in the image data G generated for each frame.
  • to detect a change in the appearance of the face of the user 10, for example, it may be checked whether or not the difference between the intensity values I E of the image data E between frames exceeds a reference value.
  • a change in the skin blood flow of the user 10 and a change in the cerebral blood flow of the user 10 may be detected by the same method.
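Steps S203 to S205 all reduce to comparing a frame-to-frame difference against a reference value. A minimal sketch of that check follows; the reference value, the mean-absolute-difference metric, and the sample data are illustrative assumptions, not specified by the patent.

```python
def change_detected(prev_frame, curr_frame, reference=5.0):
    """Return True when the mean absolute per-pixel difference between
    two frames exceeds the reference value, as in steps S203 to S205."""
    diffs = [abs(c - p) for p, c in zip(prev_frame, curr_frame)]
    return sum(diffs) / len(diffs) > reference

# Flattened luminance values of image data E for two frames (illustrative).
frame1 = [100.0, 102.0, 98.0]
frame2 = [110.0, 112.0, 108.0]
print(change_detected(frame1, frame2))  # True: mean difference is 10 > 5
```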
  • in step S206, the signal processing circuit 106 causes the measuring unit 108 to estimate the state of the user 10 based on the change in the appearance of the face of the user 10, the change in the skin blood flow, and the change in the cerebral blood flow.
  • in step S207, the signal processing circuit 106 outputs information indicating the estimated state of the user 10. The information is displayed, for example, on the display of the head mounted display 400A shown in FIG. 13A or the smartphone 400B shown in FIG. 13B.
  • if it is not necessary to detect a change in the appearance of the face of the user 10, part of the operation from step S201 to step S207 is modified as follows.
  • in step S201, the control circuit 105 causes the image sensor 102 to output the signals S B to S D for each frame.
  • the signal processing circuit 106 causes the image generation unit 107 to generate the image data F and the image data G for each frame.
  • the operation of step S203 is omitted.
  • in step S206, the signal processing circuit 106 causes the measuring unit 108 to estimate the state of the user 10 based on the change in the skin blood flow and the change in the cerebral blood flow.
  • the operations of step S204, step S205, and step S207 are the same.
  • when it is not necessary to detect a change in the appearance of the face of the user 10 and a change in the skin blood flow, part of the operation from step S201 to step S207 is modified as follows.
  • in step S201, the control circuit 105 causes the image sensor 102 to output the signal S C and the signal S D for each frame.
  • the signal processing circuit 106 causes the image generation unit 107 to generate the image data G for each frame.
  • the operations of steps S203 and S204 are omitted.
  • in step S206, the signal processing circuit 106 causes the measuring unit 108 to estimate the state of the user 10 based on the change in cerebral blood flow.
  • the operations of step S205 and step S207 are the same.
  • the number of times of accumulation of the background image is adjusted to the number of times of accumulation of the image having the lowest S/N.
  • however, the number of times the background image is accumulated is not limited to this. FIGS. 15 and 16 show other examples.
  • the number of accumulations n D when acquiring the image D is set as follows: n A < n C , and n D < n C
  • the number of accumulations n D when acquiring the image D is set as follows: n B < n C , and n D < n C
  • thereby, the facial appearance information, the skin blood flow information, and the cerebral blood flow information can be acquired at a higher frame rate.
  • such a method can be used because, as shown in FIG. 10, there is sufficient linearity between the number of exposures, that is, the number of times the background image is accumulated, and the average luminance of the background image. A background image with a small number of accumulations can be estimated from a background image with a large number of accumulations, and conversely, a background image with a large number of accumulations can be estimated from a background image with a small number of accumulations.
  • the imaging device according to the present disclosure can be applied to a camera or a measurement device that acquires internal information of a user without contact.
  • the imaging device according to the present disclosure can be applied to, for example, biometric or medical sensing, sensing of car drivers, sensing of users of game consoles or amusement devices, sensing of learners at educational institutions, and sensing of workers at workplaces.


Abstract

The present invention relates to an imaging device comprising: a light source that emits a plurality of light pulses to be radiated toward a target portion; an image sensor that includes a plurality of pixels and detects a plurality of reflected light pulses that originate from the plurality of light pulses and return from the target portion; and a control circuit that controls the light source and the image sensor. The control circuit: causes the light source to emit the plurality of light pulses toward the target portion; causes the image sensor to detect a component of a first period in each of n1 reflected light pulses among the plurality of reflected light pulses; causes the image sensor to detect a component of a second period in each of n2 reflected light pulses among the plurality of reflected light pulses; and causes the image sensor to execute a light detection operation n3 times during a period in which the plurality of light pulses are not radiated onto the target portion, where n1 < n2 and n1 < n3.
PCT/JP2019/045582 2018-12-27 2019-11-21 Dispositif d'imagerie WO2020137276A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018245038A JP2022037256A (ja) 2018-12-27 2018-12-27 撮像装置
JP2018-245038 2018-12-27

Publications (1)

Publication Number Publication Date
WO2020137276A1 true WO2020137276A1 (fr) 2020-07-02

Family

ID=71127937

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/045582 WO2020137276A1 (fr) 2018-12-27 2019-11-21 Dispositif d'imagerie

Country Status (2)

Country Link
JP (1) JP2022037256A (fr)
WO (1) WO2020137276A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11882366B2 (en) 2021-02-26 2024-01-23 Hill-Rom Services, Inc. Patient monitoring system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011023872A (ja) * 2009-07-14 2011-02-03 Honda Motor Co Ltd 撮像装置、露光量調節方法
US20110230738A1 (en) * 1999-08-26 2011-09-22 Britton Chance Optical examination of biological tissue using non-contact irradiation and detection
JP2017229057A (ja) * 2016-06-17 2017-12-28 パナソニックIpマネジメント株式会社 撮像装置
JP2018096988A (ja) * 2016-12-15 2018-06-21 パナソニックIpマネジメント株式会社 撮像装置
JP2018196722A (ja) * 2017-05-23 2018-12-13 パナソニックIpマネジメント株式会社 計測装置

Also Published As

Publication number Publication date
JP2022037256A (ja) 2022-03-09


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19901936; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19901936; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)