WO2019241982A1 - Device and method for acquiring biological information - Google Patents

Device and method for acquiring biological information Download PDF

Info

Publication number
WO2019241982A1
WO2019241982A1 (application PCT/CN2018/092313)
Authority
WO
WIPO (PCT)
Prior art keywords
artery
image
calculating
distance
indication information
Prior art date
Application number
PCT/CN2018/092313
Other languages
French (fr)
Inventor
Sato Shohei
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. filed Critical Huawei Technologies Co., Ltd.
Priority to PCT/CN2018/092313 priority Critical patent/WO2019241982A1/en
Priority to JP2020571471A priority patent/JP2021528169A/en
Priority to CN201880094660.7A priority patent/CN112292072A/en
Publication of WO2019241982A1 publication Critical patent/WO2019241982A1/en

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/026 Measuring blood flow
    • A61B5/0285 Measuring or recording phase velocity of blood waves
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/026 Measuring blood flow
    • A61B5/0261 Measuring blood flow using optical means, e.g. infrared light
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/4887 Locating particular structures in or on the body
    • A61B5/489 Blood vessels
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08 Feature extraction
    • G06F2218/10 Feature extraction by analysing the shape of a waveform, e.g. extracting parameters relating to peaks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12 Classification; Matching

Definitions

  • the present disclosure relates to a device and method for acquiring a physiological parameter about the cardiovascular system by imaging photoplethysmography (iPPG).
  • iPPG imaging photoplethysmography
  • a device is known that acquires an image of a subject by imaging the subject, and calculates a pulse wave velocity from this image.
  • a publicly known device that images the face of a subject, identifies two different target regions in the facial image, and calculates a pulse wave velocity from the amount of shifting of pulse waves in both regions.
  • a first aspect of an embodiment provides the following method.
  • a method for acquiring biological information including:
  • the first aspect of the embodiment further includes:
  • step of calculating the indication information calculates the indication information by using the calculated distance between the artery parts and the amount of shifting.
  • the distance to an artery part is acquired to calculate indication information.
  • a second aspect of the embodiment provides the following device.
  • a device for acquiring biological information including:
  • a first acquisition unit for acquiring a first image of an artery part of a subject imaged in a contactless manner
  • a detection unit for detecting individual pulse waves corresponding to a plurality of positions in the artery part based on the first image
  • a shift amount calculating unit for calculating an amount of shifting between the detected individual pulse waves
  • an indication information calculating unit for calculating indication information on an artery flow of the subject, as the biological information, by using the amount of shifting.
  • the second aspect of the embodiment further includes:
  • a second acquisition unit for acquiring a second image including depth information of the artery part
  • a calculation unit for calculating a distance between artery parts associated with the amount of shifting based on the depth information included in the second image
  • the indication information calculating unit is configured to calculate the indication information by using the calculated distance between the artery parts and the amount of shifting.
  • the distance to an artery part is acquired to calculate indication information.
  • a third aspect of the embodiment provides the following device.
  • a device including: the device of the second aspect; a near-infrared camera configured to image the first image; and a depth sensor configured to acquire depth information of the plurality of artery parts, which is used to acquire the indication information.
  • a fourth aspect of the embodiment provides a computer readable storage medium recording a program for allowing a computer to execute the first aspect of the method according to the embodiment.
  • a fifth aspect of the embodiment provides a computer program for allowing a computer to execute the first aspect of the method according to the embodiment.
  • FIG. 1 is a schematic diagram showing an example of the configuration of a device according to an embodiment
  • Fig. 2A is a diagram for describing an aspect in a case where light from an NIR light source is reflected at a subject in the device according to the embodiment;
  • Fig. 2B is a diagram for describing an aspect for acquiring an NIR image and a depth image in the device according to the embodiment
  • Fig. 3 is a diagram for describing the relationship between the wavelength of irradiated light and the transmission depth thereof;
  • Fig. 4 is a diagram for describing the outline of an indication information calculating process, which is implemented by the device according to the embodiment;
  • Fig. 5 is a diagram for describing a distance between two regions of interest (ROIs) in the device according to the embodiment
  • Fig. 6A is a diagram for describing a pixel size in the device according to the embodiment.
  • Fig. 6B is a diagram showing an example of the pixel size in Fig. 6A;
  • FIG. 7 is a diagram showing an example of the functional configuration of the device according to the embodiment.
  • FIG. 8 is a flowchart illustrating an example of an overall indication information calculating process in the device according to the embodiment
  • Fig. 9 is a flowchart illustrating an example of a process which is executed to calculate a distance between artery parts as detection targets in the device according to the embodiment;
  • Fig. 10 is a flowchart illustrating the calculation process of step S22 in Fig. 9;
  • Fig. 11 is a diagram for describing a separation process of step S221 in Fig. 10;
  • Fig. 12 is a diagram for describing an aspect in which the locations of artery parts are estimated to calculate a distance between the artery parts;
  • Fig. 13 is a diagram showing a detection range in a case where a radial artery is detected.
  • a device 10 of the embodiment is configured to obtain biological information of a subject (i.e., a physiological parameter about the cardiovascular system) from a video image of the subject which is imaged in a contactless manner.
  • Fig. 1 is a schematic diagram showing an example of the hardware configuration of the device 10 according to an embodiment.
  • the device 10 includes a processing unit 11, a ROM (Read Only Memory) 12, a RAM (Random Access Memory) 13, a display device 14, an input device 15, and a ToF (Time of Flight) camera 20.
  • the ToF camera 20 includes a near-infrared (NIR) camera 30, an NIR light source 31, and a depth sensor 40.
  • the device 10 is, for example, a cellular phone, a personal digital assistant (PDA) , a personal computer, a robot, a measuring instrument or a game machine or the like.
  • PDA personal digital assistant
  • the processing unit 11 is connected to the individual components by a bus to transfer a control signal and data.
  • the processing unit 11 runs various kinds of programs for implementing the general operation of the device 10, and performs arithmetic operations and timing control or the like.
  • the aforementioned programs may be stored in a computer readable storage medium such as a DVD (Digital Versatile Disc) -ROM (Read Only Memory) or CD (Compact Disk) -ROM (Read Only Memory) .
  • the programs for an operating system and various kinds of data, which are needed to control the operation of the entire device 10, are stored in the ROM 12.
  • the RAM 13 provides a storage area for temporarily storing the programs and data, as well as other data necessary for the operation of the device 10.
  • the display device 14 may be, for example, a flat panel display such as a liquid crystal display or an EL (Electro-Luminescence) display.
  • the input device 15 includes operational buttons, a touch panel, an input pen, a sensor, and the like.
  • the NIR camera 30 mounted in the ToF camera 20 receives light from the NIR light source 31 reflected at the subject.
  • a near-infrared (NIR) image (first image) d10 to be described later can be acquired.
  • the depth sensor 40 also receives light from the NIR light source 31 reflected at the subject.
  • a distance to the subject is calculated pixel by pixel from the time taken for light emitted from the NIR light source 31 to return to the depth sensor 40 and the velocity of light (3×10^8 m/s).
  • a depth image (second image) d20 to be described later including depth information representing the distance to the subject pixel by pixel is acquired.
  • the NIR image d10 and the depth image d20 are both acquired in a contactless manner.
  • Fig. 2A shows an aspect in a case where light from the NIR light source 31 is reflected at a subject 100.
  • light from the NIR light source 31 is emitted to the subject 100, and then the NIR camera 30 and the depth sensor 40 both receive light reflected at the subject 100. Accordingly, the NIR camera 30 and the depth sensor 40 acquire the NIR image d10 and the depth image d20, respectively.
  • Fig. 2B shows an aspect for acquiring the NIR image d10 and the depth image d20.
  • the NIR image d10 and the depth image d20 are images of the same angle of view acquired from a same imaging point P.
  • the imaging point P is the imaging point for both of the NIR camera 30 and the depth sensor 40.
  • the images d10, d20 are periodically output to the processing unit 11 simultaneously in a frame format.
  • Such synchronous timings include, for example, a timing at which pulse light is emitted to the subject 100 from the NIR light source 31.
  • the acquisition of the NIR image d10 and the depth image d20 makes it possible to execute a process of acquiring a pulse wave velocity (PWV) to be described later.
  • PWV pulse wave velocity
  • the ToF camera 20 may be externally mounted to the device 10. Further, the device 10 may be configured to achieve the same functions as the ROM 12 or the RAM 13 or the like using an external storage device such as a hard disk or an optical disc.
  • the ToF camera 20 may be implemented by other alternative schemes as long as the NIR image d10 and the depth image d20 can be acquired.
  • a camera included in the stereo camera may acquire an NIR image.
  • an image acquired by the depth camera may be treated as the NIR image d10.
  • Fig. 3 is a diagram for describing the relationship between the wavelength of irradiated light and the transmission depth thereof.
  • Fig. 4 is a diagram for describing the outline of measurement.
  • Fig. 5 is a diagram for describing a distance between two regions of interest (ROIs) .
  • Fig. 6A is a diagram for describing a pixel size.
  • Fig. 6B is a diagram exemplifying the size of one pixel g shown in Fig. 6A.
  • the transmission distance of light to a deep part from a body surface varies according to wavelengths d1 to d8 of light (wavelength being about 800 nm to 400 nm) . As the wavelength gets larger, the transmission distance of light to the deep part increases. d1 to d8 show a wavelength range from the wavelength of reddish purple to the wavelength of purple.
  • This device 10 uses near-infrared light with the wavelength d1 which, among the wavelengths d1 to d8, has the largest transmission distance to a deep part.
  • the wavelength d1 is, for example, about 750 nm to 800 nm, but is not limited thereto if the NIR image d10 including a deep part of a human body to be detected can be acquired.
  • near-infrared light with the wavelength d1 transmissively reaches an artery located at a deep part (at a depth of 3.0 mm or more from a skin) in a human body as a detection target. Consequently, the device 10 generates an NIR image d10 to be described later, which represents the luminance according to a change in a blood flow (artery flow) flowing through the artery, by using infrared light reflected at an artery, and calculates indication information d40 on the artery flow of a person to be examined or a subject 100 based on the result of evaluation of the NIR image d10.
  • the wavelength d1 has only to allow reflection at an artery, and a wavelength other than that of the aforementioned near-infrared light representing reddish purple may be available.
  • the indication information d40 that is handled by the device 10 of the embodiment is, but not limited to, for example, a pulse wave velocity (PWV).
  • PWV pulse wave velocity
  • the PWV is used as an indication of the progression rate of arterial sclerosis. For example, the greater the value of the PWV, the more likely myocardial infarction will occur.
  • the device 10 of the embodiment uses near-infrared light whose wavelength has a large light transmission depth as mentioned above, it is possible to acquire indication information d40 of a person to be examined according to a change in a blood flow in an arterial vessel, not from a blood flow in a blood capillary.
  • This indication information d40 thus reflects a change in the blood flow through an arterial vessel, which enhances the reliability of the indication information d40.
  • this device 10 outputs the NIR image d10 and the depth image d20 (Fig. 2B) , imaged at the same point of view, from the ToF camera 20 to the processing unit 11 in synchronism.
  • the depth sensor 40 in the ToF camera 20 acquires light emitted from the NIR light source 31 and later reflected at the subject 100, thereby providing the depth image d20 of the subject 100. Further, as the NIR camera 30 images the subject 100 during the period of acquisition of the depth image d20 by the depth sensor 40, the NIR image d10 of the subject 100 is acquired. As shown in Fig. 4, for example, the NIR image d10 is an image including the neck region of the person to be examined, and this image (frame image) is acquired from the ToF camera 20 in order.
  • the processing unit 11 which has acquired the NIR image d10 sets two regions of interest (ROIs) 1 and 2 for an artery part located in the neck of the person to be examined included in the NIR image d10 being treated as a detection target.
  • the ROI 1 includes an artery part far from a heart
  • the ROI 2 includes an artery part closer to the heart.
  • the artery part is, for example, a part of a carotid artery. Accordingly, indication information reflecting a change in an artery flow in the carotid artery is acquired, which enhances the usefulness of the indication information.
  • one way of setting the ROIs 1 and 2 may be, but not limited to, setting the ROIs 1 and 2 at a preset interval therebetween.
  • shapes of or positions in artery parts may be registered in advance in the device 10.
  • the processing unit 11 may set the ROIs 1 and 2 from the registered information after an artery part is identified.
  • the processing unit 11 detects time series signals f (t) and g (t) which vary according to artery flows flowing through artery parts within two ROIs 1 and 2 included in the NIR image d10.
  • the time series signals f(t) and g (t) are extracted in such a way that a photo plethysmography (PPG) is acquired from the NIR image d10.
  • PPG photo plethysmography
  • the horizontal axis represents time t
  • the vertical axis represents the average luminance of all the pixels in the corresponding ROI.
  • the number of samples representing the values of the time series signals f (t) and g (t) increases.
  • values of the time series signals f (t) and g (t) may be given more accurately.
  • a cross-correlation function 111 is a function for calculating convolution of the two time series signals.
  • a coherence indicating the degree of correlation of the two time series signals is calculated by changing the phases of the time series signals. Then, the phase deviations of the time series signals and the similarity of the periodicity of the time series signals are evaluated from the result.
  • when two identical time series signals are input to the cross-correlation function 111, it becomes equivalent to an auto-correlation function and shows a maximum.
  • the processing unit 11 outputs to a subsequent stage the value of "m” indicating the number of samples for the phase delay of the time series signal g (t) when the value of the cross-correlation function 111 shows a maximum, as a phase delay (phase offset) d30.
  • the cross-correlation function 111 may be expressed by, for example, the following equation (1) .
  • n represents the length (e.g., two cycles) of the time series signals f (t) and g (t)
  • m represents the number of samples for the phase delay of the time series signal g(t).
  • the processing unit 11 acquires a distance between two ROIs 1 and 2 from the depth image d20 through a calculation process 112.
  • a distance L between a center F0 of the ROI 1 and a center G0 of the ROI 2 is set as the distance between the two ROIs 1 and 2.
  • the ROIs 1 and 2 are the same as those shown in the NIR image d10
  • the distance L may be set to a value different from the value exemplified in Fig. 5.
  • the maximum distance or the minimum distance between the two ROIs 1 and 2 or a distance for a preset number of pixels between the two ROIs 1 and 2 may be used as the distance L.
  • the distance L between the two ROIs 1 and 2 is set as the distance between artery parts as detection targets.
  • a field of view (FOV) and a resolution are set in a setting unit 113 of the processing unit 11. Further, the processing unit 11 acquires the scale of each pixel in the depth image d20 through an acquisition process 114.
  • when a depth image (image having 600 pixels in width and 360 pixels in height) d20 with a horizontal field of view of h° and a vertical field of view of v° is acquired from an imaging point P of the depth sensor 40 as shown in Fig. 6A, for example, the size (Lh, Lv) (Fig. 6B) per pixel ("g" shown in Fig. 6A) is expressed by the following equation (2).
  • d represents the distance to the depth image d20 from the imaging point P.
  • an average distance to a corresponding ROI (average distance among all the pixels within the ROI) is used as one example of the distance d in the device 10 of the embodiment, as will be described later, the distance may take a different value.
  • the equation (2) merely shows an example size per pixel, which may be changed. The values of the size, Lh and Lv, may be expressed according to the value of the resolution.
  • the processing unit 11 acquires the value of the distance L (Fig. 5) between the ROIs 1 and 2 from the pixel size (Lh, Lv) shown in the equation (2) .
  • when the distance L spans ten pixels in the vertical direction, for example, the value of "L" is given by Lv×10.
  • the processing unit 11 calculates and outputs the PWV as indication information d40 relating to an artery flow of the subject.
  • the PWV as the indication information d40 is acquired by, for example, the following equation (3) .
  • L represents the distance (Fig. 5) between the ROIs 1 and 2
  • D indicates the time corresponding to the aforementioned phase delay d30.
  • D is given by m/(r×N), where m represents the number of samples for the phase delay of the time series signal g(t) indicated by the phase delay d30, r represents the frame rate of the NIR camera 30, and N represents the upsampling number.
  • the device 10 of the embodiment obtains the indication information d40 from the NIR image d10 and the depth image d20.
  • Fig. 7 is a diagram showing an example of the functional configuration of the device 10 that is implemented on the hardware configuration shown in Fig. 1. The following describes the functional configuration of the device 10 with reference to Fig. 7.
  • the device 10 includes a first acquisition unit 101, a detection unit 102, a shift amount calculating unit 103, a second acquisition unit 104, a calculation unit 105, an indication information calculating unit 106, and an output unit 107.
  • Those components are implemented by the processing unit 11 shown in Fig. 1, and are configured as follows.
  • the first acquisition unit 101 acquires an NIR image d10 obtained by imaging an artery part of a subject in a contactless manner.
  • the detection unit 102 detects individual pulse waves corresponding to a plurality of positions of an artery part (time series signals f (t) and g (t) in Fig. 4) based on the NIR image d10.
  • the shift amount calculating unit 103 calculates an amount of shifting (phase delay d30 in Fig. 4) between the individual pulse waves detected by the detection unit 102.
  • the second acquisition unit 104 acquires a depth image d20 including depth information of the artery part.
  • the calculation unit 105 calculates a distance between artery parts (distance L between ROIs 1 and 2 in Fig. 5) associated with the amount of shifting calculated by the shift amount calculating unit 103, based on the depth information included in the depth image d20.
  • the indication information calculating unit 106 calculates indication information (PWV) relating to an artery flow of the subject as biological information by using the amount of shifting. Further, the indication information calculating unit 106 may calculate the indication information d40 by using the distance between the artery parts calculated by the calculation unit 105 and the amount of shifting.
  • PWV indication information
  • the output unit 107 outputs the indication information d40.
  • the components of the individual units 101 to 107 shown in Fig. 7 may be implemented by an ASIC (Application Specific Integrated Circuit) or FPGA (Field Programmable Gate Array) or the like. Those components are referred to in the following description of the operation of the device 10 as needed.
  • ASIC Application Specific Integrated Circuit
  • FPGA Field Programmable Gate Array
  • the processing unit 11 in this embodiment can execute various processes to be described later according to a program.
  • Fig. 8 is a flowchart illustrating one example of the general process of calculating indication information d40.
  • the processing unit 11 acquires an NIR image d10 of the subject 100 (step S11) .
  • an NIR image d10 including the range of the neck of the subject is acquired.
  • the processing unit 11 is implemented as the first acquisition unit 101.
  • the processing unit 11 detects individual pulse waves corresponding to a plurality of positions in an artery part of the subject based on the NIR image d10 (step S12) .
  • the detected pulse waves are indicated as time series signals f (t) and g (t) as exemplified in Fig. 4.
  • Fig. 4 shows an example in which a change in an artery flow flowing through an artery part located in an ROI 1 with the passage of time is represented by the time series signal f (t) .
  • Fig. 4 also shows an example in which a change in an artery flow flowing through an artery part located in an ROI 2 with the passage of time is represented by the time series signal g (t) .
  • the ROIs 1 and 2 are set in correspondence with the positions in an artery part as detection targets in the NIR image d10.
  • the processing unit 11 may perform the detection by upsampling the time series signals f (t) and g (t) .
  • the time series signals f (t) and g (t) are shown more precisely.
  • the processing unit 11 is implemented as the detection unit 102.
  • the processing unit 11 calculates a phase delay d30 between the aforementioned pulse waves (step S13) .
  • the processing unit 11 determines whether the value of the cross-correlation function 111 shown in the equation (1) shows a maximum by shifting the phase of the time series signal g (t) .
  • the processing unit 11 calculates the phase delay d30 of the time series signal g (t) (e.g., the value of "m” indicating the number of samples for the phase delay of the time series signal g(t) ) when the value of the cross-correlation function 111 shows the maximum.
  • the processing unit 11 is implemented as the shift amount calculating unit 103.
  • the processing unit 11 calculates indication information d40 (PWV in Fig. 8) on an artery part of the subject, as biological information, by using the phase delay d30 (step S14) . Further, the processing unit 11 outputs the indication information d40 (step S15) . Accordingly, the indication information d40 may be provided visually.
  • step S14 the processing unit 11 is implemented as the indication information calculating unit 106.
  • step S15 the processing unit 11 is implemented as the output unit 107.
  • the process of acquiring the value of "L" in the equation (3) is illustrated in a flowchart in Fig. 9.
  • the value of "L" mentioned above may be input via the input device 15. Even in this manner, the indication information d40 may also be obtained from the equation (3) .
  • Fig. 9 is a flowchart illustrating one example of the process of calculating the distance L in the equation (3) .
  • the processing unit 11 acquires, from the ToF camera 20, a depth image d20 having the same point of view as the NIR image d10 in synchronism with the NIR image d10 (step S21) .
  • the depth image d20 includes depth information representing a distance to the subject pixel by pixel.
  • the processing unit 11 is implemented as the second acquisition unit 104.
  • the processing unit 11 then calculates a distance to an artery part as a detection target based on the depth image d20 (step S22) . This process is illustrated in detail in a flowchart in Fig. 10 to be described later.
  • the processing unit 11 calculates a distance between the artery parts from the result of the calculation of step S22 (step S23) .
  • the distance L between the center F0 of the ROI 1 and the center G0 of the ROI 2 is calculated as the distance between the artery parts in the step S23.
  • the processing unit 11 is implemented as the calculation unit 105.
  • Fig. 10 is a flowchart illustrating the calculation process of step S22 in Fig. 9.
  • Fig. 11 is a diagram for describing a separation process of step S221 in Fig. 10.
  • the processing unit 11 separates a foreground and a background from the depth image d20 acquired in step S21 in Fig. 9 (step S221).
  • a foreground G1 and a background are distinguished through filtering performed by determining whether the value of depth information included in the depth image d20 for a target (the distance to the target from the imaging point P) is equal to or greater than a threshold value. Then, a pixel region having a value equal to or greater than the threshold value is removed as a background G3 far from the imaging point P.
  • an edge G2 represents a portion of active motion, and is eliminated from the foreground G1.
  • Examples of the edge detection include threshold processing of calculating a gradient, for example.
  • the ROIs 1 and 2 shown in Fig. 4 are specified as the foreground through the separation process of step S221.
  • the processing unit 11 creates a histogram (feature amounts of the pixels) of the distances indicated by the depth information, with the ROIs 1 and 2 (Fig. 4) as targets (step S222).
  • the processing unit 11 calculates an average value of the distances of all the pixels in each ROI 1 and 2 (distances indicated by the depth information) as the distance to each ROI 1 and 2 based on the histogram created in step S222 (step S223); a rough sketch of this calculation appears after this list.
  • when a distance indicated by the depth information is clearly abnormal, the processing unit 11 eliminates that value as an unfit value, and then calculates the average value.
  • the average value calculated in step S223 is set as the distance d (Fig. 6A) to an artery part in each ROI from the imaging point P.
  • the size (Lh, Lv) per pixel is acquired from the equation (2) .
  • the distance between artery parts is calculated in step S23 in Fig. 9. That is, the distance L between the two ROIs 1 and 2 (Fig. 5) as the distance between the artery parts is calculated from the pixel size (Lh, Lv) acquired from the equation (2) .
  • the value of "L" is given by Lv ⁇ 10.
  • the processing unit 11 may calculate the PWV shown in the equation (3) for the time series signals f (t) and g (t) of preset cycles (e.g., five cycles, ten cycles, or the like) .
  • the average value, the maximum value or the minimum value of the PWV, for example may also be used as the indication information d40. Even if the PWV cannot be properly calculated from the time series signals f(t) and g (t) at a certain timing, therefore, the adequate indication information d40 is obtained by using the average value or the like of the PWV acquired from the time series signals f(t) and g (t) for the aforementioned cycles.
  • the distance to an artery part within each ROI from the imaging point P is acquired based on the depth image d20, and the actual distance L or the like between the ROIs 1 and 2 in the detection target is calculated based on this distance d.
  • the distance d to an artery part under a skin which cannot be directly observed is acquired at the time of acquiring the time series signals f (t) and g (t) based on the image of the artery part as the detection target. Accordingly, the time series signals of pulse waves to be detected can be made to adequately reflect the actual pulse waves.
  • if the pulse waves of blood capillaries near a skin surface are measured, an indication such as the PWV cannot be obtained accurately.
  • the present embodiment, which accurately acquires an indication relating to a blood flow by obtaining the pulse waves of an artery, can reliably identify an artery part by acquiring the distance d in the above-described manner.
  • the indication information d40 is calculated to reflect a change in a blood flow flowing through an artery vessel. This can improve the reliability of the indication information.
  • the distance between artery parts (distance L between the ROIs 1 and 2 in Fig. 5) is acquired from the depth image d20, thus eliminating the need for an operation of inputting the distance L. This eliminates an error in an input value. Thus, correct indication information d40 can be obtained.
  • the NIR image d10 and the depth image d20 at the same field of view are output from the ToF camera 20 to the processing unit 11 in synchronism with each other.
  • the processing unit 11 can obtain the indication information d40 through steps S21 to S23 in Fig. 9 in synchronism with one another.
  • Fig. 12 exemplifies an aspect in which with an artery part 71 being located in the ROIs 1 and 2 shown in Fig. 4, the locations of the artery part 71 are estimated to calculate a distance between the artery parts 71 according to the result of the estimation.
  • the processing unit 11 registers patterns of distances to artery parts around the neck of a subject in advance, and compares depth information included in the depth image d20 with the registered patterns of distance information to estimate the location of the artery part 71.
  • the processing unit 11 calculates the entire length of the artery part 71 along the estimated locations of the artery parts 71 as a distance L1. As a result, a more accurate distance L1 between the artery parts 71 may be obtained, and more accurate indication information d40 may be calculated.
  • shapes of artery parts may be patterned in advance, and the location of an artery part included in the depth image d20 may be estimated from the patterns.
  • Fig. 13 exemplifies a case where a radial artery within an arm range 81 of a subject is a detection target. Even in this case, indication information d40 reflecting a change in the blood flow of a radial artery in an arm is acquired.
  • the device-relating embodiment and the method-relating embodiment are based on the same concept, so that the technical advantages that are brought about by the device-relating embodiment are also the same as those brought about by the method-relating embodiment.
  • the foregoing description of the embodiment of the device should be referred to, and the details thereof will not be repeated herein.
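The distance-to-ROI calculation outlined in the bullets above (steps S221 to S223) can be illustrated with a short Python sketch. It assumes the depth image is available as a NumPy array of per-pixel distances and that an ROI is given as rectangular bounds; the background threshold and the percentile-based rejection of unfit values are illustrative assumptions, not values taken from the disclosure.

    import numpy as np

    def roi_distance(depth_image, roi, background_threshold=1.5):
        # depth_image: 2-D array of per-pixel distances (metres) from the imaging point P
        # roi: (top, bottom, left, right) bounds of the ROI in the depth image
        top, bottom, left, right = roi
        patch = depth_image[top:bottom, left:right]

        # separate foreground and background: drop pixels at or beyond the threshold
        foreground = patch[patch < background_threshold]

        # simple histogram-style rejection of clearly unfit distance values
        lo, hi = np.percentile(foreground, [5, 95])
        kept = foreground[(foreground >= lo) & (foreground <= hi)]

        # the average distance of the remaining pixels is used as the distance d to the ROI
        return kept.mean()

The distance d obtained this way for each ROI feeds into equation (2) to derive the per-pixel size, from which the distance L between the ROIs 1 and 2 is calculated.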

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Hematology (AREA)
  • Public Health (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Physiology (AREA)
  • Cardiology (AREA)
  • Vascular Medicine (AREA)
  • Measuring Pulse, Heart Rate, Blood Pressure Or Blood Flow (AREA)
  • Image Analysis (AREA)

Abstract

A device (10) and a method for acquiring biological information are provided. The method for acquiring biological information includes: acquiring a first image of an artery part of a subject imaged in a contactless manner (S11); detecting individual pulse waves corresponding to a plurality of positions in the artery part based on the first image (S12); calculating an amount of shifting between the detected individual pulse waves (S13); and calculating indication information on an artery flow of the subject, as the biological information, by using the amount of shifting (S14).

Description

DEVICE AND METHOD FOR ACQUIRING BIOLOGICAL INFORMATION
Technical Field
The present disclosure relates to a device and method for acquiring a physiological parameter about the cardiovascular system by imaging photoplethysmography (iPPG).
Background Art
A device is known that acquires an image of a subject by imaging the subject, and calculates a pulse wave velocity from this image. For example, there is a publicly known device that images the face of a subject, identifies two different target regions in the facial image, and calculates a pulse wave velocity from the amount of shifting of pulse waves in both regions.
Summary of Invention
Under the above circumstances, a technical advantage is provided by an embodiment of the present disclosure describing a device and method for acquiring biological information.
A first aspect of an embodiment provides the following method.
A method for acquiring biological information, including:
acquiring a first image of an artery part of a subject imaged in a contactless manner;
detecting individual pulse waves corresponding to a plurality of positions in the artery part based on the first image;
calculating an amount of shifting between the detected individual pulse waves; and
calculating indication information on an artery flow of the subject, as the biological information, by using the amount of shifting.
According to the first aspect, highly reliable indication information is provided.
The first aspect of the embodiment further includes:
acquiring a second image including depth information of the artery part; and
calculating a distance between artery parts associated with the amount of shifting based on the depth information included in the second image,
wherein the step of calculating the indication information calculates the indication information by using the calculated distance between the artery parts and the amount of shifting.
According to this first aspect, the distance to an artery part is acquired to calculate indication information.
A second aspect of the embodiment provides the following device.
A device for acquiring biological information, including:
a first acquisition unit for acquiring a first image of an artery part of a subject imaged in a contactless manner;
a detection unit for detecting individual pulse waves corresponding to a plurality of positions in the artery part based on the first image;
a shift amount calculating unit for calculating an amount of shifting between the detected individual pulse waves; and
an indication information calculating unit for calculating indication information on an artery flow of the subject, as the biological information, by using the amount of shifting.
According to the second aspect, highly reliable indication information is provided.
The second aspect of the embodiment further includes:
a second acquisition unit for acquiring a second image including depth information of the artery part; and
a calculation unit for calculating a distance between artery parts associated with the amount of shifting based on the depth information included in the second image,
wherein the indication information calculating unit is configured to calculate the indication information by using the calculated distance between the artery parts and the amount of shifting.
According to this second aspect, the distance to an artery part is acquired to calculate indication information.
A third aspect of the embodiment provides the following device.
A device including: the device of the second aspect; a near-infrared camera configured to image the first image; and a depth sensor configured to acquire depth information of the plurality of artery parts, which is used to acquire the indication information.
According to the third aspect, highly reliable indication information is provided.
A fourth aspect of the embodiment provides a computer readable storage medium recording a program for allowing a computer to execute the first aspect of the method according to the embodiment.
According to the fourth aspect, highly reliable indication information is provided.
A fifth aspect of the embodiment provides a computer program for allowing a computer to execute the first aspect of the method according to the embodiment.
According to the fifth aspect, highly reliable indication information is provided.
Brief Description of Drawings
To describe the technical solutions in the embodiments more clearly, the following briefly describes the accompanying drawings required for describing the present embodiments. Apparently, the accompanying drawings in the following description depict merely some of the possible embodiments, and a person of ordinary skill in the art may still derive other drawings, without creative efforts, from these  accompanying drawings, in which:
[Fig. 1] Fig. 1 is a schematic diagram showing an example of the configuration of a device according to an embodiment;
[Fig. 2A] Fig. 2A is a diagram for describing an aspect in a case where light from an NIR light source is reflected at a subject in the device according to the embodiment;
[Fig. 2B] Fig. 2B is a diagram for describing an aspect for acquiring an NIR image and a depth image in the device according to the embodiment;
[Fig. 3] Fig. 3 is a diagram for describing the relationship between the wavelength of irradiated light and the transmission depth thereof;
[Fig. 4] Fig. 4 is a diagram for describing the outline of an indication information calculating process, which is implemented by the device according to the embodiment;
[Fig. 5] Fig. 5 is a diagram for describing a distance between two regions of interest (ROIs) in the device according to the embodiment;
[Fig. 6A] Fig. 6A is a diagram for describing a pixel size in the device according to the embodiment;
[Fig. 6B] Fig. 6B is a diagram showing an example of the pixel size in Fig. 6A;
[Fig. 7] Fig. 7 is a diagram showing an example of the functional configuration of the device according to the embodiment;
[Fig. 8] Fig. 8 is a flowchart illustrating an example  of an overall indication information calculating process in the device according to the embodiment;
[Fig. 9] Fig. 9 is a flowchart illustrating an example of a process which is executed to calculate a distance between artery parts as detection targets in the device according to the embodiment;
[Fig. 10] Fig. 10 is a flowchart illustrating the calculation process of step S22 in Fig. 9;
[Fig. 11] Fig. 11 is a diagram for describing a separation process of step S221 in Fig. 10;
[Fig. 12] Fig. 12 is a diagram for describing an aspect in which the locations of artery parts are estimated to calculate a distance between the artery parts; and
[Fig. 13] Fig. 13 is a diagram showing a detection range in a case where a radial artery is detected.
Description of Embodiments
The following clearly describes technical solutions of the embodiments of the present disclosure with reference to the accompanying drawings of the embodiments of the present disclosure. Apparently, the described embodiments are merely some rather than all of the embodiments of the present disclosure. It is to be noted that all other embodiments which may be obtained by a person skilled in the art based on the embodiments of the present disclosure without creative efforts shall fall within the protection scope of embodiments of the present disclosure.
The device 10 of the embodiment is configured to obtain biological information of a subject (i.e., a physiological parameter about the cardiovascular system) from a video image of the subject which is imaged in a contactless manner.
[Configuration of Device 10]
Fig. 1 is a schematic diagram showing an example of the hardware configuration of the device 10 according to an embodiment.
As shown in Fig. 1, the device 10 includes a processing unit 11, a ROM (Read Only Memory) 12, a RAM (Random Access Memory) 13, a display device 14, an input device 15, and a ToF (Time of Flight) camera 20. The ToF camera 20 includes a near-infrared (NIR) camera 30, an NIR light source 31, and a depth sensor 40. In this embodiment, the device 10 is, for example, a cellular phone, a personal digital assistant (PDA), a personal computer, a robot, a measuring instrument or a game machine or the like.
The processing unit 11 is connected to the individual components by a bus to transfer a control signal and data. The processing unit 11 runs various kinds of programs for implementing the general operation of the device 10, and performs arithmetic operations and timing control or the like. The aforementioned programs may be stored in a computer readable storage medium such as a DVD (Digital Versatile Disc) -ROM (Read Only Memory) or CD (Compact Disk) -ROM (Read Only Memory) .
The programs for an operating system and various  kinds of data, which are needed to control the operation of the entire device 10, are stored in the ROM 12.
The RAM 13 provides a storage area for temporarily storing the programs and data, as well as other data necessary for the operation of the device 10.
The display device 14 may be, for example, a flat panel display such as a liquid crystal display or an EL (Electro-Luminescence) display. The input device 15 includes operational buttons, a touch panel, an input pen, a sensor, and the like.
The NIR camera 30 mounted in the ToF camera 20 receives light from the NIR light source 31 reflected at the subject. Thus, a near-infrared (NIR) image (first image) d10 to be described later can be acquired.
The depth sensor 40 also receives light from the NIR light source 31 reflected at the subject. Thus, in the ToF camera 20, a distance to the subject is calculated pixel by pixel from the time taken for light emitted from the NIR light source 31 to return to the depth sensor 40 and the velocity of light (3×10^8 m/s). Accordingly, a depth image (second image) d20 to be described later, including depth information representing the distance to the subject pixel by pixel, is acquired. The NIR image d10 and the depth image d20 are both acquired in a contactless manner.
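As a rough numerical illustration of this time-of-flight relationship (a sketch of the principle, not the internal implementation of the ToF camera 20; the function and array names are assumptions), the per-pixel distance follows from the round-trip travel time and the velocity of light:

    import numpy as np

    C = 3.0e8  # velocity of light in m/s, as stated above

    def tof_depth(round_trip_time_s):
        # round_trip_time_s: 2-D array of per-pixel times for the emitted NIR pulse
        # to travel to the subject and back to the depth sensor (seconds)
        # the light covers the distance twice, so the one-way distance is C*t/2
        return C * np.asarray(round_trip_time_s) / 2.0

    # a pixel whose echo returns after 4 ns corresponds to a distance of about 0.6 m
    print(tof_depth(np.array([[4e-9]])))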
Fig. 2A shows an aspect in a case where light from the NIR light source 31 is reflected at a subject 100. In the  example shown in Fig. 2A, light from the NIR light source 31 is emitted to the subject 100, and then the NIR camera 30 and the depth sensor 40 both receive light reflected at the subject 100. Accordingly, the NIR camera 30 and the depth sensor 40 acquire the NIR image d10 and the depth image d20, respectively.
Fig. 2B shows an aspect for acquiring the NIR image d10 and the depth image d20. In the example shown in Fig. 2B, the NIR image d10 and the depth image d20 are images of the same angle of view acquired from a same imaging point P. In this case, the imaging point P is the imaging point for both of the NIR camera 30 and the depth sensor 40.
The images d10, d20 are periodically output to the processing unit 11 simultaneously in a frame format. Such synchronous timings include, for example, a timing at which pulse light is emitted to the subject 100 from the NIR light source 31. The acquisition of the NIR image d10 and the depth image d20 makes it possible to execute a process of acquiring a pulse wave velocity (PWV) to be described later.
It should be noted that the ToF camera 20 may be externally mounted to the device 10. Further, the device 10 may be configured to achieve the same functions as the ROM 12 or the RAM 13 or the like using an external storage device such as a hard disk or an optical disc.
The ToF camera 20 may be implemented by other alternative schemes as long as the NIR image d10 and the depth image d20 can be acquired. For example, when the depth is measured with a stereo camera, a camera included in the stereo  camera may acquire an NIR image. When the depth is measured with a depth camera, an image acquired by the depth camera may be treated as the NIR image d10.
[Outline of Indication Information Calculating Process]
Next, the outline of a process of calculating a PWV as an indication relating to a blood flow, which is implemented by the device 10, will be described with reference to Figs. 1 to 6B. Fig. 3 is a diagram for describing the relationship between the wavelength of irradiated light and the transmission depth thereof. Fig. 4 is a diagram for describing the outline of measurement. Fig. 5 is a diagram for describing a distance between two regions of interest (ROIs). Fig. 6A is a diagram for describing a pixel size. Fig. 6B is a diagram exemplifying the size of one pixel g shown in Fig. 6A.
As shown in Fig. 3, the transmission distance of light to a deep part from a body surface varies according to wavelengths d1 to d8 of light (wavelength being about 800 nm to 400 nm) . As the wavelength gets larger, the transmission distance of light to the deep part increases. d1 to d8 show a wavelength range from the wavelength of reddish purple to the wavelength of purple.
This device 10 uses near-infrared light with the wavelength d1 which, among the wavelengths d1 to d8, has the largest transmission distance to a deep part. The wavelength d1 is, for example, about 750 nm to 800 nm, but is not limited thereto if the NIR image d10 including a deep part of a human body to be detected can be acquired.
In the example shown in Fig. 3, near-infrared light with the wavelength d1 transmissively reaches an artery located at a deep part (at a depth of 3.0 mm or more from a skin) in a human body as a detection target. Consequently, the device 10 generates an NIR image d10 to be described later, which represents the luminance according to a change in a blood flow (artery flow) flowing through the artery, by using infrared light reflected at an artery, and calculates indication information d40 on the artery flow of a person to be examined or a subject 100 based on the result of evaluation of the NIR image d10.
The wavelength d1 has only to allow reflection at an artery, and a wavelength other than that of the aforementioned near-infrared light representing reddish purple may be available.
The indication information d40 that is handled by the device 10 of the embodiment is, but not limited to, for example, a pulse wave velocity (PWV). The PWV is used as an indication of the progression rate of arterial sclerosis. For example, the greater the value of the PWV, the more likely myocardial infarction will occur.
Since the device 10 of the embodiment uses near-infrared light whose wavelength has a large light transmission depth as mentioned above, it is possible to acquire indication information d40 of a person to be examined according to a change in a blood flow in an arterial vessel, not from a blood flow in a blood capillary. This indication information d40 thus reflects a change in the blood flow through an arterial vessel, which enhances its reliability.
As shown in Fig. 4, this device 10 outputs the NIR image d10 and the depth image d20 (Fig. 2B) , imaged at the same point of view, from the ToF camera 20 to the processing unit 11 in synchronism.
In Fig. 4, for example, the depth sensor 40 in the ToF camera 20 acquires light emitted from the NIR light source 31 and later reflected at the subject 100, thereby providing the depth image d20 of the subject 100. Further, as the NIR camera 30 images the subject 100 during the period of acquisition of the depth image d20 by the depth sensor 40, the NIR image d10 of the subject 100 is acquired. As shown in Fig. 4, for example, the NIR image d10 is an image including the neck region of the person to be examined, and this image (frame image) is acquired from the ToF camera 20 in order.
The processing unit 11 which has acquired the NIR image d10 sets two regions of interest (ROIs) 1 and 2 for an artery part located in the neck of the person to be examined included in the NIR image d10 being treated as a detection target. In the example of Fig. 4, the ROI 1 includes an artery part far from a heart, and the ROI 2 includes an artery part closer to the heart. In this case, the artery part is, for example, a part of a carotid artery. Accordingly, indication information reflecting a change in an artery flow in the carotid artery is acquired, which enhances the usefulness of the indication information.
It is to be noted that one way of setting the  ROIs  1 and 2 may be, but not limited to, setting the  ROIs  1 and 2 at a preset interval therebetween. For example, shapes of or positions in artery parts may be registered in advance in the device 10. Thus, the processing unit 11 may set the  ROIs  1 and 2 from the registered information after an artery part is identified.
Further, the processing unit 11 detects time series signals f(t) and g(t) which vary according to artery flows flowing through artery parts within the two ROIs 1 and 2 included in the NIR image d10. In this case, the time series signals f(t) and g(t) are extracted in such a way that a photoplethysmography (PPG) signal is acquired from the NIR image d10. In the time series signals f(t) and g(t), the horizontal axis represents time t, and the vertical axis represents the average luminance of all the pixels in the corresponding ROI.
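A minimal sketch of how such time series could be formed from a stack of NIR frames is given below, assuming the frames are available as a NumPy array and each ROI is a rectangular region; the names are illustrative and not part of the disclosure.

    import numpy as np

    def roi_time_series(frames, roi):
        # frames: array of shape (num_frames, height, width) holding NIR luminance values
        # roi: (top, bottom, left, right) bounds of the region of interest
        top, bottom, left, right = roi
        # one sample per frame: the average luminance of all pixels in the ROI
        return frames[:, top:bottom, left:right].mean(axis=(1, 2))

    # f(t) and g(t) for the two ROIs set on the artery part, e.g.:
    # f_t = roi_time_series(nir_frames, roi1)
    # g_t = roi_time_series(nir_frames, roi2)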
The aforementioned time series signals f (t) and g(t) may be subjected to Nx upsampling (e.g., N=8) . In this case, the number of samples representing the values of the time series signals f (t) and g (t) increases. Thus, values of the time series signals f (t) and g (t) may be given more accurately.
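One way to realise this Nx upsampling, sketched here with SciPy's Fourier resampling (an assumption; the disclosure does not prescribe a particular resampling method):

    from scipy.signal import resample

    N = 8  # upsampling factor

    def upsample(signal, n=N):
        # increase the number of samples n-fold so that the phase delay between
        # f(t) and g(t) can be resolved more finely
        return resample(signal, len(signal) * n)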
The cross-correlation function 111 is a function for calculating convolution of the two time series signals. A coherence indicating the degree of correlation of the two time series signals is calculated by changing the phases of the time series signals. Then, the phase deviations of the time series signals and the similarity of the periodicity of the time series signals are evaluated from the result. When two identical time series signals are input to the cross-correlation function 111, it becomes equivalent to an auto-correlation function and shows a maximum. In this embodiment, the processing unit 11 outputs to a subsequent stage the value of "m" indicating the number of samples for the phase delay of the time series signal g(t) when the value of the cross-correlation function 111 shows a maximum, as a phase delay (phase offset) d30.
The cross-correlation function 111 may be expressed by, for example, the following equation (1) .
C(m) = Σ_{t=0}^{n-1} f(t)·g(t+m)    (1)
In the equation (1), n represents the length (e.g., two cycles) of the time series signals f(t) and g(t), and m represents the number of samples for the phase delay of the time series signal g(t).
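A plain sketch of the phase-delay estimation follows: the upsampled signals are correlated at every relative shift, and the shift at the correlation maximum is taken as the number of samples m of the phase delay d30. This is one possible reading of equation (1); the exact formulation and any normalisation used by the cross-correlation function 111 are not reproduced here.

    import numpy as np

    def phase_delay_samples(f_t, g_t):
        # f_t, g_t: upsampled time series of equal length n (e.g. two cycles)
        f = f_t - np.mean(f_t)
        g = g_t - np.mean(g_t)
        corr = np.correlate(f, g, mode="full")    # correlation for every relative shift
        lags = np.arange(-(len(g) - 1), len(f))   # shift (in samples) for each corr value
        m = lags[np.argmax(corr)]                 # shift at the correlation maximum
        return abs(m)                             # number of samples of the phase delay d30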
Moreover, in Fig. 4, the processing unit 11 acquires a distance between the two ROIs 1 and 2 from the depth image d20 through a calculation process 112. In Fig. 5, for example, a distance L between a center F0 of the ROI 1 and a center G0 of the ROI 2 is set as the distance between the two ROIs 1 and 2. The ROIs 1 and 2 are the same as those shown in the NIR image d10.
The distance L may be set to a value different from the value exemplified in Fig. 5. For example, the maximum distance or the minimum distance between the two  ROIs  1 and 2 or a distance for a preset number of pixels between the two  ROIs  1 and 2 may be used as the distance L.
In this embodiment, the distance L between the two  ROIs  1 and 2 is set as the distance between artery parts as detection targets.
In Fig. 4, a field of view (FOV) and a resolution are set in a setting unit 113 of the processing unit 11. Further, the processing unit 11 acquires the scale of each pixel in the depth image d20 through an acquisition process 114. When a depth image (image having 600 pixels in width and 360 pixels in height) d20 with a horizontal field of view of h° and a vertical field of view of v° from an imaging point P of the depth sensor 40 is acquired as shown in Fig. 6A, for example, the size (Lh, Lv) (Fig. 6B) per pixel ( "g" shown in Fig. 6A) is expressed by the following equation (2) .
Lh = 2·d·tan(h/2)/600
Lv = 2·d·tan(v/2)/360    (2)
In the equation (2), d represents the distance to the depth image d20 from the imaging point P. Although an average distance to a corresponding ROI (the average distance among all the pixels within the ROI) is used as one example of the distance d in the device 10 of the embodiment, the distance may take a different value, as will be described later. It is to be understood that the equation (2) merely shows an example size per pixel, which may be changed. The values of the size, Lh and Lv, may be expressed according to the value of the resolution.
The processing unit 11 acquires the value of the distance L (Fig. 5) between the ROIs 1 and 2 from the pixel size (Lh, Lv) shown in the equation (2) . When the distance L spans ten pixels in the vertical direction, for example, the value of "L" is given by Lv×10.
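For illustration, the per-pixel size of the equation (2) and the resulting inter-ROI distance may be computed as sketched below. The field-of-view angles and the ten-pixel separation are example values; the 600×360 resolution follows the example above.

```python
import math

def pixel_size(d, h_fov_deg, v_fov_deg, width=600, height=360):
    """Per-pixel size (Lh, Lv) at distance d from the imaging point P, as in equation (2)."""
    lh = 2 * d * math.tan(math.radians(h_fov_deg) / 2) / width
    lv = 2 * d * math.tan(math.radians(v_fov_deg) / 2) / height
    return lh, lv

# Example: ROIs separated by ten pixels in the vertical direction -> L = Lv * 10.
lh, lv = pixel_size(d=0.5, h_fov_deg=62.0, v_fov_deg=38.0)  # assumed FOV values
L = lv * 10
```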
Moreover, in Fig. 4, the processing unit 11 calculates and outputs the PWV as indication information d40 relating to an artery flow of the subject. The PWV as the indication information d40 is acquired by, for example, the following equation (3) .
PWV=L/D    (3)
In the equation (3) , L represents the distance (Fig. 5) between the ROIs 1 and 2, and D indicates a time corresponding to the aforementioned phase delay d30. In this case, D is given by m/ (r×N) , where m represents the number of samples for the phase delay of the time series signal g (t) indicated by the phase delay d30, r represents the frame rate of the NIR camera 30, and N represents the upsampling number.
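Combining the phase delay d30 and the distance L, the PWV of the equation (3) may be evaluated as sketched below (illustrative only; the numeric values in the usage comment are made up).

```python
def pulse_wave_velocity(L, m, frame_rate, upsample_factor):
    """PWV = L / D, where D = m / (r * N) converts the phase delay of m samples
    into seconds using the NIR camera frame rate r and the upsampling number N."""
    D = m / (frame_rate * upsample_factor)
    return L / D

# e.g. pulse_wave_velocity(L=0.12, m=6, frame_rate=30, upsample_factor=8)
# gives 0.12 / (6 / 240) = 4.8 (metres per second), purely as an illustration.
```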
As explained above, the device 10 of the embodiment obtains the indication information d40 from the NIR image d10 and the depth image d20.
[Functional Configuration of Device 10]
Fig. 7 is a diagram showing an example of the  functional configuration of the device 10 that is implemented on the hardware configuration shown in Fig. 1. The following describes the functional configuration of the device 10 with reference to Fig. 7. As shown in Fig. 7, the device 10 includes a first acquisition unit 101, a detection unit 102, a shift amount calculating unit 103, a second acquisition unit 104, a calculation unit 105, an indication information calculating unit 106, and an output unit 107.
Those components are implemented by the processing unit 11 shown in Fig. 1, and are configured as follows.
The first acquisition unit 101 acquires an NIR image d10 obtained by imaging an artery part of a subject in a contactless manner.
The detection unit 102 detects individual pulse waves corresponding to a plurality of positions of an artery part (time series signals f (t) and g (t) in Fig. 4) based on the NIR image d10.
The shift amount calculating unit 103 calculates an amount of shifting (phase delay d30 in Fig. 4) between the individual pulse waves detected by the detection unit 102.
The second acquisition unit 104 acquires a depth image d20 including depth information of the artery part.
The calculation unit 105 calculates a distance between artery parts (distance L between  ROIs  1 and 2 in Fig. 5) associated with the amount of shifting calculated by the shift amount calculating unit 103, based on the depth information included in the depth image d20.
The indication information calculating unit 106 calculates indication information (PWV) relating to an artery flow of the subject as biological information by using the amount of shifting. Further, the indication information calculating unit 106 may calculate the indication information d40 by using the distance between the artery parts calculated by the calculation unit 105 and the amount of shifting.
The output unit 107 outputs the indication information d40.
The components of the individual units 101 to 107 shown in Fig. 7 may be implemented by an ASIC (Application Specific Integrated Circuit) , an FPGA (Field Programmable Gate Array) , or the like. Those components are referred to in the following description of the operation of the device 10 as needed.
[Operation of Device 10]
The following describes the general processing of the device 10 with reference to Figs. 1 to 8. The processing unit 11 in this embodiment can execute various processes to be described later according to a program.
Fig. 8 is a flowchart illustrating one example of the general process of calculating indication information d40.
In Fig. 8, when a subject 100 is imaged by the NIR camera 30 of the ToF camera 20, the processing unit 11 acquires an NIR image d10 of the subject 100 (step S11) . In Fig. 4, for example, an NIR image d10 including the range of the neck of the subject is acquired.
In this step, the processing unit 11 is implemented as the first acquisition unit 101.
The processing unit 11 detects individual pulse waves corresponding to a plurality of positions in an artery part of the subject based on the NIR image d10 (step S12) . The detected pulse waves are indicated as time series signals f (t) and g (t) as exemplified in Fig. 4. For example, Fig. 4 shows an example in which a change in an artery flow flowing through an artery part located in an ROI 1 with the passage of time is represented by the time series signal f (t) . Fig. 4 also shows an example in which a change in an artery flow flowing through an artery part located in an ROI 2 with the passage of time is represented by the time series signal g (t) . The ROIs 1 and 2 are set in correspondence with the positions in an artery part as detection targets in the NIR image d10.
In step S12, the processing unit 11 may perform the detection by upsampling the time series signals f (t) and g (t) . In a case of 8x upsampling, for example, samples of the time series signals f (t) and g (t) are interpolated at timings of t=0.125, 0.25, 0.375, 0.5, 0.625, 0.75, and 0.875 over an interval of t=0 to 1. Thus, the time series signals f (t) and g (t) are represented more precisely.
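The interpolation itself is not limited to a particular method; as one hedged example, simple linear interpolation reproducing the timings listed above could be sketched as follows.

```python
import numpy as np

def upsample_ppg(signal, factor=8):
    """Upsample a time series by `factor` using linear interpolation.

    With factor=8, interpolated samples are placed at t = 0.125, 0.25, ...,
    0.875 between each pair of original samples (t = 0 and t = 1).
    """
    signal = np.asarray(signal, dtype=float)
    t_orig = np.arange(len(signal))
    # (len - 1) intervals, each subdivided into `factor` steps, plus the last sample.
    t_new = np.linspace(0, len(signal) - 1, (len(signal) - 1) * factor + 1)
    return np.interp(t_new, t_orig, signal)
```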
In this step, the processing unit 11 is implemented as the detection unit 102.
Then, the processing unit 11 calculates a phase delay d30 between the aforementioned pulse waves (step S13) . In the example of Fig. 4, with reference to the two time series signals f (t) and g (t) , the processing unit 11 determines whether the value of the cross-correlation function 111 shown in the equation (1) shows a maximum while shifting the phase of the time series signal g (t) . When the determination indicates that the value of the cross-correlation function 111 shows the maximum, the processing unit 11 calculates, as the phase delay d30 of the time series signal g (t) , the value of "m" indicating the number of samples for the phase delay of the time series signal g (t) at which the value of the cross-correlation function 111 shows the maximum.
In this step, the processing unit 11 is implemented as the shift amount calculating unit 103.
The processing unit 11 calculates indication information d40 (PWV in Fig. 8) on an artery part of the subject, as biological information, by using the phase delay d30 (step S14) . Further, the processing unit 11 outputs the indication information d40 (step S15) . Accordingly, the indication information d40 may be provided visually.
In step S14, the processing unit 11 is implemented as the indication information calculating unit 106. In addition, in step S15, the processing unit 11 is implemented as the output unit 107.
In step S14, the PWV as the indication information d40 is obtained from PWV=L/D (where L is the distance between the ROIs 1 and 2, and D is the time corresponding to the phase delay d30 resulting from the calculation of step S13) , as expressed in the aforementioned equation (3) . The process of acquiring the value of "L" in the equation (3) is illustrated in a flowchart in Fig. 9.
The value of "L" mentioned above may be input via the input device 15. Even in this manner, the indication information d40 may also be obtained from the equation (3) .
Fig. 9 is a flowchart illustrating one example of the process of calculating the distance L in the equation (3) .
In Fig. 9, the processing unit 11 acquires, from the ToF camera 20, a depth image d20 having the same point of view as the NIR image d10 in synchronism with the NIR image d10 (step S21) . In this example, the depth image d20 includes depth information representing a distance to the subject pixel by pixel.
In this step, the processing unit 11 is implemented as the second acquisition unit 104.
The processing unit 11 then calculates a distance to an artery part as a detection target based on the depth image d20 (step S22) . This process is illustrated in detail in a flowchart in Fig. 10 to be described later.
Moreover, the processing unit 11 calculates a distance between the artery parts from the result of the calculation of step S22 (step S23) . In Fig. 5, for example, the distance L between the center F0 of the ROI 1 and the center G0 of the ROI 2 is calculated as the distance between the artery parts in step S23.
In steps S22 and S23, the processing unit 11 is implemented as the calculation unit 105.
The following describes an example of the calculation process of step S22 with reference to Figs. 10 and 11. Fig. 10 is a flowchart illustrating the calculation process of step S22 in Fig. 9. Fig. 11 is a diagram for describing a separation process of step S221 in Fig. 10.
In Fig. 10, the processing unit 11 separates a foreground and a background from the depth image d20 acquired in step S21 in Fig. 9 (step S221) . In Fig. 11, a foreground G1 and a background G3 are distinguished through filtering performed by determining whether the value of the depth information included in the depth image d20 for a target pixel (the distance to the target from the imaging point P) is equal to or greater than a threshold value. Then, a pixel region having a value equal to or greater than the threshold value is removed as the background G3 far from the imaging point P.
In Fig. 11, an edge G2 represents a portion undergoing active motion, and is eliminated from the foreground G1. The edge detection may be performed by, for example, threshold processing on a calculated gradient.
In this embodiment, the ROIs 1 and 2 shown in Fig. 4 are specified as the foreground through the separation process of step S221.
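A simplified sketch of such a depth-threshold separation with gradient-based edge removal is shown below for illustration; the threshold values and the use of a gradient magnitude are assumptions, and the actual filter of the embodiment is not limited to this form.

```python
import numpy as np

def separate_foreground(depth, depth_threshold, grad_threshold):
    """Return a boolean mask of foreground (G1) pixels of a depth image d20.

    Pixels whose distance from the imaging point P is >= depth_threshold are
    removed as the background (G3); pixels whose depth gradient magnitude
    exceeds grad_threshold are treated as edges (G2) and also removed.
    """
    foreground = depth < depth_threshold
    gy, gx = np.gradient(depth.astype(float))
    edges = np.hypot(gx, gy) > grad_threshold
    return foreground & ~edges
```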
Next, in Fig. 10, the processing unit 11 creates a histogram (an amount of features of the pixels) of the distances indicated by the depth information, with the ROIs 1 and 2 (Fig. 4) as targets (step S222) . The processing unit 11 then calculates, based on the histogram created in step S222, an average value of the distances of all the pixels in each of the ROIs 1 and 2 (the distances indicated by the depth information) as the distance to each of the ROIs 1 and 2 (step S223) . In this case, when the value of a target pixel differs from the other values in the distribution of the created histogram by a threshold value or more, for example, the processing unit 11 eliminates that value as an outlier before calculating the average value.
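The per-ROI distance estimation of steps S222 and S223 might be sketched as follows. The outlier criterion shown (deviation from the median of the ROI's depth distribution by a threshold or more) is only one possible reading of the rejection rule described above.

```python
import numpy as np

def roi_distance(depth, roi, outlier_threshold):
    """Average distance d to an ROI, excluding outlier depth values.

    depth: depth image d20 (distance per pixel from the imaging point P).
    roi: (top, bottom, left, right) pixel bounds of the ROI.
    """
    top, bottom, left, right = roi
    values = depth[top:bottom, left:right].astype(float).ravel()
    # Reject values that differ from the bulk of the distribution by the
    # threshold or more, then average the remaining distances.
    center = np.median(values)
    kept = values[np.abs(values - center) < outlier_threshold]
    return kept.mean()
```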
In this embodiment, the average value calculated in step S223 is set as the distance d (Fig. 6) from the imaging point P to the artery part in each ROI. As a result, the size (Lh, Lv) per pixel is acquired from the equation (2) . Further, the distance between artery parts is calculated in step S23 in Fig. 9. That is, the distance L between the two ROIs 1 and 2 (Fig. 5) , as the distance between the artery parts, is calculated from the pixel size (Lh, Lv) acquired from the equation (2) . In Fig. 5, for example, when the distance L spans ten pixels in the vertical direction, the value of "L" is given by Lv×10.
Consequently, in step S14 in Fig. 8, the processing unit 11 substitutes the value of "L" calculated in step S23 in Fig. 9 into PWV=L/D shown in the equation (3) to calculate the value of the PWV as the indication information d40.
In step S14 in Fig. 8, the processing unit 11 may calculate the PWV shown in the equation (3) for the time series signals f (t) and g (t) of preset cycles (e.g., five cycles, ten cycles, or the like) . In this case, the average value, the maximum value, or the minimum value of the PWV, for example, may also be used as the indication information d40. Even if the PWV cannot be properly calculated from the time series signals f (t) and g (t) at a certain timing, therefore, adequate indication information d40 can still be obtained by using the average value or the like of the PWV acquired from the time series signals f (t) and g (t) over the aforementioned cycles.
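Purely as an illustration, PWV values computed cycle by cycle could be aggregated in the following way; the function name and the treatment of cycles where the PWV could not be computed are assumptions.

```python
import numpy as np

def aggregate_pwv(pwv_per_cycle, mode="mean"):
    """Aggregate PWV values computed over preset cycles (e.g. five or ten).

    Returns the average, maximum, or minimum value as the indication information d40.
    """
    values = np.asarray(pwv_per_cycle, dtype=float)
    values = values[np.isfinite(values)]  # skip cycles where the PWV was not obtained
    if mode == "max":
        return values.max()
    if mode == "min":
        return values.min()
    return values.mean()
```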
According to the embodiment, as described above, the distance d to an artery part within each ROI from the imaging point P is acquired based on the depth image d20, and the actual distance L or the like between the ROIs 1 and 2 as the detection targets is calculated based on this distance d. In this manner, the distance d to an artery part under a skin, which cannot be directly observed, is acquired at the time of acquiring the time series signals f (t) and g (t) based on the image of the artery part as the detection target. Accordingly, the time series signals of the pulse waves to be detected can be made to adequately reflect the actual pulse waves. According to the related art that acquires time series signals of pulse waves based on a change in the color of a skin surface, the pulse waves of blood capillaries near the skin surface are measured. Thus, an indication such as the PWV cannot be obtained accurately. In contrast, the present embodiment, which accurately acquires an indication relating to a blood flow by obtaining the pulse waves of an artery, can reliably identify an artery part by acquiring the distance d in the above-described manner.
Unlike information reflecting a change in a blood  flow flowing through a blood capillary of a subject, the indication information d40 is calculated to reflect a change in a blood flow flowing through an artery vessel. This can improve the reliability of the indication information.
Moreover, the distance between artery parts (the distance L between the ROIs 1 and 2 in Fig. 5) is acquired from the depth image d20, thus eliminating the need for an operation of inputting the distance L. This also avoids errors in the input value, so that correct indication information d40 can be obtained.
Further, the NIR image d10 and the depth image d20 with the same field of view are output from the ToF camera 20 to the processing unit 11 in synchronism with each other. Thus, the processing unit 11 can execute steps S21 to S23 in Fig. 9 in synchronism with the processing of the NIR image d10 to obtain the indication information d40.
Although the distance L between the two ROIs 1 and 2 (Fig. 5) has been described as the distance between artery parts by way of example, the distance may be changed as needed. For example, Fig. 12 exemplifies an aspect in which, with an artery part 71 located in the ROIs 1 and 2 shown in Fig. 4, the locations of the artery part 71 are estimated and a distance between the artery parts 71 is calculated according to the result of the estimation. In Fig. 12, the processing unit 11 registers patterns of distances to artery parts around the neck of a subject in advance, and compares the depth information included in the depth image d20 with the registered patterns of distance information to estimate the location of the artery part 71. Then, the processing unit 11 calculates the entire length of the artery part 71 along the estimated locations of the artery parts 71 as a distance L1. As a result, a more accurate distance L1 between the artery parts 71 may be obtained, and more accurate indication information d40 may be calculated. As the process of estimating the location of an artery part, for example, shapes of artery parts may be patterned in advance, and the location of an artery part included in the depth image d20 may be estimated from the patterns.
The aforementioned artery part as a detection target is not limited to the part of the subject 100 shown in Fig. 4. For example, Fig. 13 exemplifies a case where a radial artery within an arm range 81 of a subject is a detection target. Even in this case, indication information d40 reflecting a change in the blood flow of a radial artery in an arm is acquired.
The device-related embodiment and the method-related embodiment are based on the same concept, so that the technical advantages brought about by the device-related embodiment are the same as those brought about by the method-related embodiment. For the specific principle, reference should be made to the foregoing description of the embodiment of the device, and the details thereof will not be repeated herein.
It will be appreciated by those skilled in the art that the foregoing embodiments, and all or part of the processes implementing equivalent modified examples made within the scope of the claims of the present invention, will also fall within the scope of the present invention.

Claims (23)

  1. A method for obtaining biological information, comprising:
    acquiring a first image of an artery part of a subject imaged in a contactless manner;
    detecting individual pulse waves corresponding to a plurality of positions in the artery part based on the first image;
    calculating an amount of shifting between the detected individual pulse waves; and
    calculating indication information on an artery flow of the subject, as the biological information, by using the amount of shifting.
  2. The method according to claim 1, further comprising:
    acquiring a second image including depth information of the artery part; and
    calculating a distance between artery parts associated with the amount of shifting based on the depth information included in the second image,
    wherein the calculating the indication information calculates the indication information by using the calculated distance between the artery parts and the amount of shifting.
  3. The method according to claim 2, wherein the calculating the distance between the artery parts includes separating a foreground and a background from the second image, calculating  distances to the artery parts as a detection target based on an amount of features of pixels in the image on the foreground, and calculating the distance between the artery parts by using the calculated distances.
  4. The method according to claim 2 or 3, wherein the calculating the distance between the artery parts includes estimating positions where the artery parts are located, based on the depth information, and calculating the distance between the artery parts by using a result of the estimation.
  5. The method according to any one of claims 1 to 4, wherein the detecting the pulse waves includes detecting the pulse waves by upsampling time series signals corresponding to the positions where the artery parts included in the first image are located.
  6. The method according to any one of claims 1 to 5, further comprising outputting the indication information.
  7. The method according to any one of claims 1 to 6, wherein the indication information is a pulse wave velocity (PWV) .
  8. The method according to any one of claims 1 to 7, wherein the artery part is a part of a carotid artery.
  9. The method according to any one of claims 1 to 8, wherein the first image is acquired by using a near-infrared camera.
  10. The method according to any one of claims 2 to 4, wherein the second image is acquired by using a depth sensor.
  11. A device for obtaining biological information, comprising:
    a first acquisition unit for acquiring a first image of an artery part of a subject imaged in a contactless manner;
    a detection unit for detecting individual pulse waves corresponding to a plurality of positions in the artery part based on the first image;
    a shift amount calculating unit for calculating an amount of shifting between the detected individual pulse waves; and
    an indication information calculating unit for calculating indication information on an artery flow of the subject, as the biological information, by using the amount of shifting.
  12. The device according to claim 11, further comprising:
    a second acquisition unit for acquiring a second image including depth information of the artery part; and
    a calculation unit for calculating a distance between artery parts associated with the amount of shifting based on the depth information included in the second image,
    wherein the indication information calculating unit is configured to calculate the indication information by using the calculated distance between the artery parts and the amount of shifting.
  13. The device according to claim 12, wherein the calculation unit is configured to separate a foreground and a background from the second image, calculate distances to the artery parts as a detection target based on an amount of features of pixels in the image on the foreground, and calculate the distance between the artery parts by using the calculated distances.
  14. The device according to claim 12 or 13, wherein the calculation unit is configured to estimate positions where the artery parts as detection targets are located, based on the depth information, and calculate the distance between the artery parts by using a result of the estimation.
  15. The device according to any one of claims 11 to 14, wherein the detection unit is configured to detect the pulse waves by upsampling time series signals corresponding to the positions where the artery parts included in the first image are located.
  16. The device according to any one of claims 11 to 15, further comprising an output unit for outputting the indication information.
  17. The device according to any one of claims 11 to 16, wherein the indication information is a pulse wave velocity (PWV) .
  18. The device according to any one of claims 11 to 17, wherein the artery part is a part of a carotid artery.
  19. The device according to any one of claims 11 to 18, wherein the first image is acquired by using a near-infrared camera.
  20. The device according to any one of claims 12 to 14, wherein the second image is acquired by using a depth sensor.
  21. A device comprising:
    the device as recited in any one of claims 11 to 20;
    a near-infrared camera configured to image the first image; and
    a depth sensor used to obtain the indication information, and obtain depth information of the plurality of artery parts.
  22. A computer readable storage medium recording a program for allowing a computer to execute the method as recited in any one of claims 1 to 10.
  23. A computer program for allowing a computer to execute the method as recited in any one of claims 1 to 10.
