CN114663865A - Physiological state detection method and device, electronic equipment and storage medium - Google Patents

Physiological state detection method and device, electronic equipment and storage medium

Info

Publication number
CN114663865A
Authority
CN
China
Prior art keywords
region
interest
physiological state
area
brightness
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210346417.8A
Other languages
Chinese (zh)
Inventor
何裕康
高勇
毛宁元
许亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Lingang Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority to CN202210346417.8A priority Critical patent/CN114663865A/en
Publication of CN114663865A publication Critical patent/CN114663865A/en
Priority to PCT/CN2022/113755 priority patent/WO2023184832A1/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The present disclosure provides a physiological state detection method, apparatus, electronic device and storage medium, wherein the method comprises: acquiring a video stream collected by a camera device; extracting a face image of a target object from multiple frame images of the video stream; extracting at least one region of interest in each frame of the face image, wherein the region of interest comprises at least one connected face smoothing sub-region; determining, according to the area and the pixel brightness information of each region of interest respectively, a first contribution degree and a second contribution degree of that region of interest to a physiological state detection result; fusing image information of the at least one region of interest based on the first contribution degree and the second contribution degree; and extracting physiological state information based on the fused image information to obtain a physiological state detection result of the target object. The method and the device realize physiological state detection purely through image processing, can measure in real time anytime and anywhere, and therefore have good practicability.

Description

Physiological state detection method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for detecting a physiological state, an electronic device, and a storage medium.
Background
Accurate physiological state data is the basis for analyzing changes in the human body, so physiological state detection is of great significance and is widely used in various application scenarios.
Taking a safe-driving scenario as an example, effective physiological state detection helps to understand the physiological state of occupants in the vehicle, thereby providing auxiliary decisions for safe driving. In the related art, physiological state detection mainly relies on dedicated detection devices such as sphygmomanometers, heart rate meters, and oximeters; in addition, physiological state measurement can also be achieved with wearable devices that integrate relevant sensing components, such as smart watches and smart bracelets.
It can be seen that the above detection schemes require contact measurement by means of dedicated instruments, which is inconvenient and therefore cannot well meet the requirements of scenarios such as safe driving.
Disclosure of Invention
The embodiment of the disclosure at least provides a physiological state detection method and device, electronic equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a physiological state detection method, including:
acquiring a video stream acquired by camera equipment;
extracting a face image of a target object from a plurality of frame images of the video stream;
extracting at least one region of interest in each frame of the facial image, wherein the region of interest comprises at least one connected face smoothing sub-region;
determining a first contribution degree of each region of interest to a physiological state detection result according to the area of each region of interest;
determining a second contribution degree of each region of interest to the physiological state detection result according to the pixel brightness information of each region of interest;
fusing image information of at least one region of interest based on the first contribution degree and the second contribution degree;
and extracting physiological state information based on the image information obtained by fusion to obtain a physiological state detection result of the target object.
In a possible embodiment, the first contribution comprises an area weight and the second contribution comprises a brightness weight;
the fusing the image information of at least one region of interest based on the first contribution degree and the second contribution degree includes:
determining a total weight value corresponding to at least one region of interest respectively based on the product of the area weight and the brightness weight corresponding to the at least one region of interest respectively;
and performing weighted fusion on the image information of the at least one region of interest based on the corresponding total weight value, and determining the image information obtained by fusion.
In one possible embodiment, the area weight is determined as follows:
summing the areas of the regions of interest to obtain an area sum value;
and calculating the ratio of the area of the region of interest to the area sum value for each region of interest to obtain the area ratio of the region of interest as the area weight corresponding to the region of interest.
In a possible implementation, in case the pixel luminance information comprises an average luminance value, the luminance weight is determined as follows:
summing the average brightness values of the regions of interest to obtain a brightness sum value;
and calculating the ratio of the average brightness value of the region of interest to the brightness sum value for each region of interest to obtain the brightness ratio of the region of interest as the brightness weight corresponding to the region of interest.
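Taken together, the area-weight and brightness-weight normalizations above, plus the product-based total weight from the fusion step, can be sketched as follows (a minimal illustration in Python/NumPy; all function and variable names here are ours, not from the patent):

```python
import numpy as np

def region_weights(areas, mean_lums):
    """Normalize ROI areas and mean luminances into per-ROI weights,
    then combine them by elementwise product to get total weights."""
    areas = np.asarray(areas, dtype=float)
    mean_lums = np.asarray(mean_lums, dtype=float)
    area_w = areas / areas.sum()          # area ratio -> area weight
    lum_w = mean_lums / mean_lums.sum()   # brightness ratio -> brightness weight
    return area_w, lum_w, area_w * lum_w  # total weight per ROI

def fuse(signals, total_w):
    """Weighted fusion of per-ROI luminance signals (one row per ROI)."""
    total_w = np.asarray(total_w, dtype=float)
    total_w = total_w / total_w.sum()     # renormalize the weight products
    return total_w @ np.asarray(signals, dtype=float)
```

With two ROIs of areas 100 and 300 and mean luminances 50 and 150, the area weights are (0.25, 0.75), the brightness weights are (0.25, 0.75), and the total weights (0.0625, 0.5625) renormalize to (0.1, 0.9) before fusion.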
In a possible implementation manner, the determining a first contribution degree of each of the regions of interest to the physiological state detection result according to the area of each of the regions of interest includes:
determining that the first contribution degree of the region of interest is 0 under the condition that the area of the region of interest is smaller than a preset area threshold value; and/or
The determining a second contribution degree of each region of interest to the physiological state detection result according to the pixel brightness information of each region of interest includes:
determining an average brightness value of each region of interest according to the pixel brightness information of that region of interest;
determining that the second contribution degree of the region of interest is 0 in case that the average brightness value is smaller than a first preset brightness threshold, or determining that the second contribution degree of the region of interest is 0 in case that the average brightness value is greater than a second preset brightness threshold.
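The zeroing rules above might look like this in code (the threshold values are illustrative placeholders, not values from the patent, and the raw contribution degrees are taken as area and mean luminance purely for brevity):

```python
def gate_contributions(area, mean_lum, *, min_area=400.0,
                       min_lum=40.0, max_lum=220.0):
    """Return (first, second) contribution degrees for one ROI,
    zeroed out when the ROI is too small, too dark, or overexposed."""
    first = 0.0 if area < min_area else float(area)
    second = float(mean_lum) if min_lum <= mean_lum <= max_lum else 0.0
    return first, second
```

A gated ROI simply drops out of the weighted fusion, since a zero contribution degree yields a zero fusion weight.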
In a possible implementation, the fusing the image information of the at least one region of interest based on the first contribution degree and the second contribution degree includes:
for each region of interest, performing time domain processing on the brightness values of the region of interest in the multi-frame images, and determining a time domain luminance signal corresponding to the region of interest;
fusing the time domain luminance signals corresponding to the plurality of regions of interest based on the first contribution degree and the second contribution degree to obtain fused time domain luminance signals;
the extracting of the physiological state information based on the image information obtained by the fusion to obtain the physiological state detection result of the target object includes:
and performing frequency domain processing on the fused time domain brightness signal, and determining the physiological state value of the target object based on the peak value of the frequency domain signal obtained by the frequency domain processing.
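As a sketch of the frequency-domain step, the fused time-domain luminance signal can be detrended, transformed with an FFT, and the spectral peak located within a physiologically plausible band (the band limits and all names are our assumptions, not values from the patent):

```python
import numpy as np

def dominant_rate_bpm(signal, fps, lo_bpm=45.0, hi_bpm=180.0):
    """Estimate a physiological rate from a fused time-domain luminance
    signal: remove the DC component, take a real FFT, and return the
    peak frequency within [lo_bpm, hi_bpm], converted to beats/minute."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                              # detrend (DC removal)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)  # frequency axis in Hz
    band = (freqs >= lo_bpm / 60.0) & (freqs <= hi_bpm / 60.0)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0                         # Hz -> per-minute rate
```

For example, a pure 1.2 Hz luminance oscillation sampled at 30 fps yields an estimated rate of 72 beats per minute.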
In a possible implementation manner, in a case that the image information obtained by fusion includes brightness values corresponding to a plurality of different color channels, the performing physiological state information extraction based on the image information obtained by fusion to obtain a physiological state detection result of the target object includes:
performing time domain processing on the brightness values corresponding to the plurality of different color channels in the fused image information, and determining a time domain luminance signal corresponding to each color channel;
performing principal component analysis on the time domain brightness signals corresponding to the plurality of different color channels to obtain time domain signals representing the physiological state of the target object;
performing frequency domain processing on the time domain signal representing the physiological state of the target object to obtain a frequency domain signal representing the physiological state of the target object;
determining a physiological state value of the target subject based on the peak value of the frequency domain signal.
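The principal component analysis step over the per-channel signals can be illustrated as follows: center each channel's time-domain luminance signal, take the leading eigenvector of the channel covariance matrix, and project onto it (a minimal sketch of PCA; the patent does not prescribe this exact formulation):

```python
import numpy as np

def pca_pulse_signal(channel_signals):
    """Given time-domain luminance signals for several color channels
    (rows = channels, columns = frames), return the first principal
    component as the candidate physiological time-domain signal."""
    X = np.asarray(channel_signals, dtype=float)
    X = X - X.mean(axis=1, keepdims=True)   # center each channel
    cov = X @ X.T / X.shape[1]              # channel covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    w = eigvecs[:, -1]                      # leading eigenvector
    return w @ X                            # project channels onto it
```

The returned signal can then be passed to the frequency-domain processing described above to locate the spectral peak.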
In one possible embodiment, the method further comprises:
extracting facial feature points of each frame of facial image;
and determining at least one connected face smoothing sub-region in each frame of the face image based on the extracted face feature points.
In a possible embodiment, the first contribution degree is proportional to the area of the corresponding region of interest; the second contribution degree is proportional to the average pixel brightness value of the corresponding region of interest, and/or the second contribution degree is inversely proportional to the variance of the pixel brightness values of the corresponding region of interest.
In a second aspect, an embodiment of the present disclosure further provides a physiological status detection apparatus, including:
the acquisition module is used for acquiring a video stream acquired by the camera equipment;
the first extraction module is used for extracting a face image of a target object from a multi-frame image of the video stream;
the second extraction module is used for extracting at least one region of interest in each frame of the face image, and the region of interest comprises at least one connected face smoothing sub-region;
the determination module is used for determining a first contribution degree of each region of interest to a physiological state detection result according to the area of each region of interest, and determining a second contribution degree of each region of interest to the physiological state detection result according to the pixel brightness information of each region of interest;
a fusion module for fusing image information of at least one region of interest based on the first contribution degree and the second contribution degree;
and the detection module is used for extracting physiological state information based on the image information obtained by fusion to obtain a physiological state detection result of the target object.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the physiological state detection method according to the first aspect and any of its various embodiments.
In a fourth aspect, the disclosed embodiments also provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the physiological state detection method according to the first aspect and any one of its various embodiments.
According to the physiological state detection method, apparatus, electronic device and storage medium provided by the embodiments of the present disclosure, when a video stream is obtained, a face image of the target object can first be extracted from the video stream, and then at least one region of interest in the face image is extracted. In this way, after the first contribution degree and the second contribution degree of each region of interest to the physiological state detection result are respectively determined according to the area and the pixel brightness information of each region of interest, the image information of the at least one region of interest can be fused, and finally physiological state information is extracted from the fused image information to obtain the physiological state detection result. Compared with the related art, in which contact measurement with a dedicated instrument causes inconvenience, the embodiments of the present disclosure realize physiological state detection purely through image processing and can measure in real time anytime and anywhere, giving good practicability; moreover, during detection, the fusion of the image information of the regions of interest combines region area with region brightness, which can further improve measurement accuracy.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings here are incorporated into and form a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive other related drawings from these drawings without inventive effort.
Fig. 1 illustrates a flow chart of a physiological state detection method provided by an embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating a specific ROI extraction method in a physiological state detection method provided by an embodiment of the disclosure;
fig. 3 shows a schematic diagram of a physiological state detection device provided by an embodiment of the present disclosure;
fig. 4 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an associative relationship, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Research has found that, in the related art, physiological state detection mainly relies on dedicated detection devices such as sphygmomanometers, heart rate meters, and oximeters; in addition, physiological state measurement can also be achieved with wearable devices that integrate relevant sensing components, such as smart watches and smart bracelets.
It can be seen that the above detection schemes require contact measurement by means of dedicated instruments, which is inconvenient and therefore cannot well meet the requirements of scenarios such as safe driving.
In order to solve the above problems, a contactless detection scheme has been provided in the related art, namely remote photoplethysmography (rPPG), which can complete detection using only a camera-equipped device such as the mobile phones widely used today; it requires no extra hardware cost and is very convenient to use. However, the current bottleneck of the rPPG method is that its detection accuracy is inferior to that of dedicated detection devices, and it is easily influenced by external light. In addition, physiological feature detection using the rPPG method requires the detected object to remain still for a period of time, so it can only be used for active detection.
Based on this research, the present disclosure provides at least one physiological state detection scheme that performs weighting based on region area and region brightness, so that the rPPG signal is extracted in a weighted manner and contains more effective pixel samples, thereby effectively improving detection accuracy.
To facilitate understanding of the present embodiment, first, a physiological status detection method disclosed in an embodiment of the present disclosure is described in detail, where an execution subject of the physiological status detection method provided in the embodiment of the present disclosure is generally an electronic device with certain computing capability, and the electronic device includes, for example: a terminal device, which may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a vehicle-mounted device, a wearable device, or a server or other processing device. In some possible implementations, the physiological state detection method may be implemented by a processor calling computer readable instructions stored in a memory.
Referring to fig. 1, a flowchart of a physiological status detection method provided by an embodiment of the present disclosure is shown, where the method includes steps S101 to S107, where:
S101: acquiring a video stream collected by a camera device;
S102: extracting a face image of a target object from a plurality of frame images of the video stream;
S103: extracting at least one region of interest in each frame of the face image, wherein the region of interest comprises at least one connected face smoothing sub-region;
S104: determining a first contribution degree of each region of interest to a physiological state detection result according to the area of each region of interest;
S105: determining a second contribution degree of each region of interest to the physiological state detection result according to the pixel brightness information of each region of interest;
S106: fusing image information of the at least one region of interest based on the first contribution degree and the second contribution degree;
S107: extracting physiological state information based on the fused image information to obtain a physiological state detection result of the target object.
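The step sequence above can be summarized as a pipeline skeleton in which each stage is a pluggable callable (everything here is an illustrative stand-in, not an actual implementation of the patent's method):

```python
def detect_physiological_state(frames, extract_face, extract_rois,
                               area_weight, brightness_weight,
                               fuse_rois, extract_state):
    """Skeleton of steps S101-S107; each callable argument is a
    placeholder for the corresponding stage described in the text."""
    faces = [extract_face(f) for f in frames]                # S102
    rois_per_frame = [extract_rois(face) for face in faces]  # S103
    fused = []
    for rois in rois_per_frame:
        w1 = [area_weight(r) for r in rois]                  # S104: first contribution
        w2 = [brightness_weight(r) for r in rois]            # S105: second contribution
        fused.append(fuse_rois(rois, w1, w2))                # S106: weighted fusion
    return extract_state(fused)                              # S107: state extraction
```

The stage boundaries mirror the step numbering, so each callable can be replaced independently, e.g. to swap in a different ROI extractor.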
In order to facilitate understanding of the physiological state detection method provided by the embodiments of the present disclosure, its application scenarios are first briefly described below. The method can be applied in the automotive field: the embodiments of the present disclosure can detect the physiological state of a human object in the vehicle cabin. With the physiological state detection result of a target object in the cabin, whether the target object has an abnormal physical condition can be known in time, and a reminder or assistance can be provided promptly when an abnormality occurs, providing fuller practical support for safe driving.
Besides, the embodiments of the present disclosure may also be applied to any other related fields that require physiological status detection, such as medical treatment, home life, etc., and are not limited specifically here. In view of the wide application of the automotive field, the following description will be given by way of example.
The video stream in the embodiment of the present disclosure may be acquired by a camera device (such as a fixed camera in a vehicle cabin), may also be acquired by a camera of a user terminal, and may also be acquired by other manners, which is not limited herein.
In order to enable detection of a physiological state of a specific target object, the mounting position of the camera may be preset based on the specific target object, for example, in order to enable detection of a physiological state of a driver in a vehicle, the camera may be mounted at a position where the imaging range covers the driving area, such as the inside of the a-pillar of the vehicle, on a console, or the position of a steering wheel; for another example, in order to detect the physiological state of the occupant of the vehicle with respect to various riding attributes including the driver and the passenger, the camera may be mounted in a position where the imaging range of the interior mirror, the roof trim, the reading lamp, and the like can cover a plurality of seating areas in the vehicle cabin.
In practical applications, the video stream of the driving area may be acquired by using an in-vehicle image acquisition device included in a Driver Monitoring System (DMS), or the video stream of the riding area may be acquired by using an in-vehicle image acquisition device included in an Occupant Monitoring System (OMS).
Considering that the skin color and brightness change caused by the flow of facial blood vessels can reflect the physiological states of heartbeat, respiration and the like, the method can firstly carry out face detection on multi-frame images in a video stream, extract multi-frame face images of a target object in a vehicle cabin, and then realize the extraction of physiological state information aiming at the face images.
The target object may be an object of a particular ride attribute, such as a driver, a passenger in a front passenger seat; alternatively, the target object may be an object whose identity is registered in advance using facial information, such as an owner of a vehicle registered through an application; alternatively, the target object may be any occupant in the vehicle, and at least one occupant may be located by performing face detection on the video stream in the vehicle cabin, and the detected occupant or occupants may be set as the target object.
In face detection, the faces of a plurality of objects may appear in one frame image. In some scenarios, physiological state detection may be performed for the occupant at a certain riding position, i.e., the occupant at that riding position is the target object. In order to achieve physiological state detection for the target object in the vehicle cabin, the multi-frame face images of the target object may be determined from the detected face images based on the face detection results of the multi-frame images and a specified riding position indicating the position of the target object to be measured.
The relative position, within the vehicle interior, of the camera used for collecting the video stream is fixed, so the images collected by the camera can be divided into seat areas according to the camera position. For example, for a 5-seat private car, an image can be divided into: an image area corresponding to the driver seat, an image area corresponding to the front passenger seat, an image area corresponding to the rear-left seat, an image area corresponding to the rear-right seat, and an image area corresponding to the rear-middle seat. According to the position of each occupant's face in the image and the coordinate range of each image area, the image area in which each occupant's face falls can be determined, and the occupant at the designated riding position is determined to be the target object.
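The seat-area lookup described above can be sketched as a simple bounding-box test (the seat names and coordinate ranges below are illustrative calibration values for a fixed in-cabin camera, not values from the patent):

```python
def seat_for_face(face_center, seat_regions):
    """Map a detected face's image position to a seat area.

    seat_regions: dict of seat name -> (x_min, y_min, x_max, y_max),
    calibrated once for the fixed camera position."""
    x, y = face_center
    for seat, (x0, y0, x1, y1) in seat_regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return seat
    return None  # face falls outside all calibrated seat areas
```

An occupant whose face center falls in the region matching the designated riding position is then taken as the target object.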
In practical applications, the OMS generally captures an image of the whole cabin and may capture several persons; a seat such as the front passenger seat or a specific rear seat can be manually selected to designate the target object to be measured, in which case the embodiment of the present disclosure measures the face of the person in the corresponding region of the image. The DMS captures the driver area, and when the captured object includes only one driver, no object needs to be designated.
It should be noted that the physiological states such as heart rate, respiratory rate, blood oxygen, blood pressure, etc. often need to be monitored for a certain period of time for evaluation, and therefore, in the embodiment of the present disclosure, the extraction of the physiological state information is implemented by using the image change information corresponding to the multiple frames of face images in the video stream lasting for a certain period of time, so that the extracted physiological state detection result further conforms to the needs of the actual scene.
In the process of analyzing image change information based on the face image, detection may fail to continue because a Region of Interest (ROI) is lost, for example due to rotation or occlusion of the face, or due to external light.
In addition, considering that there is an angle between the face and the optical axis of the camera, different regions of interest occupy different pixel areas in the camera image and have different pixel brightness; therefore, the pixel area and pixel brightness of each region of interest in the picture can be used for weighting, to obtain a signal quantity closer to the actual situation and improve detection accuracy.
The image information in the region of interest can largely characterize blood flow changes. In practical applications, the region of interest can be a connected smooth face sub-region; such a smooth sub-region has a relatively uniform reflectivity, so it can more effectively capture the skin color and brightness changes produced by the flow of facial blood vessels, enabling more accurate physiological state detection.
In the case of determining the regions of interest of the face image, the physiological status detection method provided by the embodiment of the present disclosure may first determine a first contribution degree of each region of interest to the physiological status detection result and a second contribution degree of each region of interest to the physiological status detection result, then fuse image information of at least one region of interest based on the first contribution degree and the second contribution degree, and finally extract physiological status information based on the fused image information, where the extracted physiological status detection result may be a detection result including at least one of heart rate, respiratory rate, blood oxygen, blood pressure, and the like.
Here, the fusion process may be weighted fusion based on the first contribution degree and the second contribution degree, that is, for a region of interest, the higher the corresponding first contribution degree and the second contribution degree is, the stronger the characterization capability of the fused image information corresponding to the region of interest is.
Alternatively, the fusion process may be performed based on a comparison of the first contribution degree and the second contribution degree. Specifically, if the first contribution degree of a certain region of interest is much greater than its second contribution degree, the physiological state information extraction result corresponding to the region of interest can be determined based on the area of the region of interest, and the influence of brightness is ignored; if the first contribution degree of a certain region of interest is much smaller than its second contribution degree, the extraction result can be determined based on the brightness information of the region of interest, and the influence of the area is ignored.
The first contribution degree can be determined based on the area of the corresponding region of interest, with its value proportional to that area; that is, a region of interest with a larger area provides, to a certain extent, a higher first contribution degree for physiological state detection, while a region of interest with a smaller area provides a lower one. This mainly considers that as the area of the region increases, the amount of effective information that can be extracted also increases, and more effective information improves the accuracy of physiological state detection to a certain extent.
In addition, the second contribution degree may be determined based on the pixel brightness information of the corresponding region of interest, where the second contribution degree is proportional to the average pixel brightness value of that region; that is, a region of interest with stronger pixel brightness provides, to some extent, a higher second contribution degree for physiological state detection, whereas a region of interest with weaker pixel brightness provides a lower second contribution degree. This mainly considers that, as pixel brightness increases, the corresponding image quality also improves, and better image quality can likewise improve the accuracy of physiological state detection to some extent. In addition, the second contribution degree may be inversely proportional to the variance of the pixel brightness values of the corresponding region of interest; in this case, the corresponding second contribution degree may be determined based on the variance value, which is not described herein again.
It should be noted that, in the process of determining the second contribution degree based on the pixel brightness information, the extreme cases of overexposure or excessive darkness that may occur need to be considered. Under such extreme conditions, the accuracy of physiological state detection may be affected. Therefore, in practical applications, overexposed or too-dark regions of interest may be filtered out based on pixel brightness, and the second contribution degree may then be determined only for the regions of interest meeting the brightness requirement, so as to ensure the accuracy of the final physiological state detection result.
Considering the critical role that the determination of the region of interest plays in physiological state detection, the determination process of the region of interest is described in detail below. In some alternative implementations, the region of interest may be determined through the following steps one and two:
step one, extracting facial feature points of each frame of facial image;
and step two, determining at least one connected face smooth sub-area in each frame of face image based on the extracted face feature points.
Here, first, facial feature point extraction may be performed on the face image, and then a face smoothing sub-region in the face image may be determined based on the extracted facial feature points and be used as the region of interest.
The process of extracting facial feature points may be implemented by using a face key point detection algorithm. For example, facial feature points may be preset with respect to a standard face image, where the standard face may be a face image that contains the complete facial features and directly faces the camera. Thus, in the process of extracting facial feature points from the face image of the target object extracted from each frame of image, each facial feature point may be determined by comparing the extracted face image of the target object against the standard face image. For example, the feature points may include points with distinctive facial characteristics, such as eyebrow feature points, nose bridge feature points, nose tip feature points, cheek feature points, mouth corner feature points, and the like.
In the disclosed embodiments, one or more face smoothing sub-regions in the face image may be determined based on the determined coordinate information of the facial feature points.
The face smoothing sub-region may be a rectangular region, and may also be another region having a connected shape, which is not specifically limited in this disclosure, and the following description mostly takes the rectangular region as an example.
In practical applications, the facial smoothing sub-region may be a forehead smoothing region determined based on the eyebrow feature points, a left upper cheek smoothing region and a right upper cheek smoothing region determined based on the cheek feature points, the nose bridge feature points, and the nose tip feature points, and a left lower cheek smoothing region and a right lower cheek smoothing region determined based on the cheek feature points, the nose tip feature points, and the mouth corner feature points.
In the case where no region occlusion occurs, the above five regions may be simultaneously extracted on one frame of face image, and in the case where region occlusion occurs, a region that can be actually extracted from one frame of face image may be determined in accordance with the actual situation.
Fig. 2 is a schematic diagram of the facial feature points that can be extracted from a face image captured by a camera, with a total of 106 feature points. Here, based on the coordinate information of the facial feature points, 5 facial smoothing sub-regions may be screened out, as shown in fig. 2: region 1 is the forehead region, and a rectangular ROI of region 1 may be constructed from the two feature points at the outer ends of the two eyebrows; region 2 is the left upper cheek region, and a rectangular ROI of region 2 may be constructed from the positions of a left face-edge feature point, a nose bridge feature point, and a left eye feature point; region 3 is the right upper cheek region, and a rectangular ROI of region 3 may be constructed from the positions of a right face-edge feature point, a nose bridge feature point, and a right eye feature point; region 4 is the left lower cheek region, and a rectangular ROI of region 4 may be constructed from the positions of a left face-edge feature point, a left nose wing feature point, and a left mouth corner feature point; region 5 is the right lower cheek region, and a rectangular ROI of region 5 may be constructed from the positions of a right face-edge feature point, a right nose wing feature point, and a right mouth corner feature point.
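As an illustration of how a rectangular ROI can be built from landmark coordinates, the following sketch constructs a forehead region from the two outer eyebrow points; the coordinates and the height ratio are hypothetical, since the 106-point index layout depends on the detector used.

```python
def forehead_roi(left_brow, right_brow, height_ratio=0.6):
    """Rectangular forehead ROI (x0, y0, x1, y1) spanning the two outer
    eyebrow points horizontally and extending upward by a fraction of
    the eyebrow span (height_ratio is an illustrative choice)."""
    x0, x1 = left_brow[0], right_brow[0]
    span = x1 - x0
    y1 = min(left_brow[1], right_brow[1])       # bottom edge: just above the brows
    y0 = max(0, int(y1 - height_ratio * span))  # top edge: clipped to the image
    return (x0, y0, x1, y1)

# Hypothetical landmark coordinates in pixels.
roi = forehead_roi((80, 60), (180, 60))
```

The other four cheek ROIs can be built the same way, taking the bounding rectangle of their respective face-edge, nose, eye, or mouth-corner points.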
When the regions of interest are extracted from each frame of face image, on one hand, a first contribution degree corresponding to each region of interest can be determined based on the area of each region of interest, and on the other hand, a second contribution degree corresponding to each region of interest can be determined based on the pixel brightness information of each region of interest.
In a case where the area of a region of interest is smaller than a preset area threshold, the first contribution degree of that region of interest may be determined to be 0; that is, for a small region of interest whose image characterization capability is limited, the corresponding first contribution degree need not be considered. In addition, in a case where the average brightness value of each region of interest is determined according to its pixel brightness information, the second contribution degree of a region of interest may be determined to be 0 when its average brightness value is smaller than a first preset brightness threshold, or when its average brightness value is greater than a second preset brightness threshold; that is, for a region of interest that may be over-dark or overexposed, the corresponding second contribution degree need not be considered.
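The gating described above can be sketched as follows; the area and brightness thresholds are illustrative placeholders, not values taken from the disclosure, and the raw area and mean brightness stand in for the un-normalized contribution values.

```python
def gated_contributions(area, mean_brightness,
                        area_min=64, brightness_min=40, brightness_max=220):
    """Return (first, second) contribution degrees, zeroing out ROIs
    that are too small (first) or too dark / overexposed (second).
    All thresholds are illustrative."""
    first = 0.0 if area < area_min else float(area)
    second = (0.0 if not (brightness_min <= mean_brightness <= brightness_max)
              else float(mean_brightness))
    return first, second
```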
Based on this, in a case where the area of a region of interest is within the normal range and the region of interest has normal brightness, the corresponding area weight and brightness weight may be determined respectively.
The area weight may be determined according to the following steps:
step one, carrying out summation operation on the areas of the interested areas to obtain the area sum value;
and step two, for each region of interest, calculating the ratio of the area of the region of interest to the area sum value to obtain the area proportion of the region of interest as the area weight corresponding to that region of interest.
Here, the area sum value may be obtained by summing the areas of the regions of interest; a larger area weight may be determined for a region of interest that accounts for a larger share of the area sum value, and conversely, a smaller area weight may be determined for a region of interest that accounts for a smaller share.
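Steps one and two above amount to a simple normalization; as a minimal sketch:

```python
def area_weights(areas):
    """Area weight of each ROI: its area divided by the sum of all
    ROI areas, so the weights sum to 1."""
    total = sum(areas)
    return [a / total for a in areas]
```

The brightness weights described next follow the same pattern, with average brightness values in place of areas.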
In addition, the brightness weight may be determined as follows:
step one, performing summation operation on the average brightness values of all interested areas to obtain brightness sum values;
and step two, for each region of interest, calculating the ratio of the average brightness value of the region of interest to the brightness sum value to obtain the brightness proportion of the region of interest as the brightness weight corresponding to that region of interest.
Here, the average brightness values of the regions of interest are summed to obtain the brightness sum value; a larger brightness weight may be determined for a region of interest whose average brightness value accounts for a larger share of the brightness sum value, and conversely, a smaller brightness weight may be determined for a region of interest whose average brightness value accounts for a smaller share.
The physiological state detection method provided by the embodiment of the disclosure can fuse the image information of each region of interest based on the area weight and the brightness weight to perform subsequent physiological state detection, wherein the relevant fusion process specifically includes the following steps:
step one, determining a total weight value corresponding to at least one region of interest based on the product of the area weight and the brightness weight corresponding to the at least one region of interest;
and step two, performing weighted fusion on the image information of the at least one region of interest based on the corresponding total weight values, and determining the image information obtained by fusion.
Here, the total weight value corresponding to each region of interest may be determined first, and then the fused image information may be determined by weighted summation under the condition that the image information of each region of interest is given with the corresponding total weight value.
For a region of interest with both a larger area weight and a larger brightness weight, the corresponding total weight value is also larger, which may provide more information support for the fused image information to some extent. For a region of interest with a larger area weight but a smaller brightness weight, or a smaller area weight but a larger brightness weight, the corresponding total weight value may be larger or smaller and needs to be determined based on the specific image analysis result. The image information obtained through weighted fusion can mine the effective pixel points on the face to the greatest extent, thereby providing more data support for subsequent physiological state detection, which is beneficial to improving the detection precision.
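Steps one and two of the fusion can be sketched as follows, assuming each ROI contributes an equal-length per-frame mean-brightness trace; renormalizing the weight products so they sum to 1 is an illustrative choice not spelled out in the text.

```python
import numpy as np

def fuse_roi_signals(signals, area_w, brightness_w):
    """Weighted fusion of per-ROI temporal brightness traces.

    signals      : list of equal-length 1-D arrays, one per ROI
    area_w       : area weight per ROI
    brightness_w : brightness weight per ROI
    The total weight of each ROI is the product of its two weights,
    renormalized to sum to 1 before the weighted sum.
    """
    w = np.asarray(area_w, dtype=float) * np.asarray(brightness_w, dtype=float)
    w /= w.sum()
    return np.sum(w[:, None] * np.stack(signals), axis=0)
```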
In the case where the first contribution degree and the second contribution degree of each region of interest to the physiological state detection result have been determined, the physiological state detection method provided by the embodiment of the present disclosure may first determine, through time domain processing, the time domain luminance signal corresponding to each region of interest, and then fuse the time domain luminance signals based on the first contribution degree and the second contribution degree of each region of interest, so that physiological state information can be extracted based on the fused time domain luminance signal.
In order to further improve the accuracy of physiological state detection, the fused time domain luminance signal may be subjected to frequency domain processing, and more useful information may be analyzed based on the resulting frequency domain signal; for example, the amplitude distribution and energy distribution of each frequency component may be determined so as to obtain the frequency values at which the amplitude and energy are mainly concentrated. Here, the physiological state value of the target object may be determined based on the peak value of the frequency domain signal.
In the embodiment of the present disclosure, in the case where the image information obtained by fusion is determined, the time-domain luminance signal corresponding to each color channel may be determined here based on the luminance values of the three color channels corresponding to the image information obtained by fusion. After the principal component analysis is performed on each color channel, the detection of the physiological state of the relevant target object can be realized, and the method specifically includes the following steps:
step one, performing time domain processing on the brightness values corresponding to the three color channels in the image information obtained by fusion, and determining a time domain brightness signal corresponding to each color channel;
step two, performing principal component analysis on the time domain brightness signals corresponding to the plurality of different color channels to obtain a time domain signal representing the physiological state of the target object;
step three, performing frequency domain processing on the time domain signal representing the physiological state of the target object to obtain a frequency domain signal representing the physiological state of the target object;
and step four, determining the physiological state value of the target object based on the peak value of the frequency domain signal.
Considering that the physiological state directly affects the blood flow change of the target object, and that the blood flow change can be characterized by the brightness change of the image, the time domain luminance signals of the fused image information corresponding to each of the three color channels, red, green, and blue, may first be determined, forming an RGB three-dimensional signal. Then, principal component analysis is performed on the time domain brightness signals of the three different color channels, and the one-dimensional signal obtained after principal component extraction (dimension reduction) is taken as the time domain signal representing the physiological state of the target object. Alternatively, the time domain signal may be taken from the time domain luminance signal of one of the color channels (for example, the green channel), where the selected channel may be the one most representative of the blood flow change, or the signal may be determined by other principal component analysis methods, which are not limited herein.
In order to facilitate more accurate principal component analysis, processing such as normalization and detrend filtering for denoising may be performed on the three-dimensional time domain luminance signal before the principal component analysis. In addition, after the principal component analysis, the obtained time domain signal may be subjected to moving-average filtering for denoising, so as to further improve the precision of the time domain signal and thereby the accuracy of subsequent physiological state detection.
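Under the assumption that the fused image information yields one mean-brightness trace per RGB channel, the channel-to-pulse pipeline sketched in the two paragraphs above (detrend, normalize, principal component extraction, moving-average smoothing) might look like this; the window lengths are illustrative.

```python
import numpy as np

def extract_pulse_signal(rgb, detrend_win=15, smooth_win=5):
    """Reduce an (N, 3) RGB temporal trace to a 1-D pulse signal:
    remove a moving-average baseline per channel (crude detrending),
    normalize each channel, take the first principal component, and
    smooth the result with a short moving average."""
    rgb = np.asarray(rgb, dtype=float)
    kernel = np.ones(detrend_win) / detrend_win
    baseline = np.column_stack(
        [np.convolve(c, kernel, mode="same") for c in rgb.T])
    x = rgb - baseline
    x = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)
    # First principal component via SVD of the centered data matrix.
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    pc1 = x @ vt[0]
    return np.convolve(pc1, np.ones(smooth_win) / smooth_win, mode="same")
```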
In order to further improve the accuracy of the physiological state detection, here, the time domain signal may be subjected to frequency domain processing to analyze more useful information. Here, the physiological state value of the target object may be determined based on the peak value of the frequency domain signal.
Taking heart rate detection as an example, the peak value pmax of the frequency domain signal may be determined, and the original heart rate measurement value may be obtained by adding pmax, which characterizes the variation of the heart rate, to a heart rate reference value. The heart rate reference value may be determined by the lower limit of an empirically based heart rate estimation range, and may be adjusted in consideration of the influence of factors such as the video frame rate and the length of the frequency domain signal.
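A minimal sketch of peak-based heart rate estimation follows; rather than the reference-value-plus-offset formulation described above, this common variant reads the dominant frequency directly within a plausible heart-rate band (the band limits are illustrative).

```python
import numpy as np

def estimate_heart_rate(pulse, fps, bpm_min=45.0, bpm_max=180.0):
    """Heart rate in bpm from the dominant spectral peak of a pulse
    signal, restricted to an illustrative plausible band."""
    pulse = np.asarray(pulse, dtype=float)
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fps)      # bin frequencies in Hz
    mags = np.abs(np.fft.rfft(pulse - pulse.mean()))
    band = (freqs >= bpm_min / 60.0) & (freqs <= bpm_max / 60.0)
    return float(freqs[band][np.argmax(mags[band])] * 60.0)
```

Note that the frequency resolution is fps / N Hz, so longer signals give finer heart-rate steps — consistent with the remark above that the signal length influences the estimate.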
After the heart rate is determined, related physiological indexes such as blood oxygen saturation and heart rate variability can also be measured and calculated. For blood oxygen saturation, the red light region (600-800 nm) and the near-infrared region (800-1000 nm) may be used to detect the time domain signals of HbO2 and Hb respectively, and the corresponding ratio is calculated to obtain the blood oxygen saturation. For heart rate variability, after the time domain signal is extracted, a plurality of inter-beat intervals are obtained by calculating the distance between every two adjacent peaks in combination with the frame rate, and the standard deviation (SDNN) of these intervals is then taken to obtain the heart rate variability.
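The SDNN computation described for heart rate variability can be sketched as follows, given the frame indices of the detected signal peaks and the frame rate:

```python
import numpy as np

def sdnn_ms(peak_frame_indices, fps):
    """SDNN: standard deviation of the inter-beat intervals, with the
    intervals converted from frames to milliseconds via the frame rate."""
    intervals_ms = np.diff(peak_frame_indices) / fps * 1000.0
    return float(np.std(intervals_ms))
```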
The respiratory rate detection method is similar to the heart rate detection method; the main difference is that the range of the respiratory rate differs from that of the heart rate, so the corresponding reference value is set differently, and respiratory rate detection can be realized based on the same method.
The embodiment of the present disclosure realizes physiological state detection based on multiple frames of images; that is, the image change information corresponding to the multiple frames of images can represent the change of the physiological state. In practical applications, the physiological state detection result determined for the video stream may be updated as the acquisition of the video stream continues.
Here, in the case of acquiring one or more frames of images included in a new video stream, face detection may be performed on the images in the new video stream to extract the face image of the target object in the cabin, and then at least one region of interest in the face image is determined. In the case where the first contribution degree and the second contribution degree of each region of interest to the physiological state detection result are determined, the image information of the at least one region of interest is fused based on the first contribution degree and the second contribution degree, and the physiological state detection result is updated based on the image information obtained by the fusion. If the preset detection duration has not been reached, an updated physiological state detection result is again obtained based on the newly acquired video stream, until the preset detection duration is reached.
Heart rate detection is again taken as an example here. In the case where the preset detection duration is determined to be 30 s, the video stream can be continuously acquired within those 30 s. After the heart rate measurement value is first calculated based on the multi-frame images of the starting portion of the video stream (e.g., the video stream within the first 5 seconds), acquisition continues within the 30 s. At this time, as image frames are collected and their number increases, a new heart rate measurement value can be calculated every time one frame, or every n frames, are added, and then smoothed through a moving average; the measurement is finished once 30 s is reached, yielding the final measurement result.
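The per-update moving-average smoothing over the detection window might be sketched as follows; the window length is an illustrative parameter.

```python
def smooth_estimates(raw, window=5):
    """Moving-average smoothing of successive heart-rate estimates:
    each output is the mean of up to `window` most recent raw values."""
    out = []
    for i in range(len(raw)):
        chunk = raw[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```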
In order to help the target object perform faster physiological state measurement in a cabin environment, a detection progress reminding signal for reminding the target object of the remaining detection duration may be generated during physiological state detection according to the duration of the acquired video stream and the preset detection duration. For example, when the duration of the acquired video stream (i.e., the time for which the current physiological state detection of the target object has continued) reaches 25 seconds and the preset detection duration is 30 seconds, a voice or on-screen prompt such as "please keep still, the detection can be completed in 5 seconds" may be issued; or, when the physiological state detection time of the current target object reaches 30 seconds, a voice or on-screen prompt of "measurement completed" may be issued.
After the physiological state detection is realized, the embodiment of the present disclosure may also display the physiological state detection result, so as to provide better vehicle cabin service for the target object through the displayed detection result.
In the embodiment of the present disclosure, on the one hand, the physiological state detection result of the target object may be transmitted to a display screen in the vehicle cabin for display, so that cabin occupants can monitor their own physiological state in real time and, in the event of an abnormal physiological state, seek medical attention or take other necessary measures in time; on the other hand, the physiological state detection result of the target object may be transmitted to the server of a physiological state detection application, so that when the target object requests the detection result through the physiological state detection application, the server sends the physiological state detection result to the terminal device used by the target object.
That is, the physiological state detection result of the target object may be recorded in the server, and the server may further perform statistical analysis on the physiological state detection results; for example, historical physiological state statistics for the past month or week may be determined. In this way, when the target object initiates a physiological state detection application request, the physiological state detection result, the statistical result, and the like may be sent to the terminal device of the target object, so as to implement a more comprehensive physiological state evaluation.
The physiological state detection application may be a dedicated application program (APP) for performing physiological state detection, and the APP may respond to a request to obtain the detection result of the target object, so as to present the result in the APP, which is more practical.
It will be understood by those skilled in the art that in the method of the present invention, the order of writing the steps does not imply a strict order of execution and any limitations on the implementation, and the specific order of execution of the steps should be determined by their function and possible inherent logic.
Based on the same inventive concept, a physiological state detection device corresponding to the physiological state detection method is also provided in the embodiments of the present disclosure, and as the principle of solving the problem of the device in the embodiments of the present disclosure is similar to the physiological state detection method in the embodiments of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 3, a schematic diagram of a physiological state detection device provided in an embodiment of the present disclosure is shown, the device including: an acquisition module 301, a first extraction module 302, a second extraction module 303, a determination module 304, a fusion module 305, and a detection module 306; wherein:
an obtaining module 301, configured to obtain a video stream acquired by a camera device;
a first extraction module 302, configured to extract a face image of a target object from a multi-frame image of a video stream;
a second extraction module 303, configured to extract at least one region of interest in each frame of face image, where the region of interest includes at least one connected face smoothing sub-region;
a determining module 304, configured to determine a first contribution degree of each region of interest to a physiological status detection result according to an area of each region of interest; determining a second contribution degree of each interested area to the physiological state detection result according to the pixel brightness information of each interested area;
a fusion module 305 for fusing image information of at least one region of interest based on the first contribution degree and the second contribution degree;
the detection module 306 is configured to extract physiological status information based on the image information obtained by fusion, so as to obtain a physiological status detection result of the target object.
In the case of acquiring a video stream, the physiological state detection apparatus provided in the embodiment of the present disclosure may first extract the face image of the target object from the video stream, and then extract at least one region of interest in the face image. In this way, in the case where the first contribution degree and the second contribution degree of each region of interest to the physiological state detection result are respectively determined according to the area and the pixel brightness information of each region of interest, the image information of the at least one region of interest can be fused, and finally the physiological state information is extracted based on the fused image information to obtain the physiological state detection result. Compared with the related art, in which contact measurement must be performed by means of a special instrument, causing inconvenience, the present disclosure realizes physiological state detection based on image processing, can perform real-time measurement anytime and anywhere, and has better practicability; moreover, in the process of physiological state detection, the fusion of the image information of the regions of interest can be carried out by combining the region area and the region brightness, which can further improve the measurement accuracy.
In one possible embodiment, the first contribution comprises an area weight and the second contribution comprises a brightness weight; a fusion module 305, configured to fuse image information of at least one region of interest based on the first contribution and the second contribution according to the following steps:
determining a total weight value corresponding to each of the at least one region of interest based on a product of the area weight and the brightness weight corresponding to each of the at least one region of interest;
and performing weighted fusion on the image information of at least one region of interest based on the corresponding total weight value, and determining the image information obtained by fusion.
In one possible embodiment, the fusion module 305 is configured to determine the area weight as follows:
summing the areas of the regions of interest to obtain an area sum value;
and calculating, for each region of interest, the ratio of the area of the region of interest to the area sum value to obtain the area proportion of the region of interest as the area weight corresponding to that region of interest.
In a possible implementation, in case the pixel luminance information comprises an average luminance value, the fusion module 305 is configured to determine the luminance weight as follows:
carrying out summation operation on the average brightness values of the interested regions to obtain a brightness sum value;
and calculating, for each region of interest, the ratio of its average brightness value to the brightness sum value to obtain the brightness proportion of the region of interest as the brightness weight corresponding to that region of interest.
In a possible implementation, the determining module 304 is configured to determine the first contribution degree of each region of interest to the physiological state detection result according to the area of each region of interest, according to the following steps:
under the condition that the area of the region of interest is smaller than a preset area threshold value, determining that the first contribution degree of the region of interest is 0; and/or
A determining module 304, configured to determine a second contribution degree of each region of interest to the physiological status detection result according to the pixel brightness information of each region of interest, according to the following steps:
determining the average brightness value of each interested area according to the pixel brightness information of each interested area;
determining the second contribution degree of the region of interest to be 0 in case the average brightness value is smaller than a first preset brightness threshold value, or determining the second contribution degree of the region of interest to be 0 in case the average brightness value is larger than a second preset brightness threshold value.
In a possible implementation, the fusion module 305 is configured to fuse the image information of the at least one region of interest based on the first contribution and the second contribution according to the following steps:
for each region of interest, performing time domain processing on the brightness values of the region of interest in the multi-frame images, and determining a time domain brightness signal corresponding to the region of interest;
fusing time domain luminance signals corresponding to the multiple interested regions based on the first contribution degree and the second contribution degree to obtain fused time domain luminance signals;
the detection module 306 is configured to extract physiological status information based on the image information obtained by fusion according to the following steps to obtain a physiological status detection result of the target object:
and performing frequency domain processing on the fused time domain brightness signal, and determining a physiological state value of the target object based on a peak value of the frequency domain signal obtained by the frequency domain processing.
In a possible implementation manner, in the case that the image information obtained by fusion includes brightness values corresponding to a plurality of different color channels, the detection module 306 is configured to perform physiological status information extraction based on the image information obtained by fusion according to the following steps to obtain a physiological status detection result of the target object:
performing time domain processing on brightness values corresponding to the three color channels in the image information obtained by fusion, and determining a time domain brightness signal corresponding to each color channel;
performing principal component analysis on the time domain brightness signals corresponding to the plurality of different color channels to obtain time domain signals representing the physiological state of the target object;
performing frequency domain processing on the time domain signal representing the physiological state of the target object to obtain a frequency domain signal representing the physiological state of the target object;
based on the peak value of the frequency domain signal, a physiological state value of the target object is determined.
In a possible implementation, the second extraction module 303 is configured to extract the face smoothing sub-region according to the following steps:
extracting facial feature points of each frame of facial image;
and determining at least one connected face smoothing sub-region in each frame of face image based on the extracted face feature points.
In a possible implementation, the first contribution degree is proportional to the area of the corresponding region of interest; the second contribution degree is proportional to the average pixel brightness value of the corresponding region of interest, and/or the second contribution degree is inversely proportional to the variance of the pixel brightness values of the corresponding region of interest.
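As a hedged numeric sketch of these proportionalities (the normalisation scheme and the mean-over-variance ratio are illustrative choices; the text only fixes the direction of each proportionality):

```python
import numpy as np

def fuse_rois(roi_traces, roi_areas, eps=1e-6):
    """Weighted fusion of per-ROI brightness traces.  The first
    contribution degree grows with ROI area; the second grows with the
    ROI's average brightness and shrinks with its brightness variance."""
    traces = np.asarray(roi_traces, dtype=float)    # (n_rois, n_frames)
    areas = np.asarray(roi_areas, dtype=float)
    area_w = areas / areas.sum()                    # proportional to area
    bright_w = traces.mean(axis=1) / (traces.var(axis=1) + eps)
    bright_w = bright_w / bright_w.sum()            # ~ mean, ~ 1/variance
    total_w = area_w * bright_w                     # combine both degrees
    total_w = total_w / total_w.sum()
    return total_w @ traces                         # weighted fusion

steady = np.array([100.0, 101.0, 100.0, 101.0])    # bright, low variance
noisy = np.array([80.0, 120.0, 80.0, 120.0])       # same mean, high variance
fused = fuse_rois([steady, noisy], roi_areas=[400, 100])
```

With these inputs the large, steady region dominates, so the fused trace stays close to `steady` — the intended effect of down-weighting small or noisy regions.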
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
An embodiment of the present disclosure further provides an electronic device. As shown in fig. 4, which is a schematic structural diagram of the electronic device provided in the embodiment of the present disclosure, the electronic device includes: a processor 401, a memory 402, and a bus 403. The memory 402 stores machine-readable instructions executable by the processor 401 (for example, execution instructions corresponding to the obtaining module 301, the first extraction module 302, the second extraction module 303, the determination module 304, the fusion module 305, and the detection module 306 in the apparatus in fig. 3). When the electronic device runs, the processor 401 communicates with the memory 402 through the bus 403, and the machine-readable instructions, when executed by the processor 401, perform the following processes:
acquiring a video stream captured by a camera device;
extracting a face image of a target object from a plurality of frame images of the video stream;
extracting at least one region of interest in each frame of the face image, wherein the region of interest comprises at least one connected face smoothing sub-region;
determining a first contribution degree of each region of interest to a physiological state detection result according to the area of each region of interest;
determining a second contribution degree of each region of interest to the physiological state detection result according to the pixel brightness information of each region of interest;
fusing image information of at least one region of interest based on the first contribution degree and the second contribution degree;
extracting physiological state information based on the fused image information to obtain a physiological state detection result of the target object.
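Tying the processor steps together, a toy end-to-end run (synthetic grayscale frames, fixed rectangular boxes standing in for detected regions of interest, and area-only weighting — all simplifications of the disclosed method) might look like:

```python
import numpy as np

def detect_state(frames, rois, fps=30.0, band=(0.7, 4.0)):
    """Toy pipeline: per-frame ROI mean brightness -> area-weighted
    fusion across ROIs -> FFT -> peak frequency as a state value (BPM).
    `frames`: iterable of 2-D grayscale arrays; `rois`: (y0, y1, x0, x1)
    boxes standing in for the detected regions of interest."""
    traces = np.array([[f[y0:y1, x0:x1].mean() for (y0, y1, x0, x1) in rois]
                       for f in frames]).T            # (n_rois, n_frames)
    areas = np.array([(y1 - y0) * (x1 - x0)
                      for (y0, y1, x0, x1) in rois], dtype=float)
    fused = (areas / areas.sum()) @ traces            # first contribution only
    fused = fused - fused.mean()
    freqs = np.fft.rfftfreq(fused.size, d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(fused))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return 60.0 * freqs[mask][np.argmax(spectrum[mask])]

# Synthetic "video stream": the upper half of each frame pulses at 1 Hz.
t = np.arange(0, 10, 1 / 30.0)
frames = [np.full((64, 64), 100.0) for _ in t]
for f, ti in zip(frames, t):
    f[:32, :] += 2.0 * np.sin(2 * np.pi * 1.0 * ti)
bpm = detect_state(frames, [(0, 32, 0, 64), (32, 64, 0, 64)])
```

In the disclosed method the fusion weights would also incorporate the brightness-based second contribution degree; the sketch omits it for brevity.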
The embodiments of the present disclosure further provide a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, performs the steps of the physiological state detection method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure further provide a computer program product carrying program code, wherein the instructions included in the program code may be used to execute the steps of the physiological state detection method described in the above method embodiments; for details, reference may be made to the above method embodiments, which are not repeated here.
The computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, the computer program product is embodied as a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and apparatus described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of the units is merely a logical functional division, and there may be other division manners in actual implementation; for another example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
The units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods according to the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are merely specific implementations of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may, within the technical scope disclosed herein, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions of some technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and shall all fall within its protection scope. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. A physiological state detection method, comprising:
acquiring a video stream captured by a camera device;
extracting a face image of a target object from a plurality of frame images of the video stream;
extracting at least one region of interest in each frame of the facial image, wherein the region of interest comprises at least one connected face smoothing sub-region;
determining a first contribution degree of each region of interest to a physiological state detection result according to the area of each region of interest;
determining a second contribution degree of each region of interest to the physiological state detection result according to the pixel brightness information of each region of interest;
fusing image information of the at least one region of interest based on the first contribution degree and the second contribution degree; and
extracting physiological state information based on the fused image information to obtain a physiological state detection result of the target object.
2. The method of claim 1, wherein the first contribution degree comprises an area weight and the second contribution degree comprises a brightness weight;
the fusing the image information of at least one region of interest based on the first contribution degree and the second contribution degree comprises:
determining a total weight value corresponding to each of the at least one region of interest based on the product of the area weight and the brightness weight corresponding to the region of interest; and
performing weighted fusion on the image information of the at least one region of interest based on the corresponding total weight values to determine the fused image information.
3. The method of claim 2, wherein the area weight is determined as follows:
summing the areas of the regions of interest to obtain an area sum value; and
calculating, for each region of interest, the ratio of the area of the region of interest to the area sum value to obtain the area ratio of the region of interest as the area weight corresponding to the region of interest.
4. A method according to claim 2 or 3, wherein in the case where the pixel luminance information comprises an average luminance value, the luminance weight is determined as follows:
summing the average brightness values of the regions of interest to obtain a brightness sum value; and
calculating, for each region of interest, the ratio of the average brightness value of the region of interest to the brightness sum value to obtain the brightness ratio of the region of interest as the brightness weight corresponding to the region of interest.
5. The method according to any one of claims 1 to 4, wherein the determining a first contribution degree of each region of interest to the physiological state detection result according to the area of each region of interest comprises:
determining that the first contribution degree of a region of interest is 0 in the case that the area of the region of interest is smaller than a preset area threshold; and/or
the determining a second contribution degree of each region of interest to the physiological state detection result according to the pixel brightness information of each region of interest comprises:
determining an average brightness value of each region of interest according to the pixel brightness information of the region of interest;
determining that the second contribution degree of a region of interest is 0 in the case that the average brightness value is smaller than a first preset brightness threshold, or determining that the second contribution degree of a region of interest is 0 in the case that the average brightness value is greater than a second preset brightness threshold.
6. The method according to any one of claims 1 to 5, wherein the fusing the image information of the at least one region of interest based on the first contribution degree and the second contribution degree comprises:
for each region of interest, performing time domain processing on the brightness value of the region of interest in the multi-frame image, and determining a time domain brightness signal corresponding to the region of interest;
fusing the time domain luminance signals corresponding to the plurality of regions of interest based on the first contribution degree and the second contribution degree to obtain fused time domain luminance signals;
the extracting physiological state information based on the fused image information to obtain a physiological state detection result of the target object comprises:
performing frequency domain processing on the fused time domain brightness signal, and determining the physiological state value of the target object based on the peak value of the frequency domain signal obtained by the frequency domain processing.
7. The method according to any one of claims 1 to 5, wherein, in a case where the fused image information includes brightness values corresponding to a plurality of different color channels, the extracting physiological state information based on the fused image information to obtain a physiological state detection result of the target object comprises:
performing time domain processing on the brightness values corresponding to each of the plurality of color channels in the fused image information, and determining a time domain brightness signal corresponding to each color channel;
performing principal component analysis on the time domain brightness signals corresponding to the plurality of different color channels to obtain time domain signals representing the physiological state of the target object;
performing frequency domain processing on the time domain signal representing the physiological state of the target object to obtain a frequency domain signal representing the physiological state of the target object;
determining a physiological state value of the target object based on the peak value of the frequency domain signal.
8. The method according to any one of claims 1 to 7, further comprising:
extracting facial feature points from each frame of the face image; and
determining at least one connected face smoothing sub-region in each frame of the face image based on the extracted facial feature points.
9. The method according to any one of claims 1 to 8, wherein the first contribution degree is proportional to the area of the corresponding region of interest; and the second contribution degree is proportional to the average pixel brightness value of the corresponding region of interest and/or inversely proportional to the variance of the pixel brightness values of the corresponding region of interest.
10. A physiological condition detection device, comprising:
an acquisition module, configured to acquire a video stream captured by a camera device;
a first extraction module, configured to extract a face image of a target object from a plurality of frame images of the video stream;
a second extraction module, configured to extract at least one region of interest in each frame of the face image, the region of interest comprising at least one connected face smoothing sub-region;
a determination module, configured to determine a first contribution degree of each region of interest to a physiological state detection result according to the area of each region of interest, and to determine a second contribution degree of each region of interest to the physiological state detection result according to the pixel brightness information of each region of interest;
a fusion module, configured to fuse image information of at least one region of interest based on the first contribution degree and the second contribution degree;
a detection module, configured to extract physiological state information based on the fused image information to obtain a physiological state detection result of the target object.
11. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the physiological state detection method of any one of claims 1 to 9.
12. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the physiological state detection method according to any one of claims 1 to 9.
CN202210346417.8A 2022-03-31 2022-03-31 Physiological state detection method and device, electronic equipment and storage medium Pending CN114663865A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210346417.8A CN114663865A (en) 2022-03-31 2022-03-31 Physiological state detection method and device, electronic equipment and storage medium
PCT/CN2022/113755 WO2023184832A1 (en) 2022-03-31 2022-08-19 Physiological state detection method and apparatus, electronic device, storage medium, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210346417.8A CN114663865A (en) 2022-03-31 2022-03-31 Physiological state detection method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114663865A true CN114663865A (en) 2022-06-24

Family

ID=82033595

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210346417.8A Pending CN114663865A (en) 2022-03-31 2022-03-31 Physiological state detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114663865A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023184832A1 (en) * 2022-03-31 2023-10-05 上海商汤智能科技有限公司 Physiological state detection method and apparatus, electronic device, storage medium, and program

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023184832A1 (en) * 2022-03-31 2023-10-05 上海商汤智能科技有限公司 Physiological state detection method and apparatus, electronic device, storage medium, and program

Similar Documents

Publication Publication Date Title
CN114648749A (en) Physiological state detection method and device, electronic equipment and storage medium
Zhang et al. Driver drowsiness detection using multi-channel second order blind identifications
CN207015287U (en) Determine whether driver can start the system of vehicle
JP6308161B2 (en) Pulse wave detection device and pulse wave detection program
CN107427233B (en) Pulse wave detection device and pulse wave detection program
EP3155961A1 (en) Emotion estimating method, emotion estimating apparatus, and recording medium storing program
CN109155106A (en) Condition estimating device, condition estimation method and condition estimating program
KR101629901B1 (en) Method and Device for measuring PPG signal by using mobile device
EP2777485B1 (en) Signal processor, signal processing method, and signal processing program
EP3440991A1 (en) Device, system and method for determining a physiological parameter of a subject
EP3424425A1 (en) Determination result output device, determination result provision device, and determination result output system
DE102016200045A1 (en) Selecting a region of interest to extract physiological parameters from a subject's video
CN114663865A (en) Physiological state detection method and device, electronic equipment and storage medium
CN114863399A (en) Physiological state detection method and device, electronic equipment and storage medium
KR101647318B1 (en) Portable device for skin condition diagnosis and method for diagnosing and managing skin using the same
KR20130141285A (en) Method and appartus for skin condition diagnosis and system for providing makeup information suitable skin condition using the same
CN114140775A (en) Data recording and physiological state detection method, device, equipment and storage medium
CN111050638B (en) Computer-implemented method and system for contact photoplethysmography (PPG)
WO2023184832A1 (en) Physiological state detection method and apparatus, electronic device, storage medium, and program
JP6796525B2 (en) Image processing equipment, image processing system and image processing method
Mehta et al. Heart rate estimation from RGB facial videos using robust face demarcation and VMD
CN114708225A (en) Blood pressure measuring method and device, electronic equipment and storage medium
CN116311175A (en) Physiological state detection method and device, electronic equipment and storage medium
Sarkar et al. Evaluation of video magnification for nonintrusive heart rate measurement
Sarkar et al. Assessment of psychophysiological characteristics using heart rate from naturalistic face video data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination