CN111445477A - Analysis method and device based on automatic region segmentation and selection and server - Google Patents


Info

Publication number: CN111445477A (application CN202010129351.8A; granted publication CN111445477B)
Authority: CN (China)
Prior art keywords: value, signal, area, illumination, areas
Other languages: Chinese (zh)
Inventors: 罗静静, 甄俊杰
Original and current assignee: Ji Hua Laboratory
Application filed by Ji Hua Laboratory
Legal status: Granted; Active

Classifications

    • G06T 7/11 — Image analysis; segmentation, edge detection; region-based segmentation
    • G06T 7/0012 — Image analysis; inspection of images, e.g. flaw detection; biomedical image inspection
    • G06T 7/66 — Image analysis; analysis of geometric attributes; image moments or centre of gravity
    • G06V 40/168 — Recognition of human faces in image or video data; feature extraction; face representation
    • G06T 2207/30196 — Indexing scheme, subject of image: human being, person
    • G06T 2207/30201 — Indexing scheme, subject of image: face
    • Y02B 20/40 — Energy-efficient lighting technologies; control techniques providing energy savings, e.g. smart controller or presence detection

Abstract

The embodiment of the application provides an analysis method, an analysis device and a server based on automatic region segmentation and selection. Building on an existing face-segmentation algorithm based on face recognition and feature-point calibration, a set of non-overlapping user-defined regions is used to obtain morphologically symmetric facial regions and a reference region as the target regions. The algorithm is not influenced by subjective factors; because the symmetric regions are selected from facial feature points trained on actual facial morphology, the selection is unaffected by individual differences, and the regions actually selected for each subject correspond to one another morphologically. This solves the technical problems that manually selecting fixed facial regions is influenced by subjective factors and difficult to position accurately, and that no defined relative-position information is established between selected regions when analysis is carried out between regions, particularly between symmetric regions.

Description

Analysis method and device based on automatic region segmentation and selection and server
Technical Field
The application relates to the technical field of face detection big data, in particular to an analysis method, an analysis device and a server based on automatic region segmentation and selection.
Background
Imaging photoplethysmography (iPPG) is a non-contact physiological-parameter measurement method: based on a video signal, it detects the frame-to-frame intensity of light reflected from a facial region (the iPPG signal) and calculates physiological parameters such as heart rate, respiratory rate and blood-oxygen saturation from that signal.
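To make the measurement principle concrete, the following minimal Python sketch (not part of the patent; the zero-crossing counter and all names are illustrative assumptions) estimates heart rate from a per-frame mean green-intensity trace of the kind an iPPG pipeline produces:

```python
import math

def estimate_heart_rate_bpm(green_means, fps):
    """Estimate heart rate from a per-frame mean green-intensity trace.

    Counts rising zero-crossings of the mean-centered signal -- a crude
    stand-in for the filtering + spectral methods normally used in iPPG.
    """
    mean = sum(green_means) / len(green_means)
    centered = [v - mean for v in green_means]
    rising = sum(1 for a, b in zip(centered, centered[1:]) if a < 0 <= b)
    duration_s = len(green_means) / fps
    return rising / duration_s * 60.0

# Synthetic 75-bpm pulse riding on a constant baseline, sampled at 40 fps.
fps = 40
signal = [100 + 2 * math.sin(2 * math.pi * (75 / 60) * i / fps)
          for i in range(fps * 10)]
```

Real iPPG pipelines use band-pass filtering and spectral peak estimation rather than zero-crossing counting; the sketch only illustrates that the pulse rate is recoverable from the intensity trace.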
In order to analyze the iPPG signal of a specific facial region, existing iPPG techniques generally select a region of interest (ROI) manually. The defects of manual region selection include the following. When monitoring the physiological parameters of a subject, a facial region is generally chosen arbitrarily for analysis without considering its signal quality, so signal reliability is difficult to guarantee. When analyzing the commonality (regional differences) of physiological signals in a specific region across subjects, the position of the selected region depends on the experimenter's experience, is easily influenced by subjective factors, and is difficult to position accurately. And when analysis is performed between regions, in particular between symmetric regions, no defined relative-position information is established between the selected regions.
Disclosure of Invention
In order to overcome at least the above disadvantages in the prior art, an object of the present application is to provide an analysis method, an analysis device, and a server based on automatic region segmentation and selection. Building on an existing face-segmentation algorithm based on face recognition and feature-point calibration, a set of non-overlapping user-defined regions is used to obtain morphologically symmetric facial regions and a reference region as the target regions. The algorithm is not influenced by subjective factors; because the symmetric regions are selected from facial feature points trained on actual facial morphology, the selection is unaffected by individual differences, and the regions actually selected for each subject correspond to one another morphologically. This solves the technical problems that manually selecting fixed facial regions is influenced by subjective factors and difficult to position accurately, and that no defined relative-position information is established between selected regions when analysis is carried out between regions, particularly between symmetric regions.
In a first aspect, the present application provides an analysis method based on automatic region segmentation and selection, the method including:
acquiring a face image to be processed, and acquiring n key feature points on the face image and position information of each key feature point based on a preset algorithm;
dividing f preselected regions on the face image according to the position information of the n key feature points, wherein the contour line of each preselected region is formed by connecting at least three key feature points; the face image comprises a plurality of facial organs and a plurality of blank areas, and the blank areas are completely covered by the f preselected regions;
calculating the position information of the center of mass of each preselected area, and acquiring f positive selection areas taking the center of mass of each preselected area as the center according to the position information of the center of mass;
extracting the iPPG signal of each positive selection area, and selecting the positive selection areas whose iPPG signals meet the preset conditions as target areas;
and analyzing the target area and outputting an analysis result.
In a possible design of the first aspect, before the step of calculating the position information of the centroid of each of the preselected areas, the method further includes a step of setting a reference illuminance under the current light source device, where the step of setting the reference illuminance under the current light source device includes:
selecting any area on the face as an illumination calibration area, and acquiring data of each frame of illumination calibration area under the current illumination;
calculating original signal data of the illumination calibration area under the current illumination, calculating iPPG alternating-current signal data from the original signal data, filtering the iPPG alternating-current signal, calculating the effective value AC of the filtered iPPG alternating-current signal in each period, calculating the average of the effective values AC, and calculating the signal-to-noise ratio (SNR) of the illumination calibration area under the current illumination;
controlling the illumination to rise, and acquiring the average of the effective values AC of the iPPG alternating-current signal and the signal-to-noise ratio SNR in each illumination calibration area, both of which rise synchronously with the illumination; when it is detected that the average of the effective values AC has risen to a first value and thereafter rises by less than a first set amplitude, and that the signal-to-noise ratio SNR has risen to a second value and thereafter begins to fall by more than a second set amplitude, the first value and the second value are obtained, and any illumination value in the illumination interval corresponding to the first value and the second value is selected as the reference illumination of the current light source.
In this design, a rectangular area in the middle of the forehead is selected as the illumination calibration area.
In a possible design of the first aspect, the step of calculating position information of a centroid of each of the preselected areas, and acquiring f positive selected areas centered on the centroid of each of the preselected areas according to the position information of the centroid includes:
setting e pre-fetching areas of different sizes based on the f preselected regions, each centered on the centroid of its preselected region, wherein the sizes of the e pre-fetching areas increase monotonically;
respectively calculating the signal-to-noise ratios of the e pre-fetching areas under the f preselected regions; the signal-to-noise ratios of the f pre-fetching areas of the same size are grouped, and the average signal-to-noise ratio of each group is calculated to obtain e average signal-to-noise ratios;
and when it is detected that the average signal-to-noise ratio has risen to a third value and thereafter rises by less than a third set amplitude, taking the size of the pre-fetching area corresponding to that average signal-to-noise ratio as the size of the positive selection areas.
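The plateau test in these sub-steps can be sketched as follows (a hypothetical helper; the sizes, SNR values and threshold are illustrative, and which of the two sizes around the plateau is kept is an assumption):

```python
def choose_positive_area_size(sizes, avg_snrs, third_set_amplitude):
    """Walk (size, average-SNR) pairs in increasing-size order and stop at
    the first size whose SNR gain over the previous size falls below the
    set amplitude; that previous size is taken as the positive-selection
    area size."""
    for k in range(1, len(sizes)):
        if avg_snrs[k] - avg_snrs[k - 1] < third_set_amplitude:
            return sizes[k - 1]
    return sizes[-1]  # SNR never plateaued: keep the largest size

# Illustrative sweep: SNR stops improving meaningfully after size 20.
sizes = [10, 20, 30, 40]
avg_snrs = [1.0, 3.0, 3.2, 3.25]
```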
In a possible design of the first aspect, the step of extracting the iPPG signal of each forward selection region and selecting a forward selection region, in which the iPPG signal meets a preset condition, as the target region includes:
calculating the standard deviation and the signal-to-noise ratio of the iPPG signal of each positive selection area;
calculating a quality evaluation value Q from the standard deviation and the signal-to-noise ratio;
and sorting the quality evaluation values Q of all positive selection areas in descending order, the preset condition being that the quality evaluation value Q ranks in the top w.
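The ranking step can be sketched in Python; note that the patent does not specify here how the standard deviation and SNR are combined into Q, so the formula below is an illustrative stand-in:

```python
def select_target_regions(region_stats, w):
    """region_stats: {region_id: (std_dev, snr)} for each positive
    selection area.  The exact formula combining the two numbers into Q is
    not given in this text; as an illustrative stand-in we reward high SNR
    and penalise high standard deviation: Q = snr / (1 + std_dev)."""
    q = {rid: snr / (1.0 + std) for rid, (std, snr) in region_stats.items()}
    # Descending sort on Q; the preset condition keeps the top w regions.
    return sorted(q, key=q.get, reverse=True)[:w]

stats = {1: (0.1, 10.0), 2: (5.0, 10.0), 3: (0.1, 2.0)}
```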
In a second aspect, the present application further provides an analysis apparatus based on automatic region segmentation and selection, where the apparatus includes:
the acquisition fixed point module is used for acquiring a face image to be processed and acquiring n key feature points on the face image and position information of each key feature point based on a preset algorithm;
the combination preselection module is used for dividing f preselected regions on the face image according to the position information of the n key feature points, the contour line of each preselected region being formed by connecting at least three key feature points; the face image comprises a plurality of facial organs and a plurality of blank areas, and the blank areas are completely covered by the f preselected regions;
the secondary region selection module is used for calculating the position information of the centroid of each preselected region and acquiring f positive selection areas centered on the centroid of each preselected region according to the position information of the centroid;
and the third selection module is used for extracting the iPPG signal of each positive selection area and selecting the positive selection areas whose iPPG signals meet the preset conditions as target areas.
And the area analysis module is used for analyzing the target area and outputting an analysis result.
In a possible design of the second aspect, before the secondary selection module, the apparatus further includes an illuminance setting module, configured to set a reference illuminance under the current light source device; the illuminance setting module includes:
the calibration area selection module is used for selecting any area on the face as an illumination calibration area and acquiring data of each frame of illumination calibration area under the current illumination;
the data calculation module is used for calculating original signal data of the illumination calibration area under the current illumination, calculating iPPG alternating-current signal data from the original signal data, filtering the iPPG alternating-current signal, calculating the effective value AC of the filtered iPPG alternating-current signal in each period, calculating the average of the effective values AC, and calculating the signal-to-noise ratio (SNR) of the illumination calibration area under the current illumination;
the data processing module is used for controlling the illumination to rise and acquiring the average of the effective values AC of the iPPG alternating-current signal and the signal-to-noise ratio SNR in each illumination calibration area, both of which rise synchronously with the illumination; when it is detected that the average of the effective values AC has risen to a first value and thereafter rises by less than a first set amplitude, and that the signal-to-noise ratio SNR has risen to a second value and thereafter begins to fall by more than a second set amplitude, the first value and the second value are obtained, and any illumination value in the illumination interval corresponding to the first value and the second value is selected as the reference illumination of the current light source.
In one possible design of the second aspect, the secondary selection module includes:
the pre-fetching setting module is used for setting e pre-fetching areas of different sizes based on the f preselected regions, each centered on the centroid of its preselected region, wherein the sizes of the e pre-fetching areas increase monotonically;
the pre-fetching calculation module is used for respectively calculating the signal-to-noise ratios of the e pre-fetching areas under the f preselected regions; the signal-to-noise ratios of the f pre-fetching areas of the same size are grouped, and the average signal-to-noise ratio of each group is calculated to obtain e average signal-to-noise ratios;
and the positive selection setting module is used for taking the size of the pre-fetching area corresponding to the average signal-to-noise ratio as the size of the positive selection areas when it is detected that the average signal-to-noise ratio has risen to a third value and thereafter rises by less than a third set amplitude.
In one possible design of the second aspect, the third selection module includes:
the numerical calculation module is used for calculating the standard deviation and the signal-to-noise ratio of the iPPG signal of each positive selection area;
the evaluation calculation module is used for calculating a quality evaluation value Q from the standard deviation and the signal-to-noise ratio;
and the target setting module is used for sorting the quality evaluation values Q of all positive selection areas in descending order, the preset condition being that the quality evaluation value Q ranks in the top w.
In a third aspect, an embodiment of the present application provides a server for an analysis method based on automatic region segmentation and selection, where the server for an analysis method based on automatic region segmentation and selection includes a machine-readable storage medium and a processor, where the machine-readable storage medium stores machine-executable instructions, and when the processor executes the machine-executable instructions, the server for an analysis method based on automatic region segmentation and selection implements a method in the first aspect or any possible design manner of the first aspect.
In a fourth aspect, embodiments of the present application provide a machine-readable storage medium storing machine-executable instructions that, when executed on a computer, cause the computer to perform the method of the first aspect or any possible design of the first aspect.
Based on any one of the above aspects, the face-segmentation method based on face recognition and feature-point calibration uses a set of non-overlapping user-defined regions to obtain morphologically symmetric facial regions and a reference region as the target regions. The algorithm is not influenced by subjective factors; because the symmetric regions are selected from facial feature points trained on actual facial morphology, the selection is unaffected by individual differences, and the regions actually selected for each subject correspond to one another morphologically. This solves the technical problems that manually selecting fixed facial regions is influenced by subjective factors and difficult to position accurately, and that no defined relative-position information is established between selected regions when analysis is carried out between regions, particularly between symmetric regions.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a schematic flowchart of an analysis method based on automatic region segmentation and selection according to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram of one combination of key feature points.
Fig. 3 is a schematic diagram of the division of the triangular pre-selection area according to the combination of fig. 2.
Fig. 4 is a schematic view of the preselected regions completely covering the blank areas.
Fig. 5 is a schematic illustration of the centroid of a preselected region.
Fig. 6 is a schematic diagram of the numerical labels of the centroid position information of the preselected regions.
Fig. 7 is a schematic flowchart of one possible implementation shown in fig. 1, which further includes step S100.
Fig. 8 is a schematic flow chart of each sub-step included in step S100 in one possible implementation shown in fig. 7.
Fig. 9 is a line chart of the AC average value and the SNR, before and after filtering, as functions of illuminance.
Fig. 10 is a flowchart illustrating each sub-step included in step S130 in one possible embodiment shown in fig. 1.
Fig. 11 is a line chart of the average signal-to-noise ratio versus the size of the pre-fetching area.
Fig. 12 is a schematic flow chart of each sub-step included in step S140 in one possible implementation shown in fig. 1.
FIG. 13 is a diagram of the position information and numerical labels of symmetric sub-regions.
Fig. 14 is a block diagram illustrating an analysis method and apparatus based on automatic region segmentation and selection according to an embodiment of the present disclosure.
Fig. 15 is a block diagram of the analysis apparatus based on automatic region segmentation and selection, further including an illuminance calibration module, in one possible embodiment shown in fig. 14.
Fig. 16 is a block diagram of an illuminance calibration module.
Fig. 17 is a block diagram of the secondary selection module.
FIG. 18 is a block diagram of a triple selection block.
Detailed Description
The present application will now be described in detail with reference to the drawings, and the specific operations in the method embodiments may also be applied to the apparatus embodiments or the system embodiments. In the description of the present application, "at least one" includes one or more unless otherwise specified, and "plurality" means two or more. For example, at least one of A, B and C includes: A alone, B alone, A and B, A and C, B and C, and A, B and C together. In this application, "/" means "or"; for example, A/B may mean A or B. "And/or" merely describes an association between objects and covers three cases: for example, A and/or B may mean that A exists alone, A and B exist simultaneously, or B exists alone.
Referring to fig. 1, fig. 1 is a schematic flow chart illustrating an analysis method based on automatic region segmentation and selection according to a preferred embodiment of the present application. The analysis method based on the automatic segmentation and selection of the region is described in detail below.
Step S110, a face image to be processed is obtained, and n key feature points and position information of each key feature point on the face image are obtained based on a preset algorithm.
In this embodiment, a green light source and an RGB camera are used for video capture, and the resolution of each frame is 2448 × 2048 (this may differ from camera to camera). Based on the Dlib algorithm library, the face-region detection algorithm (get_frontal_face_detector) is used to detect the face region in the first frame image and obtain the face image to be processed. A face-contour detection algorithm is then applied to the face image to obtain 81 key feature points (i.e., in this embodiment, n = 81). In this embodiment the face is relatively still with respect to the camera, so the position information of the 81 key feature points obtained from the first frame is used as a marker throughout the video sequence.
And step S120, dividing f preselected regions on the face image according to the position information of the n key feature points, wherein the contour line of each preselected region is formed by connecting at least three key feature points. The face image comprises a plurality of facial organs and a plurality of blank areas, and the blank areas are completely covered by the f preselected regions.
In this embodiment, 98 preselected regions are divided on the face image according to the position information of the 81 key feature points (the position information of the key feature points is marked with numbers). The contour line of each preselected region is formed by connecting three key feature points, so the preselected regions are triangles; one combination of key feature points is shown in fig. 2, and the triangular preselected region formed from the combination of fig. 2 is shown in fig. 3. The 98 preselected regions completely cover the blank areas of the face, i.e., everything except the eyes, nostrils and lips (the facial organs), as shown in fig. 4; the white part in fig. 4 is the blank area covered by the preselected regions. Since the serial numbers and positions of the key feature points correspond, different subjects obtain the same region-selection effect.
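Assembling triangular preselected regions from numbered landmark triplets can be sketched as follows (the landmark coordinates and the single triplet are illustrative; the patent's 98 fixed combinations are those of figs. 2 and 3):

```python
def shoelace_area(tri):
    """Area of a triangle given as three (x, y) vertices."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0

def triangle_regions(landmarks, triplets):
    """landmarks: {index: (x, y)} from the feature-point step;
    triplets: fixed 3-tuples of landmark indices.  Because the index
    combinations are fixed, every subject yields morphologically
    corresponding triangles."""
    tris = [tuple(landmarks[i] for i in t) for t in triplets]
    return [t for t in tris if shoelace_area(t) > 0.0]  # drop degenerate ones

landmarks = {0: (0.0, 0.0), 1: (4.0, 0.0), 2: (0.0, 3.0)}
regions = triangle_regions(landmarks, [(0, 1, 2)])
```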
Step S130, calculating the position information of the center of mass of each preselected area, and acquiring f positive selection areas taking the center of mass of each preselected area as the center according to the position information of the center of mass.
In this embodiment, the centroid of a triangular preselected region is the intersection of the three medians of the triangle; a schematic diagram of the centroid position is shown in fig. 5. The position information of the centroid of each preselected region is calculated and marked with numbers, as shown in fig. 6, which is a schematic diagram of the numerical labels of the centroid position information of the preselected regions; 98 positive selection areas centered on the centroid of each preselected region are then obtained from the centroid position information.
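Computing a triangle's centroid and a positive selection area centered on it can be sketched as follows (the square shape and side length are assumptions for illustration; the actual size is fixed later via the SNR sweep of step S130):

```python
def triangle_centroid(tri):
    """Centroid (intersection of the medians) = mean of the vertices."""
    xs = [p[0] for p in tri]
    ys = [p[1] for p in tri]
    return (sum(xs) / 3.0, sum(ys) / 3.0)

def positive_selection_rect(center, side):
    """A square positive-selection area centered on the centroid.
    The square shape and `side` are illustrative assumptions."""
    cx, cy = center
    h = side / 2.0
    return (cx - h, cy - h, cx + h, cy + h)  # (x1, y1, x2, y2)

c = triangle_centroid(((0.0, 0.0), (6.0, 0.0), (0.0, 6.0)))
```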
Step S140, extracting the iPPG signal of each positive selection area, and selecting the positive selection areas whose iPPG signals meet the preset conditions as target areas.
And step S150, analyzing the target area and outputting an analysis result.
Based on the above steps, the present embodiment uses a set of non-overlapping user-defined triangular regions, built on the existing face-segmentation algorithm based on face recognition and feature-point calibration, to obtain 98 facial regions in total, comprising 49 pairs of morphologically symmetric regions and a reference region. The algorithm is not influenced by subjective factors; because the symmetric regions are selected from facial feature points trained on actual facial morphology, the selection is unaffected by individual differences, and the regions actually selected for each subject correspond to one another morphologically. This solves the technical problems that manually selecting fixed facial regions is influenced by subjective factors and difficult to position accurately, and that no defined relative-position information is established between selected regions when analysis is carried out between regions, particularly between symmetric regions.
In one possible design, referring to fig. 7, before step S130, step S100 is further included: and setting the reference illumination under the current light source equipment. Referring to fig. 8, step S100 may specifically include the following sub-steps:
and a substep S101, selecting any area on the human face as an illumination calibration area, and acquiring data of the illumination calibration area on each frame of image under the current illumination.
In this embodiment, a rectangular area in the middle of the forehead is selected as the illumination calibration area; the size of the rectangle can be customized, for example by selecting the coordinates (x1, y1) = (1200, 400) and (x2, y2) = (1300, 500). A rectangle of the same position and size is cut from each frame of image; cutting N frames of images yields N illumination calibration areas, from which the data of the illumination calibration area of each frame is acquired. Since the illumination calibration area is used only for illumination calibration, a single area in the middle of the forehead is sufficient (research shows that forehead signals are comparatively good) and all 98 regions are not needed; the actual size of the area can be adjusted for different camera resolutions.
And a substep S102, calculating original signal data of the illumination calibration area under the current illumination, calculating iPPG alternating-current signal data from the original signal data, filtering the iPPG alternating-current signal, calculating the effective value AC of the filtered iPPG alternating-current signal in each period, calculating the average of the effective values AC, and calculating the signal-to-noise ratio SNR of the illumination calibration area under the current illumination.
In this embodiment, the original signal data of the illumination calibration area under the current illumination is calculated using formula (1) below, where W is the area width (along the x-axis), L is the area length (along the y-axis), and g(l, w) is the green-channel intensity value (0–255) of a pixel; formula (1) therefore represents the average intensity of all green-channel pixels in the selected area.
$$\mathrm{Ori\_mean} = \frac{1}{L \cdot W} \sum_{l=1}^{L} \sum_{w=1}^{W} g(l, w) \tag{1}$$
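Formula (1) can be sketched in plain Python (frames as nested lists of green-channel values; all names are illustrative):

```python
def mean_green(frame, rect):
    """Formula (1): average green-channel intensity over a rectangle.
    frame: 2-D list indexed [row][col] of green values (0-255);
    rect: (x1, y1, x2, y2) with x2/y2 exclusive."""
    x1, y1, x2, y2 = rect
    vals = [frame[y][x] for y in range(y1, y2) for x in range(x1, x2)]
    return sum(vals) / len(vals)

def ori_means(frames, rect):
    """The Ori_means sequence: formula (1) applied to every frame."""
    return [mean_green(f, rect) for f in frames]

frame = [[1, 2], [3, 4]]
```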
The iPPG alternating-current signal data are then obtained from the original signal data using formula (2) below, in which ⊛ denotes convolution, Ori_means is the sequence of Ori_mean values from formula (1) in frame order, L1 is the length of Ori_means, W1 is the sliding-window size (in this embodiment set to the sampling frequency, W1 = FPS = 40), one_W1 is an all-ones sequence of length W1, and one_L1 is an all-ones sequence of length L1.
$$x(i) = \frac{\mathrm{Ori\_means}(i)}{\left(\mathrm{Ori\_means} \circledast \mathrm{one}_{W1}\right)(i) \,/\, \left(\mathrm{one}_{L1} \circledast \mathrm{one}_{W1}\right)(i)} - 1 \tag{2}$$
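A detrending step in the spirit of formula (2), division by an edge-normalized moving average followed by subtracting 1, can be sketched as follows (a centered window is used here instead of a literal convolution; this reading of the equation is an assumption):

```python
def moving_average_detrend(x, w):
    """AC extraction: divide each sample by a local (edge-normalized)
    moving average of window size w and subtract 1, leaving only the
    pulsatile fraction of the trace."""
    n = len(x)
    out = []
    for i in range(n):
        lo = max(0, i - w // 2)
        hi = min(n, i + w // 2 + 1)
        local_mean = sum(x[lo:hi]) / (hi - lo)
        out.append(x[i] / local_mean - 1.0)
    return out

trace = [5.0] * 10  # a flat trace has no pulsatile (AC) component
```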
The iPPG alternating-current signal is then filtered and the effective value of each period is calculated as follows. Denote the signal before filtering by x(i) and the filtered signal by f(i); this embodiment uses a 3rd-order Butterworth band-pass filter with a pass band of 0.5–4 Hz. The effective value (AC) of the alternating-current signal in each period is then calculated using formula (3) below, where f(i) is the signal strength of the ith frame and Np is the number of sampling points in one period. The average AC value is calculated from the effective values AC of all periods.
Figure BDA0002395371090000111
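A sketch of the filtering and per-period effective-value (RMS) computation, assuming scipy's Butterworth implementation stands in for the filter described above; the function names and the synthetic 1 Hz test signal are illustrative:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FPS = 40  # sampling rate used in the embodiment

def bandpass(x, low=0.5, high=4.0, fs=FPS, order=3):
    """3rd-order Butterworth band-pass, 0.5-4 Hz (the heart-rate band)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)  # zero-phase filtering

def rms_per_period(f, np_points):
    """Effective (RMS) value of each period of np_points samples, formula (3)."""
    n_periods = len(f) // np_points
    return [np.sqrt(np.mean(f[k * np_points:(k + 1) * np_points] ** 2))
            for k in range(n_periods)]

t = np.arange(0, 10, 1 / FPS)
x = 5 + np.sin(2 * np.pi * 1.0 * t)           # 1 Hz pulse on a DC baseline
f = bandpass(x)                               # baseline removed, pulse kept
ac_values = rms_per_period(f, np_points=FPS)  # one value per 1-second period
ac_mean = np.mean(ac_values)
```

For a unit-amplitude sinusoid inside the pass band, the per-period RMS is close to 1/√2, which is a quick way to validate the chain.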
The signal-to-noise ratio of the illumination calibration area under the current illuminance is calculated with formula (4) below, where f(i) is the filtered signal, x(i) is the pre-filter signal, and N is the total number of frames in the video:

\[ \mathrm{SNR} = 10\log_{10}\!\left(\frac{\sum_{i=1}^{N} f(i)^2}{\sum_{i=1}^{N}\bigl(x(i)-f(i)\bigr)^2}\right) \tag{4} \]
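Under our reading of formula (4), the filtered signal is treated as signal and the filtering residual as noise; a minimal sketch (function name assumed):

```python
import numpy as np

def snr_db(x, f):
    """SNR per formula (4): filtered signal f(i) as signal power,
    residual x(i) - f(i) as noise power, expressed in dB."""
    x = np.asarray(x, dtype=float)
    f = np.asarray(f, dtype=float)
    noise = x - f
    return 10 * np.log10(np.sum(f ** 2) / np.sum(noise ** 2))

# residual 10x smaller than the signal in amplitude -> 20 dB
print(snr_db(np.ones(100) + 0.1, np.ones(100)))  # 20.0 (approximately)
```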
By executing the above steps, an AC average value and a signal-to-noise ratio SNR can be calculated under the current illuminance.
A substep S103 of controlling the illuminance to rise, and acquiring the average value of the effective values AC of the iPPG alternating current signal and the signal-to-noise ratio SNR of the illumination calibration area under each illuminance, wherein both quantities rise synchronously with the illuminance. When it is detected that the average value of the effective values AC has risen to a first value and its rising amplitude thereafter is lower than a first set amplitude, and that the signal-to-noise ratio SNR has risen to a second value and then begins to fall with a falling amplitude higher than a second set amplitude, the first value and the second value are obtained, and any illuminance value is selected from the illuminance interval corresponding to the first value and the second value as the reference illuminance of the current light source.
In this embodiment, the illuminance is set in turn to 17.2 lx, 69.3 lx, 141.1 lx, 220.7 lx, 290.2 lx, 483.4 lx, 562.1 lx, 662.2 lx, 762.1 lx, 818.6 lx, 865.8 lx, 880.8 lx, 920.9 lx and 1030 lx to control its rise, and substep S102 is performed under each illuminance, so that an AC average value and a signal-to-noise ratio SNR are obtained under each illuminance, that is, multiple sets of AC average values and SNR values corresponding to the illuminances. These data are plotted as a line graph, with the specific result shown in fig. 9; fig. 9 is a line graph of the AC average values and the pre- and post-filtering SNR data versus illuminance.
Experiments show that as the illuminance increases, the AC average value and the signal-to-noise ratio SNR both increase at first; the AC average value levels off once it has risen to a certain degree, while the SNR exhibits a sudden drop. The illuminance at which both the AC average value and the SNR are high is the optimal reference illuminance. In the data of this embodiment, the trend of the line graph shows that the AC average value and the SNR are both high at 850 lx, so 850 lx is selected as the reference illuminance.
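The selection rule above ("AC plateaus, SNR about to drop") can be sketched as follows; the threshold names, threshold values and function name are illustrative assumptions, not values given in the patent:

```python
import numpy as np

def pick_reference_illuminance(lux, ac_means, snrs,
                               ac_rise_thresh=0.02, snr_drop_thresh=0.1):
    """Hypothetical sketch: find where the AC mean stops rising
    appreciably (relative gain below ac_rise_thresh) and where the SNR
    starts to fall sharply (relative drop above snr_drop_thresh), then
    return an illuminance inside that interval."""
    ac_means = np.asarray(ac_means, dtype=float)
    snrs = np.asarray(snrs, dtype=float)
    ac_gain = np.diff(ac_means) / np.abs(ac_means[:-1])
    snr_drop = -np.diff(snrs) / np.abs(snrs[:-1])
    i_ac = int(np.argmax(ac_gain < ac_rise_thresh))     # AC plateau start
    i_snr = int(np.argmax(snr_drop > snr_drop_thresh))  # SNR drop start
    lo, hi = sorted((lux[i_ac], lux[i_snr]))
    return (lo + hi) / 2  # any value in the interval is acceptable

lux = [100, 300, 500, 700, 900]
ac = [1.0, 2.0, 3.0, 3.01, 3.02]   # plateaus after 500 lx
snr = [5.0, 8.0, 10.0, 10.5, 6.0]  # drops sharply after 700 lx
ref = pick_reference_illuminance(lux, ac, snr)
```

With the synthetic data above, the AC plateau begins at 500 lx and the SNR drop at 700 lx, so the midpoint 600 lx is returned.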
Based on the above steps, this embodiment further considers the influence of illuminance on signal quality: before calculating the position information of the centroids of the preselected areas, it determines the optimal illuminance that effectively optimizes signal quality under the current light source device and sets it as the reference illuminance. The present application thus proposes a method for selecting the illumination intensity that takes both signal intensity and signal-to-noise ratio into account, which is not disclosed in the prior art.
In one possible design, referring to fig. 10, the step S130 of calculating the position information of the centroid of each preselected area and acquiring, according to the position information of the centroids, f positive selection areas centered on the centroid of each preselected area comprises the following steps:
Substep S131: the position information of the centroid of each preselected area is calculated, with the result shown in fig. 11. Based on the f preselected areas, e prefetch areas of different sizes are set with the centroid of each preselected area as the center, the sizes of the e prefetch areas increasing.
In this embodiment, based on the 98 preselected areas, 9 prefetch areas of different sizes are set with the centroid of each preselected area as the center. The prefetch areas are square, with side lengths of 10, 14, 20, 28, 40, 56, 80, 98 and 113 respectively, in increasing order.
Substep S132: calculate the signal-to-noise ratios of the e prefetch areas under each of the f preselected areas; group the signal-to-noise ratios of the f prefetch areas of the same size, and calculate the average signal-to-noise ratio of each group to obtain e average signal-to-noise ratios.
In this embodiment, 9 prefetch areas are set for each preselected area, and the signal-to-noise ratio of every prefetch area of each of the 98 preselected areas is calculated, yielding 882 signal-to-noise ratio values (98 × 9). The signal-to-noise ratio values of the 98 prefetch areas of the same size are grouped (for example, the values of the 98 prefetch areas with side length 10 form one group), and the average signal-to-noise ratio of each group is calculated; by analogy, 9 groups of average signal-to-noise ratio data are obtained. Fig. 12 shows the result of plotting the 9 average signal-to-noise ratio values against the prefetch-area size; fig. 12 is a line graph of average signal-to-noise ratio versus prefetch-area size.
Substep S133: when it is detected that the average signal-to-noise ratio has risen to a third value and its rising amplitude thereafter is lower than a third set amplitude, the size of the prefetch area corresponding to that average signal-to-noise ratio is taken as the size of the positive selection area.
As can be seen from the figure, when the side length of the prefetch area reaches 80, the rise of the average signal-to-noise ratio levels off, so this embodiment sets the positive selection area to a square area of 80 × 80.
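The plateau detection in substep S133 can be sketched as follows; the relative-gain threshold and the function name are illustrative assumptions:

```python
import numpy as np

def pick_region_size(side_lengths, mean_snrs, rise_thresh=0.02):
    """Return the smallest prefetch-area side length after which the
    average SNR stops rising appreciably (the plateau the embodiment
    observes at side length 80)."""
    mean_snrs = np.asarray(mean_snrs, dtype=float)
    gains = np.diff(mean_snrs) / np.abs(mean_snrs[:-1])  # relative rise
    idx = int(np.argmax(gains < rise_thresh))            # first flat step
    return side_lengths[idx]

sides = [10, 14, 20, 28, 40, 56, 80, 98, 113]
snrs = [2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 8.05, 8.06]  # flattens at 80
best_side = pick_region_size(sides, snrs)
```

With the synthetic curve above, the first step whose relative rise falls below the threshold is the one from side length 80 to 98, so 80 is returned.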
Based on the above steps, this embodiment sets a preferred size for the selected region by further considering the influence of region size on signal quality and computational load, while avoiding regions so large that they mask signal characteristics that may exist in small face regions. This method of selecting the size of the cropped region is not disclosed in the prior art.
In one possible design, referring to fig. 13, the step S140 of extracting the iPPG signal of each positive selection area and selecting the positive selection areas whose iPPG signals meet a preset condition as target areas comprises the following steps:
Substep S141: calculate the standard deviation value and the signal-to-noise ratio value of the iPPG signal of each positive selection area.
Substep S142: calculate a quality evaluation value Q from the standard deviation value and the signal-to-noise ratio value.
Substep S143: sort the quality evaluation values Q of all positive selection areas in descending order; the preset condition is that the quality evaluation value Q ranks in the top w.
In this embodiment, the quality evaluation value Q is calculated with formula (5) below, where STD is the standard deviation value and SNR is the signal-to-noise ratio value:

Q = mean(SNR, −STD) = (SNR − STD) / 2    (5)
After the quality evaluation values Q are sorted, the positive selection areas ranked in the top 50 are selected as target areas, the iPPG signals in these areas are extracted, and the next stage of analysis is carried out. In addition, within the top-50 target areas, 38 symmetric sub-regions can be selected at positions where symmetric points exist on the left and right sides of the face, and the iPPG signals in these symmetric sub-regions can be extracted for the next stage of analysis. The position information and numerical labels of the symmetric sub-regions are shown in fig. 14.
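The Q-value ranking of formula (5) and the top-w selection can be sketched as follows; the function name and the toy inputs are illustrative:

```python
import numpy as np

def select_target_regions(snrs, stds, w=50):
    """Rank positive selection areas by Q = (SNR - STD) / 2, formula (5),
    and return the indices of the top-w areas."""
    q = (np.asarray(snrs, dtype=float) - np.asarray(stds, dtype=float)) / 2.0
    order = np.argsort(-q)  # descending by Q
    return order[:w].tolist()

# three regions: Q values are 4.0, 2.0 and 1.0, so regions 0 and 1 win
top = select_target_regions([10.0, 5.0, 8.0], [2.0, 1.0, 6.0], w=2)
print(top)  # [0, 1]
```

A high SNR raises Q while a high standard deviation (unstable signal) lowers it, so the ranking favours strong, stable regions.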
Based on the above steps, the present application proposes a signal quality evaluation index, the Q value, and selects face regions that meet the experimental requirements as target areas according to the Q-value ranking, which is not disclosed in the prior art.
Fig. 14 is a schematic functional-module diagram of an analysis device based on automatic region segmentation and selection according to an embodiment of the present application. The embodiment may divide the device into functional modules according to the foregoing method embodiment: for example, one functional module may be divided for each function, or two or more functions may be integrated into one processing module. An integrated module can be implemented in hardware or as a software functional module. It should be noted that the division of modules in the present application is schematic and is only a logical function division; other division manners are possible in actual implementation. When each functional module is divided according to its function, the device shown in fig. 14 is only a schematic device diagram. The analysis device based on automatic region segmentation and selection may include a collection fixed point module 210, a combination preselection module 220, a secondary region selection module 230, a tertiary region selection module 240 and a region analysis module 250; the functions of these modules are described in detail below.
The acquisition fixed point module 210 is configured to acquire a face image to be processed, and acquire n key feature points on the face image and position information of each key feature point based on a preset algorithm.
The combination preselection module 220 is used for dividing f preselected areas on the face image according to the position information of the n key feature points, the contour line of each preselected area being formed by connecting at least three key feature points; the face image comprises a plurality of face organs and areas outside the face organs, and the areas outside the face organs are completely covered by the f preselected areas.
And a secondary region selection module 230, configured to calculate the position information of the centroid of each preselected area and acquire, according to the position information of the centroids, f positive selection areas centered on the centroid of each preselected area.
And a tertiary region selection module 240, configured to extract the iPPG signal of each positive selection area and select the positive selection areas whose iPPG signals meet a preset condition as target areas.
And the area analysis module 250 is configured to analyze the target area and output an analysis result.
In a possible design, referring to fig. 15, which is a schematic diagram of the analysis device in another embodiment, the device further includes, before the secondary region selection module 230 performs data processing, an illuminance setting module 200 for setting the reference illuminance under the current light source device. That is, the device may include a collection fixed point module 210, a combination preselection module 220, an illuminance setting module 200, a secondary region selection module 230, a tertiary region selection module 240 and a region analysis module 250. Referring to fig. 16, a schematic structural diagram of the illuminance setting module 200, the illuminance setting module 200 may include a calibration selection module 201, a data calculation module 202 and a data processing module 203; the functions of these modules are described in detail below.
And a calibration selecting module 201, configured to select any area on the face as an illumination calibration area, and obtain data of each frame of illumination calibration area under the current illumination.
The data calculation module 202 is configured to calculate original signal data of the illumination calibration area under the current illumination, calculate iPPG alternating current signal data according to the original signal data, filter the iPPG alternating current signal, calculate an iPPG alternating current signal effective value AC of each period after filtering, calculate an average value of the iPPG alternating current signal effective values AC, and calculate a signal-to-noise ratio SNR of the illumination calibration area under the current illumination.
The data processing module 203 is used for controlling the illuminance to rise and acquiring the average value of the effective values AC of the iPPG alternating current signal and the signal-to-noise ratio SNR of the illumination calibration area under each illuminance, both of which rise synchronously with the illuminance; when it is detected that the average value of the effective values AC has risen to a first value and its rising amplitude thereafter is lower than a first set amplitude, and that the signal-to-noise ratio SNR has risen to a second value and then begins to fall with a falling amplitude higher than a second set amplitude, the first value and the second value are obtained, and any illuminance value is selected from the illuminance interval corresponding to the first value and the second value as the reference illuminance of the current light source.
In a possible design, referring to fig. 17, a schematic structural diagram of the secondary region selection module 230, the secondary region selection module 230 may further include a prefetch setting module 231, a prefetch calculation module 232 and a positive selection setting module 233; the functions of these modules are described in detail below.
The prefetch setting module 231 is configured to set, based on the f preselected areas, e prefetch areas of different sizes, respectively, with the center of mass of each preselected area as the center, where the sizes of the e prefetch areas are increasing.
A prefetch calculation module 232, configured to calculate the signal-to-noise ratios of the e prefetch areas under each of the f preselected areas respectively, group the signal-to-noise ratios of the f prefetch areas of the same size, and calculate the average signal-to-noise ratio of each group to obtain e average signal-to-noise ratios.
And a positive selection setting module 233, configured to, when it is detected that the average signal-to-noise ratio has risen to the third value and its rising amplitude thereafter is lower than a third set amplitude, take the size of the prefetch area corresponding to that average signal-to-noise ratio as the size of the positive selection area.
In a possible design, referring to fig. 18, fig. 18 is a schematic structural diagram of the third selection module 240, and the third selection module 240 may further include a numerical calculation module 241, an evaluation calculation module 242, and a goal setting module 243, and the functions of the functional modules of the third selection module 240 are described in detail below.
And the numerical value calculating module 241 is used for calculating the standard deviation value and the signal-to-noise ratio value of the iPPG signal of each positive selection area.
And an evaluation calculating module 242, configured to calculate a quality evaluation value Q according to the standard deviation value and the signal-to-noise ratio value.
And the target setting module 243 is configured to sort the quality evaluation values Q of all positive selection areas in descending order, the preset condition being that the quality evaluation value Q ranks in the top w.
The server comprises a machine-readable storage medium and a processor, wherein the machine-readable storage medium stores machine-executable instructions, and when the processor executes the machine-executable instructions, the server performs the method in any possible design manner.
The machine-readable storage medium, which is one type of computer-readable storage medium, may be used to store software programs, computer-executable programs, and modules.
The machine-readable storage medium may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and application programs required for at least one function, and the data storage area may store data created according to the use of the terminal, etc. Furthermore, the machine-readable storage medium may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories, wherein the non-volatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), or a flash memory.
The computer instructions may be stored in or transmitted from one computer-readable storage medium to another computer-readable storage medium, e.g., from one website, computer, server, or data center to another website, computer, server, or data center via a wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) manner.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the embodiments of the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the embodiments of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to encompass such modifications and variations.

Claims (11)

1. An analysis method based on automatic segmentation and selection of regions, comprising:
acquiring a face image to be processed, and acquiring n key feature points on the face image and position information of each key feature point based on a preset algorithm;
dividing f preselected regions on the face image according to the position information of the n key feature points, wherein the contour line of each preselected region is formed by connecting at least three key feature points; the face image comprises a plurality of face organs and areas outside the face organs, and the areas outside the face organs are completely covered by the f preselected regions;
calculating the position information of the center of mass of each preselected area, and acquiring f positive selection areas taking the center of mass of each preselected area as the center according to the position information of the center of mass;
extracting the iPPG signal of each positive selection area, and selecting the positive selection areas of which the iPPG signals meet the preset conditions as target areas;
and analyzing the target area and outputting an analysis result.
2. The method as claimed in claim 1, further comprising a step of setting a reference illumination of the current light source device before the step of calculating the position information of the centroid of each of the preselected areas, wherein the step of setting the reference illumination of the current light source device comprises:
selecting any area on a human face as an illumination calibration area, and acquiring data of each frame of the illumination calibration area under the current illumination;
calculating original signal data of the illumination calibration area under the current illuminance, calculating iPPG alternating current signal data from the original signal data, filtering the iPPG alternating current signal, calculating the effective value AC of the filtered iPPG alternating current signal in each period, calculating the average value of the effective values AC of the iPPG alternating current signal, and calculating the signal-to-noise ratio SNR of the illumination calibration area under the current illuminance;
controlling the illuminance to rise, and acquiring the average value of the effective values AC of the iPPG alternating current signal and the signal-to-noise ratio SNR of the illumination calibration area under each illuminance, wherein both quantities rise synchronously with the illuminance; when it is detected that the average value of the effective values AC has risen to a first value and its rising amplitude thereafter is lower than a first set amplitude, and that the signal-to-noise ratio SNR has risen to a second value and then begins to fall with a falling amplitude higher than a second set amplitude, obtaining the first value and the second value, and selecting any illuminance value from the illuminance interval corresponding to the first value and the second value as the reference illuminance of the current light source.
3. The method of claim 2, wherein a rectangular area in the middle of the forehead of the face is selected as the illumination calibration area.
4. The analysis method based on automatic segmentation and selection of regions according to claim 1 or 2, wherein the step of calculating the position information of the centroid of each of the preselected regions, and acquiring f positive selected regions centered at the centroid of each of the preselected regions according to the position information of the centroid comprises:
setting e pre-fetching areas with different sizes based on the f pre-selecting areas by taking the center of mass of each pre-selecting area as the center, wherein the sizes of the e pre-fetching areas are increased;
respectively calculating the signal-to-noise ratios of the e prefetch areas under each of the f preselected areas; grouping the signal-to-noise ratios of the f prefetch areas of the same size, and calculating the average signal-to-noise ratio of each group to obtain e average signal-to-noise ratios;
and when the average signal-to-noise ratio value is detected to rise to a third value and the rising amplitude is lower than a third set amplitude after the average signal-to-noise ratio value rises to the third value, the size of the pre-fetching area corresponding to the average signal-to-noise ratio value is the size of the positive selection area.
5. The analysis method based on the automatic region segmentation and selection according to claim 1 or 2, wherein the step of extracting the iPPG signal of each selected region and selecting the selected region of the iPPG signal satisfying a preset condition as the target region comprises:
calculating the standard deviation value and the signal-to-noise ratio value of the iPPG signal of each positive selection area;
calculating a quality evaluation value Q according to the standard deviation value and the signal-to-noise ratio value;
and sorting the quality evaluation values Q of all the positive selection areas in a descending order, wherein the preset condition is that the quality evaluation values Q are sorted in the top w bits.
6. An analysis method device based on automatic region segmentation and selection is characterized by comprising the following steps:
the acquisition fixed point module is used for acquiring a face image to be processed and acquiring n key feature points on the face image and position information of each key feature point based on a preset algorithm;
the combination preselection module is used for dividing f preselected areas on the face image according to the position information of the n key feature points, the contour line of each preselected area being formed by connecting at least three key feature points; the face image comprises a plurality of face organs and areas outside the face organs, and the areas outside the face organs are completely covered by the f preselected areas;
the secondary area selection module is used for calculating the position information of the center of mass of each preselected area and acquiring f positive selection areas taking the center of mass of each preselected area as the center according to the position information of the center of mass;
the tertiary region selection module is used for extracting the iPPG signal of each positive selection area and selecting the positive selection areas of which the iPPG signals meet the preset conditions as target areas;
and the area analysis module is used for analyzing the target area and outputting an analysis result.
7. The apparatus according to claim 6, further comprising an illuminance setting module, before the secondary region selection module, for setting a reference illuminance under a current light source device; the illuminance setting module includes:
the calibration area selection module is used for selecting any area on the face as an illumination calibration area and acquiring data of each frame of the illumination calibration area under the current illumination;
the data calculation module is used for calculating original signal data of the illumination calibration area under the current illuminance, calculating iPPG alternating current signal data from the original signal data, filtering the iPPG alternating current signal, calculating the effective value AC of the filtered iPPG alternating current signal in each period, calculating the average value of the effective values AC of the iPPG alternating current signal, and calculating the signal-to-noise ratio SNR of the illumination calibration area under the current illuminance;
the data processing module is used for controlling the illuminance to rise and acquiring the average value of the effective values AC of the iPPG alternating current signal and the signal-to-noise ratio SNR of the illumination calibration area under each illuminance, both of which rise synchronously with the illuminance; when it is detected that the average value of the effective values AC has risen to a first value and its rising amplitude thereafter is lower than a first set amplitude, and that the signal-to-noise ratio SNR has risen to a second value and then begins to fall with a falling amplitude higher than a second set amplitude, the first value and the second value are obtained, and any illuminance value is selected from the illuminance interval corresponding to the first value and the second value as the reference illuminance of the current light source.
8. The device according to claim 6 or 7, wherein the secondary region selection module comprises:
the pre-fetching setting module is used for respectively setting e pre-fetching areas with different sizes based on the f pre-selecting areas by taking the center of mass of each pre-selecting area as the center, wherein the sizes of the e pre-fetching areas are increased;
a prefetch calculation module for respectively calculating the signal-to-noise ratios of the e prefetch areas under each of the f preselected areas, grouping the signal-to-noise ratios of the f prefetch areas of the same size, and calculating the average signal-to-noise ratio of each group to obtain e average signal-to-noise ratios;
and the positive selection setting module is used for setting the size of the pre-fetching area corresponding to the average signal-to-noise ratio value as the size of the positive selection area when the average signal-to-noise ratio value is detected to rise to a third value and the rising amplitude is lower than a third set amplitude after the average signal-to-noise ratio value rises to the third value.
9. The device according to claim 6 or 7, wherein the third selecting module comprises:
the numerical value calculation module is used for calculating the standard deviation value and the signal-to-noise ratio value of the iPPG signal of each positive selection area;
the evaluation calculation module is used for calculating a quality evaluation value Q according to the standard deviation value and the signal-to-noise ratio value;
and the target setting module is used for sorting the quality evaluation values Q of all positive selection areas in descending order, the preset condition being that the quality evaluation value Q ranks in the top w.
10. A server for the analysis method based on automatic region segmentation and selection, the server comprising a machine-readable storage medium and a processor, wherein the machine-readable storage medium stores machine-executable instructions, and the processor executes the machine-executable instructions so that the server implements the analysis method based on automatic region segmentation and selection according to any one of claims 1 to 5.
11. A readable storage medium having stored therein machine-executable instructions which, when executed, implement the analysis method based on automatic region segmentation and selection according to any one of claims 1 to 5.
CN202010129351.8A 2020-02-28 2020-02-28 Analysis method, device and server based on automatic segmentation and selection of regions Active CN111445477B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010129351.8A CN111445477B (en) 2020-02-28 2020-02-28 Analysis method, device and server based on automatic segmentation and selection of regions


Publications (2)

Publication Number Publication Date
CN111445477A true CN111445477A (en) 2020-07-24
CN111445477B CN111445477B (en) 2023-07-25

Family

ID=71655752

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010129351.8A Active CN111445477B (en) 2020-02-28 2020-02-28 Analysis method, device and server based on automatic segmentation and selection of regions

Country Status (1)

Country Link
CN (1) CN111445477B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180042486A1 (en) * 2015-03-30 2018-02-15 Tohoku University Biological information measuring apparatus and biological information measuring method
US20180204111A1 (en) * 2013-02-28 2018-07-19 Z Advanced Computing, Inc. System and Method for Extremely Efficient Image and Pattern Recognition and Artificial Intelligence Platform
CN109589101A (en) * 2019-01-16 2019-04-09 四川大学 A kind of contactless physiological parameter acquisition methods and device based on video
CN109977858A (en) * 2019-03-25 2019-07-05 北京科技大学 A kind of heart rate detection method and device based on image analysis
CN110647815A (en) * 2019-08-25 2020-01-03 上海贝瑞电子科技有限公司 Non-contact heart rate measurement method and system based on face video image



Similar Documents

Publication Publication Date Title
CN101599175B (en) Detection method for determining alteration of shooting background and image processing device
JP4307496B2 (en) Facial part detection device and program
JP4997252B2 (en) How to identify the illumination area in an image
CN105744268A (en) Camera shielding detection method and device
EP2188779A1 (en) Extraction method of tongue region using graph-based approach and geometric properties
CN107895362B (en) Machine vision method for detecting quality of miniature wiring terminal
WO2013111140A2 (en) Eye tracking
CN110913147B (en) Exposure adjusting method and device and electronic equipment
CN107347151A (en) binocular camera occlusion detection method and device
US20080069482A1 (en) Image processor
CN109273074B (en) Network model adjusting method and equipment for medical image
AU2020103260A4 (en) Rice blast grading system and method
CN106815575A (en) The optimum decision system and its method of Face datection result set
DE112011105435B4 (en) eyelid detection device
CN110827273A (en) Tea disease detection method based on regional convolution neural network
CN109028237A (en) The kitchen ventilator of wind speed adjusting is carried out based on dual area Image Acquisition
CN115171024A (en) Face multi-feature fusion fatigue detection method and system based on video sequence
CN110248113B (en) Image processing apparatus, image processing method, and computer-readable recording medium
CN111445477A (en) Analysis method and device based on automatic region segmentation and selection and server
CN116563276B (en) Chemical fiber filament online defect detection method and detection system
KR101908785B1 (en) Tongue region extraction method and image processing apparatus for performing the method
CN111369497B (en) Walking type tree fruit continuous counting method and device
CN108564564A (en) Based on the medical image cutting method for improving fuzzy connectedness and more seed points
CN106937864B (en) Skin tissue estimation method and system using same
CN115222638B (en) Neural network model-based retinal blood vessel image segmentation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant