CN111743524A - Information processing method, terminal and computer readable storage medium - Google Patents


Info

Publication number
CN111743524A
Authority
CN
China
Prior art keywords
signal curve
identification
target
curve
signal
Prior art date
Legal status
Pending
Application number
CN202010568303.9A
Other languages
Chinese (zh)
Inventor
陈旭杰
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN202010568303.9A
Publication of CN111743524A
Legal status: Pending

Classifications

    • A61B 5/024 Detecting, measuring or recording pulse rate or heart rate
    • A61B 5/725 Details of waveform analysis using specific filters therefor, e.g. Kalman or adaptive filters
    • A61B 5/7257 Details of waveform analysis characterised by using Fourier transforms
    • A61B 5/7267 Classification of physiological signals or data, e.g. using neural networks, involving training the classification device
    • G06V 40/165 Human face detection; localisation; normalisation using facial parts and geometric relationships
    • G06V 40/171 Feature extraction: local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
    • G06V 40/172 Human face classification, e.g. identification
    • G06F 2218/02 Pattern recognition adapted for signal processing: preprocessing

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Surgery (AREA)
  • Physiology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Cardiology (AREA)
  • Mathematical Physics (AREA)
  • Geometry (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the invention discloses an information processing method comprising the following steps: determining an identification region of a face of interest in a frame image that contains face information of an object to be detected; performing spatial pixel averaging over at least two primary color channels of the identification region to obtain an identification signal of the region, and determining an identification signal curve from the identification signals; performing linear selection and blind source separation on the identification signal curve to obtain a target signal curve, where the target signal curve represents the heart rate of the object to be detected; and processing the target signal curve based on a signal transform and a preset threshold to obtain a heart rate value of the object to be detected. Embodiments of the invention also disclose a terminal and a computer-readable storage medium.

Description

Information processing method, terminal and computer readable storage medium
Technical Field
The present invention relates to the field of digital image processing technologies, and in particular, to an information processing method, a terminal, and a computer-readable storage medium.
Background
The heart rate is one of the most important pieces of physiological information of the human body and reflects many important physiological signs, such as health condition. With the development of vision technology, non-contact heart rate measurement has become a reality, and non-contact measurement based on face video in particular has attracted wide attention. However, in the related art, motion artifacts readily occur during non-contact heart rate measurement based on face video, which degrades the accuracy experienced by the user.
Disclosure of Invention
To solve the above technical problems, embodiments of the present invention provide an information processing method, a terminal, and a computer-readable storage medium, so as to effectively mitigate the motion artifacts present in face-video-based non-contact heart rate measurement in the related art and improve its accuracy for the user.
To achieve the above object, the technical solution of the invention is implemented as follows:
an information processing method, the method comprising:
determining an identification region of an interested face corresponding to a frame image based on the frame image containing the face information of the object to be detected;
carrying out spatial pixel averaging on at least two primary color channels of the identification area to obtain an identification signal of the identification area, and determining an identification signal curve based on the identification signal of the identification area;
performing linear selection and blind source separation on the identification signal curve to obtain a target signal curve; the target signal curve represents the heart rate of the object to be detected;
and processing the target signal curve based on signal transformation and a preset threshold value to obtain a heart rate value of the object to be detected.
Optionally, the determining, based on the frame image containing the face information of the object to be detected, an identification region of the face of interest corresponding to the frame image includes:
acquiring a multi-frame target image containing face information of an object to be detected;
carrying out image preprocessing on the multiple frames of target images to obtain key point information corresponding to each frame of image;
determining an identification area of the interested face corresponding to each frame of image based on the key point information corresponding to each frame of image; wherein the identified regions include at least a first identified region characterizing the left and right cheeks and a second identified region characterizing the bridge of the nose.
Optionally, the acquiring a multi-frame target image including face information of an object to be detected includes:
acquiring a video containing the face information of the object to be detected;
acquiring a plurality of target videos with preset time length from the videos in a mode of sliding a video window according to a preset step length;
and acquiring the multi-frame target image corresponding to each target video.
Optionally, the processing the target signal curve based on the signal transformation and a preset threshold to obtain a heart rate value of the object to be detected includes:
processing the target signal curve based on signal transformation and a preset threshold value to obtain a reference heart rate value corresponding to each target video;
and processing the plurality of reference heart rate values by adopting an average deviation algorithm to obtain the heart rate value of the object to be detected.
Optionally, the performing linear selection and blind source separation on the identification signal curve to obtain a target signal curve includes:
performing linear selection on the identification signal curve to obtain a basic signal curve;
and carrying out blind source separation on the basic signal curve to obtain the target signal curve.
Optionally, the linearly selecting the identification signal curve to obtain a basic signal curve includes:
carrying out segmentation processing on the identification signal curves to obtain a plurality of first signal curves;
selecting variances of the first signal curves to obtain second signal curves, and generating a first target signal curve based on the second signal curves;
and removing a nonlinear part in the first target signal curve based on spline fitting to obtain the basic signal curve.
Optionally, the blind source separation on the basic signal curve to obtain the target signal curve includes:
filtering the basic signal curve to obtain a reference signal curve;
and carrying out iterative processing on the reference signal curve by adopting an independent component analysis method to obtain the target signal curve.
Optionally, the processing the target signal curve based on the signal transformation and a preset threshold includes:
carrying out Fourier transform on the target signal curve to obtain a target signal frequency domain power value;
and processing the target signal frequency domain power value based on a preset threshold value.
A terminal, the terminal comprising: a processor, a memory, and a communication bus;
the communication bus is used for realizing communication connection between the processor and the memory;
the processor is configured to execute a program of the information processing method stored in the memory to realize the steps of:
determining an identification region of an interested face corresponding to a frame image based on the frame image containing the face information of the object to be detected;
carrying out spatial pixel averaging on at least two primary color channels of the identification area to obtain an identification signal of the identification area, and determining an identification signal curve based on the identification signal of the identification area;
performing linear selection and blind source separation on the identification signal curve to obtain a target signal curve; the target signal curve represents the heart rate of the object to be detected;
and processing the target signal curve based on signal transformation and a preset threshold value to obtain a heart rate value of the object to be detected.
A computer-readable storage medium storing one or more programs executable by one or more processors to implement the steps of the above information processing method.
The information processing method, terminal, and computer-readable storage medium provided by the embodiments of the invention determine the identification region of the face of interest in a frame image containing face information of the object to be detected; perform spatial pixel averaging over at least two primary color channels of the identification region to obtain identification signals, from which an identification signal curve is determined; perform linear selection and blind source separation on the identification signal curve to obtain a target signal curve representing the heart rate of the object to be detected; and process the target signal curve based on a signal transform and a preset threshold to obtain the heart rate value. Because the identification signals are obtained by spatially averaging at least two primary color channels of the face region of interest and the target signal curve is extracted through linear selection and blind source separation, the motion-artifact problem of face-video-based non-contact heart rate measurement in the related art is effectively mitigated, and the accuracy of the measurement for the user is improved.
Drawings
Fig. 1 is a schematic flowchart of an information processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating another information processing method according to an embodiment of the present invention;
fig. 3 is a schematic diagram illustrating division of a human face area according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating another information processing method according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating another information processing method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a method for measuring a heart rate of a human body according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
It should be appreciated that reference throughout this specification to "an embodiment of the present invention" or "an embodiment described previously" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases "in an embodiment of the present invention" or "in the foregoing embodiments" in various places throughout the specification do not necessarily all refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the various embodiments of the present invention, the sequence numbers of the processes do not imply an execution order; the execution order of each process is determined by its function and inherent logic, and constitutes no limitation on the implementation of the embodiments. Likewise, the serial numbers of the embodiments are merely for description and do not represent the relative merits of the embodiments.
Unless otherwise specified, any step in the embodiments of the present invention is executed by the terminal, and may specifically be executed by the processor of the terminal. It should also be noted that the embodiments of the present invention do not limit the order in which the terminal executes the following steps, and the analysis methods used on the data in different steps may be the same or different.
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
An embodiment of the present invention provides an information processing method, which is applied to a terminal, and as shown in fig. 1, the method includes the following steps:
step 101, determining an identification area of an interested human face corresponding to a frame image based on the frame image containing the face information of the object to be detected.
In the embodiment of the invention, the terminal determines the identification region of the face of interest corresponding to a frame image containing face information of the object to be detected, where the terminal may be a server, a digital television, a desktop computer, or the like. In a feasible implementation, the terminal may also be a mobile or handheld mobile terminal, such as a mobile phone, notebook computer, tablet computer, palmtop computer, personal digital assistant, portable media player, smart speaker, navigation device, wearable device, smart bracelet, or vehicle-mounted computer. The present invention does not limit the type of the terminal.
In the embodiment of the present invention, the frame image containing the face information of the object to be detected may be obtained by the terminal's own acquisition device, which may specifically be a camera module of the terminal; when the camera module acquires the frame images, the camera frame rate may be set, for example, to 10 frames per second or 30 frames per second, and this application does not limit the frame rate setting. Alternatively, the terminal may obtain the frame image containing the face information of the object to be detected through a screen-capture operation. Meanwhile, in the embodiment of the present invention, the object to be detected may be one object, two objects, or more than two objects having face information.
A frame image containing the face information of the object to be detected is a still picture containing face information, where a frame is the smallest-unit single picture in image animation. In the embodiment of the present invention, the face information of the object to be detected at least includes feature points of the face, i.e., points that can express the face contour and the five sense organs of the object to be detected, including at least feature information of the left and right eyes, mouth corners, nose wings, and the like.
In the embodiment of the present invention, a Region of Interest (ROI) is used to represent the identification region of the face of interest corresponding to a frame image, where the ROI may specifically refer to several landmark regions of the face, for example the cheek region, the nose-wing region, the eye region, and so on.
In an embodiment of the present invention, the terminal may delineate a region to be processed from the frame image containing the face information of the object to be detected in a manner of a square frame, a circle, an ellipse, an irregular polygon, and the like, that is, the ROI may appear in a manner of a square frame, a circle, an ellipse, an irregular polygon, and the like. In the embodiment of the present invention, the specific expression of the determined ROI is not limited at all.
In a possible implementation manner, the terminal may process the frame image containing the face information of the object to be detected in various manners to obtain the ROI, that is, the terminal may perform corresponding processing on each frame image containing the face information of the object to be detected in various manners to obtain the ROI corresponding to each frame image.
102, carrying out spatial pixel averaging on at least two primary color channels of the identification area to obtain an identification signal of the identification area, and determining an identification signal curve based on the identification signal of the identification area.
In the embodiment of the present invention, the spatial pixel averaging over at least two primary color channels of the identification region may specifically be performed by the terminal over at least two primary color channels of the ROIs corresponding to the multiple frame images, where the at least two primary colors are at least two of the three optical primary colors Red, Green, and Blue (RGB). In a possible implementation, the terminal may spatially average the R and G channels of the ROI, the R and B channels, the G and B channels, or all three RGB channels, thereby obtaining the corresponding identification signal of the ROI, where the identification signal corresponds to the RGB values mentioned above; in other words, RGB values are obtained for the ROIs of the multiple frame images. In the following embodiments, the case where all three primary color channels of the ROI are spatially averaged to obtain the three signal values, i.e., the RGB values, is taken as an example for detailed description.
In the embodiment of the invention, the terminal performs the same operation on the ROI corresponding to each frame image, i.e., spatial pixel averaging of each of the three primary color channels, to obtain the three-primary-color RGB values corresponding to each frame image; the RGB values of the successive frame images are then assembled to generate the identification signal curve. The generated identification signal curve thus comprises three curves: a curve Sr corresponding to the R values, a curve Sg corresponding to the G values, and a curve Sb corresponding to the B values.
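The spatial pixel averaging of step 102 can be sketched in pure Python as follows. This is a minimal illustration only; `roi_frames`, a nested list of (R, G, B) tuples, is a hypothetical input format, since in practice the frames would come from a camera or a video library:

```python
def roi_channel_means(roi_frames):
    """Spatially average each RGB channel of an ROI for every frame.

    roi_frames: list of frames; each frame is a list of rows of (R, G, B)
    pixels. Returns three lists (Sr, Sg, Sb) holding one mean value per
    frame, i.e. the three identification signal curves.
    """
    sr, sg, sb = [], [], []
    for frame in roi_frames:
        pixels = [px for row in frame for px in row]  # flatten rows
        n = len(pixels)
        sr.append(sum(px[0] for px in pixels) / n)
        sg.append(sum(px[1] for px in pixels) / n)
        sb.append(sum(px[2] for px in pixels) / n)
    return sr, sg, sb
```

Feeding in the ROIs of consecutive frames yields the curves Sr, Sg, and Sb described above, one sample per frame.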
And 103, performing linear selection and blind source separation on the identification signal curve to obtain a target signal curve.
Wherein the target signal curve represents the heart rate of the object to be detected.
In the embodiment of the invention, linear selection and blind source separation are performed on the signal curves corresponding to the acquired RGB values, i.e., the curves Sr, Sg, and Sb, to obtain the target signal curve.
In a possible embodiment, the same operations may be performed on the curves Sr, Sg, and Sb: linear selection, i.e., variance selection, curve concatenation, and removal of nonlinear and linear trends, is performed synchronously on the three signal curves, and after the linear selection a blind source separation operation is performed to obtain the target signal curves Srr, Sgg, and Sbb corresponding to the identification signal curves Sr, Sg, and Sb. The target signal curves Srr, Sgg, and Sbb include the signals that are significant for heart rate variation.
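One plausible reading of the variance-selection step can be sketched as follows, assuming that segments whose variance exceeds a threshold are treated as motion-corrupted and discarded; `seg_len` and `var_threshold` are illustrative parameters not specified by the patent, and the spline detrending and blind-source-separation stages are omitted:

```python
from statistics import pvariance

def variance_select(signal, seg_len, var_threshold):
    """Split a signal curve into fixed-length segments and keep only the
    low-variance ones, concatenating them into a first target signal curve.

    Segments with large variance are assumed here to be corrupted by motion
    artifacts; this selection criterion is an assumption, as the patent only
    names 'variance selection' without defining it.
    """
    kept = []
    for i in range(0, len(signal) - seg_len + 1, seg_len):
        seg = signal[i:i + seg_len]
        if pvariance(seg) <= var_threshold:
            kept.extend(seg)  # concatenate surviving segments
    return kept
```

The surviving curve would then be detrended by spline fitting and passed to an independent component analysis routine, per the steps named in the text.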
And 104, processing the target signal curve based on the signal transformation and a preset threshold value to obtain a heart rate value of the object to be detected.
In the embodiment of the invention, the terminal performs a Fourier transform on the obtained target signal curves Srr, Sgg, and Sbb to obtain a frequency-domain power spectrum containing the heart rate signal component, and screens the obtained spectrum based on a preset threshold to obtain the heart rate value of the object to be detected. In practice, the human heart rate generally lies in the range 0.75-2.5 Hz; a signal component containing the heart rate signal exhibits a significant single peak at the frequency of the actual heart rate, while the peaks at other frequencies in that component's spectrum are far lower than the peak at the heart rate frequency. Based on this, a confidence value, i.e., a preset threshold, is set and applied to the obtained frequency-domain power spectrum containing the heart rate signal component, so that the heart rate value of the object to be detected is finally obtained.
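The transform-and-threshold procedure can be sketched with a plain discrete Fourier transform over the 0.75-2.5 Hz band; the `confidence` multiplier below is a stand-in for the patent's preset threshold, whose exact form is not disclosed:

```python
import cmath
import math

def estimate_heart_rate(signal, fs, f_lo=0.75, f_hi=2.5, confidence=2.0):
    """Estimate heart rate (bpm) from a target signal curve.

    Searches the physiological band for the dominant spectral peak and
    accepts it only if its power exceeds `confidence` times the mean band
    power (an assumed thresholding rule). Returns None if no peak clears
    the threshold.
    """
    n = len(signal)
    mean = sum(signal) / n
    centered = [x - mean for x in signal]  # remove DC component
    powers = {}
    for k in range(1, n // 2):
        f = k * fs / n  # frequency of DFT bin k
        if f_lo <= f <= f_hi:
            coeff = sum(centered[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))
            powers[f] = abs(coeff) ** 2
    if not powers:
        return None
    peak_f = max(powers, key=powers.get)
    band_mean = sum(powers.values()) / len(powers)
    if powers[peak_f] < confidence * band_mean:
        return None  # no sufficiently significant single peak
    return 60.0 * peak_f  # Hz -> beats per minute
```

For a clean 1.5 Hz component sampled at 30 fps, the sketch returns 90 bpm, matching the expected conversion of the peak frequency.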
The information processing method provided by the embodiment of the invention determines the identification region of the face of interest in a frame image containing face information of the object to be detected; performs spatial pixel averaging over at least two primary color channels of the identification region to obtain identification signals, from which an identification signal curve is determined; performs linear selection and blind source separation on the identification signal curve to obtain a target signal curve representing the heart rate of the object to be detected; and processes the target signal curve based on a signal transform and a preset threshold to obtain the heart rate value of the object to be detected. In this way, the problem of motion artifacts in face-video-based non-contact heart rate measurement in the related art is effectively mitigated, and the accuracy of such measurement for the user is improved.
Based on the foregoing embodiments, an embodiment of the present invention provides an information processing method, as shown in fig. 2, including the following steps:
step 201, the terminal acquires a multi-frame target image containing face information of an object to be detected.
In the embodiment of the present invention, the multi-frame target image may specifically be still pictures each containing face information of the object to be detected, and may be a continuously time-ordered sequence of still pictures.
In a possible embodiment, step 201 may be implemented by steps 201a to 201c:
step 201a, the terminal acquires a video containing face information of an object to be detected.
In the embodiment of the invention, the terminal acquires video information containing a face; it may acquire a color video containing the face information of the object to be detected, or a grayscale video. The video may be generated by the terminal shooting the object to be detected with its own camera, or obtained by the terminal from another device or a server.
Step 201b, the terminal obtains a plurality of target videos of preset duration from the video by sliding a video window by a preset step length.
In the embodiment of the invention, within the acquired video containing face information, the terminal slides a fixed-length window forward from any frame by a fixed step length to acquire a plurality of different target videos of preset duration. For example, consider a 10 s video containing face information at 30 frames per second, i.e., 300 still pictures. With a preset duration of 5 s, the terminal acquires the first target video starting from the first frame, namely seconds 1-5 of the video; starting from the 31st frame, it acquires the second target video of 5 s, namely seconds 2-6 of the video, and so on, so that the terminal acquires at least 6 target videos.
In the embodiment of the present invention, the preset duration may be 1 s, 2 s, 5 s, 10 s, etc., which is not limited in this application; likewise, the terminal may start sliding from the first frame of the video, or, for example, from the 10th frame, to acquire the target videos.
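The sliding-window scheme above can be sketched as simple index arithmetic over the frame sequence; the function name and parameters are illustrative, not from the patent:

```python
def sliding_windows(total_frames, fps, win_s, step_s, start_frame=0):
    """Enumerate target-video windows over a frame sequence.

    A fixed-length window of `win_s` seconds slides over the video in steps
    of `step_s` seconds, starting at `start_frame`. Returns a list of
    (first_frame, last_frame_exclusive) index pairs, one per target video.
    """
    win, step = win_s * fps, step_s * fps
    windows = []
    start = start_frame
    while start + win <= total_frames:
        windows.append((start, start + win))
        start += step
    return windows
```

With the worked example from the text (300 frames at 30 fps, 5 s windows, 1 s step), this yields exactly 6 windows.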
Step 201c, the terminal acquires a plurality of frames of target images corresponding to each target video.
In the embodiment of the present invention, the terminal processes the plurality of target videos obtained above to obtain the plurality of target images corresponding to each target video; in other words, each obtained target video is converted into video frame pictures, each of which is an image containing the face information of the object to be detected. The terminal may obtain only some of the frame images from the plurality of target videos, or may obtain all of them; in the following embodiments, all video frame pictures in each target video are obtained, that is, all of the multiple frames of target images corresponding to each target video.
In the embodiment of the present invention, the terminal may process the target video based on a corresponding algorithm or corresponding application software to obtain the corresponding frame image.
Step 202, the terminal performs image preprocessing on multiple frames of target images to obtain key point information corresponding to each frame of image.
In the embodiment of the present invention, the terminal preprocesses the multiple frames of images; this may be performed on the multiple frames of images corresponding to each target video.
In one possible implementation, taking a color target video as an example, i.e., the acquired multi-frame images are also color images, the preprocessing of the multi-frame images may specifically proceed as follows: 1. perform gray-level processing on each color frame to obtain a gray-level image corresponding to each frame of image; 2. detect the gray-level image and crop out the face image; specifically, process the gray-level image with a Histogram of Oriented Gradients (HOG) feature and a trained linear classifier such as a Support Vector Machine (SVM) to obtain the detection frame coordinates of the face position and a rectangular frame containing the face, map these back onto the original color image to locate the face information, and crop the face part inside the detection frame based on the face information to obtain the face image; 3. after cropping the face partial image, perform face key point detection; specifically, use a trained Gradient Boosting Decision Tree (GBDT) as the face key point detector and apply it to align the face image, obtaining 68 key points of the face image and thereby locating the face structure. Fig. 3 shows a schematic diagram of dividing a face region, in which the 68 key points of the face image outline the entire face frame.
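The gray-level conversion of step 1 above can be sketched in a few lines. The ITU-R BT.601 luma weights used here are a common choice and an assumption, since the text does not specify the weighting:

```python
import numpy as np

def to_gray(frame_rgb):
    """Gray-level conversion of an RGB frame (step 1 of the preprocessing).

    Uses the common ITU-R BT.601 luma weights; the patent does not fix
    the exact weighting, so this is one reasonable choice.
    """
    weights = np.array([0.299, 0.587, 0.114])
    return frame_rgb.astype(np.float64) @ weights

frame = np.zeros((4, 4, 3))
frame[..., 0] = 255  # a pure-red frame
gray = to_gray(frame)
```

The HOG/SVM detection and GBDT landmark alignment in steps 2 and 3 would, in practice, come from a trained detector library rather than hand-written code.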
Step 203, the terminal determines the identification region of the interested face corresponding to each frame of image based on the key point information corresponding to each frame of image.
Wherein the identified regions include at least a first identified region characterizing the left and right cheeks and a second identified region characterizing the bridge of the nose.
In the embodiment of the invention, ROIs with stable illumination conditions in the face image can be cropped using the 68 face key point data. As shown in fig. 3, two rectangular ROIs and one triangular ROI are given: four key points located at the corners of the eyes together with four key points located at the cheeks yield two symmetric rectangular ROIs at the left and right cheeks, and three key points around the bridge of the nose yield a triangular ROI covering the entire nose bridge, for a total of three ROIs.
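The ROI construction from key points can be sketched as follows. The key point coordinates and helper names are illustrative, not taken from the patent: a rectangular ROI is simply the bounding box of its four key points, and the triangular ROI is rasterized with a sign test on cross products:

```python
import numpy as np

def rect_roi(points):
    """Axis-aligned bounding box (x0, y0, x1, y1) of a set of (x, y) key points."""
    pts = np.asarray(points)
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    return int(x0), int(y0), int(x1), int(y1)

def tri_mask(shape, tri):
    """Boolean mask of the triangle `tri` (three (x, y) vertices) in an image
    of the given (height, width), using the sign of edge cross products."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    def side(p, q):
        return (q[0] - p[0]) * (ys - p[1]) - (q[1] - p[1]) * (xs - p[0])
    a, b, c = tri
    s1, s2, s3 = side(a, b), side(b, c), side(c, a)
    return ((s1 >= 0) & (s2 >= 0) & (s3 >= 0)) | ((s1 <= 0) & (s2 <= 0) & (s3 <= 0))

# Hypothetical cheek key points and nose-bridge key points.
left_cheek = rect_roi([(10, 40), (10, 60), (30, 40), (30, 60)])
mask = tri_mask((100, 100), [(50, 20), (30, 70), (70, 70)])
```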
Step 204, the terminal performs spatial pixel averaging on at least two primary color channels of the identification region to obtain an identification signal of the identification region, and determines an identification signal curve based on the identification signal of the identification region.
In the embodiment of the present invention, the identification regions may be the two rectangular ROIs and one triangular ROI mentioned above. The terminal performs spatial pixel averaging on the three primary color channels of the three ROIs of each frame of image, that is, extracts RGB signals from the three ROIs, and generates the identification signal curves Sr, Sg and Sb based on the RGB signals extracted from the three ROIs of each frame of image.
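Step 204's spatial pixel averaging can be sketched as follows, assuming for illustration that the frames are stacked in one array of RGB images and the ROI is given as a boolean mask:

```python
import numpy as np

def roi_rgb_means(frames, mask):
    """Spatial pixel average of each primary color channel inside a ROI.

    `frames` is (T, H, W, 3) and `mask` is a boolean (H, W) ROI mask.
    Returns three length-T traces, one per channel, corresponding to the
    identification signal curves Sr, Sg and Sb in the text.
    """
    sel = frames[:, mask, :]      # (T, n_roi_pixels, 3)
    means = sel.mean(axis=1)      # (T, 3): one RGB triple per frame
    return means[:, 0], means[:, 1], means[:, 2]

T, H, W = 5, 8, 8
frames = np.zeros((T, H, W, 3))
frames[..., 1] = np.arange(T)[:, None, None]  # green intensity ramps over time
mask = np.zeros((H, W), bool)
mask[2:6, 2:6] = True                          # a small square ROI
sr, sg, sb = roi_rgb_means(frames, mask)
```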
Step 205, the terminal performs linear selection and blind source separation on the identification signal curve to obtain a target signal curve.
Wherein the target signal curve represents the heart rate of the object to be detected.
Step 206, the terminal processes the target signal curve based on signal transformation and a preset threshold value to obtain the heart rate value of the object to be detected.
It should be noted that, for the descriptions of the same steps and the same contents in this embodiment as those in other embodiments, reference may be made to the descriptions in other embodiments, which are not described herein again.
In the information processing method provided by the embodiment of the invention, the terminal converts videos of a plurality of different time periods into corresponding frame pictures and performs image processing on each frame picture to obtain the identification region of each frame picture. It then performs spatial pixel averaging on at least two primary color channels of the interested face identification region to obtain identification signals, determines an identification signal curve based on the plurality of identification signals, and performs linear selection and blind source separation on the identification signal curve to obtain a target signal curve representing the heart rate of the object to be detected. This effectively avoids the motion-artifact problem of non-contact heart rate measurement based on face video in the related art, and improves the accuracy of non-contact heart rate measurement based on face video.
Based on the foregoing embodiments, an embodiment of the present invention provides an information processing method, as shown in fig. 4, including the steps of:
step 301, the terminal acquires a multi-frame target image containing face information of an object to be detected.
Step 302, the terminal performs image preprocessing on multiple frames of target images to obtain key point information corresponding to each frame of image.
Step 303, the terminal determines the identification region of the interested face corresponding to each frame of image based on the key point information corresponding to each frame of image.
Step 304, the terminal performs spatial pixel averaging on at least two primary color channels of the identification area to obtain an identification signal of the identification area, and determines an identification signal curve based on the identification signal of the identification area.
Step 305, the terminal performs linear selection on the identification signal curve to obtain a basic signal curve.
In one possible implementation, step 305 may be implemented by means of steps 305a to 305c:
step 305a, the terminal performs segmentation processing on the identification signal curve to obtain a plurality of first signal curves.
In the embodiment of the invention, the terminal performs the same operation on each of the identification signal curves Sr, Sg and Sb: the three curves are segmented at fixed intervals, e.g., every 10 frames or every 15 frames, to obtain a plurality of first signal curves Sr1, Sg1 and Sb1 corresponding to the curves Sr, Sg and Sb respectively.
Step 305b, the terminal performs variance selection on the plurality of first signal curves to obtain a plurality of second signal curves, and generates a first target signal curve based on the plurality of second signal curves.
In the embodiment of the present invention, the terminal performs variance selection on the signal curves, i.e., respectively calculates the variances Vr, Vg and Vb of each corresponding signal segment of the plurality of first signal curves Sr1, Sg1 and Sb1. If the maximum of these variances exceeds a threshold value, the threshold value being a preset condition value, the segment is cut off, and so on for every segment. After variance selection is completed for all segments, the remaining signal segments form a plurality of second signal curves Sr2, Sg2 and Sb2, which are connected from front to back in sequence. At each break, i.e., where a segment was cut out between two retained segments, the difference between the mean values of the preceding and following segments is calculated and added to the entire following curve so that the mean values of the two segments remain consistent; this aligns the preceding and following curves and reduces the error in the frequency components of the signal caused by the break. In the embodiment of the invention, after variance selection, the plurality of first signal curves Sr1, Sg1 and Sb1 generate the corresponding first target signal curves Sr11, Sg11 and Sb11.
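A minimal sketch of the variance selection and mean-aligned reconnection described above follows. The segment length and threshold values are illustrative, and aligning each kept segment to the running mean of the output is a simplification of the pairwise alignment in the text:

```python
import numpy as np

def variance_select(r, g, b, seg_len, threshold):
    """Variance selection and reconnection sketch for step 305b.

    Splits each trace into segments of `seg_len` samples, drops any segment
    whose largest per-channel variance exceeds `threshold`, then re-connects
    the survivors, shifting each kept segment so its mean matches the
    accumulated curve (the mean alignment described in the text).
    """
    kept = []
    for i in range(0, len(r) - seg_len + 1, seg_len):
        seg = [c[i:i + seg_len] for c in (r, g, b)]
        if max(s.var() for s in seg) <= threshold:
            kept.append(np.stack(seg))
    out = kept[0].copy()
    for seg in kept[1:]:
        shift = out.mean(axis=1, keepdims=True) - seg.mean(axis=1, keepdims=True)
        out = np.concatenate([out, seg + shift], axis=1)
    return out[0], out[1], out[2]

t = np.arange(60, dtype=float)
r = np.sin(0.3 * t); g = r.copy(); b = r.copy()
r[20:30] += 50 * np.random.default_rng(0).standard_normal(10)  # motion-artifact burst
r2, g2, b2 = variance_select(r, g, b, seg_len=10, threshold=5.0)
```

One of the six segments is dropped for all three channels, since the selection is joint across R, G and B.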
Step 305c, the terminal removes the nonlinear part of the first target signal curve based on spline fitting to obtain a basic signal curve.
In the embodiment of the invention, the terminal performs the same operation on each of the first target signal curves Sr11, Sg11 and Sb11, i.e., removes the nonlinearity in the signal curve, and more preferably removes both the nonlinear and the linear trend. In one possible implementation, the three curves are each fitted with a B-spline, the B-spline completing the curve fitting based on the difference quotient, so as to obtain the nonlinear trends Tnlr, Tnlg and Tnlb of the three signal curves Sr11, Sg11 and Sb11. The respective nonlinear trends Tnlr, Tnlg and Tnlb are then subtracted from the corresponding curves Sr11, Sg11 and Sb11 to obtain three curves Sr3, Sg3 and Sb3. These signals are then fitted based on the least squares method to obtain the linear trends Tlr, Tlg and Tlb corresponding to the three curves Sr3, Sg3 and Sb3, and the respective linear trends are subtracted from the curves Sr3, Sg3 and Sb3 to obtain the final basic signal curves Sr111, Sg111 and Sb111.
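One possible sketch of the two-stage detrending follows. A low-order polynomial fit stands in for the B-spline named in the text, so this illustrates the structure of the step rather than the exact fitting routine:

```python
import numpy as np

def detrend(signal, nonlinear_deg=3):
    """Two-stage detrending sketch for step 305c.

    First removes a slowly varying nonlinear trend (the patent fits a
    B-spline; a degree-`nonlinear_deg` polynomial stands in for it here),
    then removes the least-squares linear trend from the residual.
    """
    t = np.arange(len(signal), dtype=float)
    nonlinear = np.polyval(np.polyfit(t, signal, nonlinear_deg), t)
    resid = signal - nonlinear
    linear = np.polyval(np.polyfit(t, resid, 1), t)
    return resid - linear

t = np.linspace(0, 10, 400)
pulse = np.sin(2 * np.pi * 1.2 * t)        # ~72 bpm oscillation
drift = 0.02 * t**2 + 0.5 * t + 3.0        # slow illumination/motion drift
cleaned = detrend(pulse + drift)
```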
Step 306, the terminal performs blind source separation on the basic signal curve to obtain a target signal curve.
Wherein the target signal curve represents the heart rate of the object to be detected.
In the embodiment of the invention, the terminal first performs corresponding band-pass filtering on the basic signal curves Sr111, Sg111 and Sb111, and then performs a blind source separation operation on the basic signal curves to obtain the corresponding target signal curves Srr, Sgg and Sbb.
In one possible implementation, step 306 may be implemented by steps 306a to 306b:
and step 306a, the terminal carries out filtering processing on the basic signal curve to obtain a reference signal curve.
In the embodiment of the invention, the terminal performs band-pass filtering on the basic signal curves Sr111, Sg111 and Sb111; the manner in which the band-pass filtering is performed is not limited in this application. In one possible implementation, the ultra-low-frequency and ultra-high-frequency components of the three basic signal curves Sr111, Sg111 and Sb111 are filtered out to complete the band-pass filtering and smooth the signal curves, yielding three corresponding reference signal curves Sr222, Sg222 and Sb222.
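Since the application does not fix the filter design, one simple way to realize step 306a is to zero FFT bins outside a pass band; the band limits below are illustrative:

```python
import numpy as np

def bandpass(signal, fs, lo=0.7, hi=4.0):
    """Band-pass filtering sketch for step 306a.

    Zeroes FFT bins outside [lo, hi] Hz, removing the ultra-low- and
    ultra-high-frequency components mentioned in the text. The band
    limits are an assumption; the patent does not specify them.
    """
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spec, n=len(signal))

fs = 30.0
t = np.arange(0, 10, 1 / fs)
sig = 2.0 + np.sin(2*np.pi*1.2*t) + 0.5*np.sin(2*np.pi*8.0*t)  # DC + pulse + high-freq noise
ref = bandpass(sig, fs)
```

A practical system might instead use an IIR or FIR band-pass filter; the FFT-mask version is chosen here only for brevity.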
Step 306b, the terminal performs iterative processing on the reference signal curve by an independent component analysis method to obtain a target signal curve.
In the embodiment of the present invention, the terminal processes the three reference signal curves Sr222, Sg222 and Sb222 by Independent Component Analysis (ICA), applying multi-step iteration for further processing to obtain the target signal curves, i.e., the curves Srr, Sgg and Sbb, from which three mutually independent signal components C1, C2 and C3 can be obtained. Among the mutually independent signal components is a signal that is significant for heart rate variations.
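The iterative ICA of step 306b can be illustrated with a minimal symmetric FastICA iteration. A production system would normally use a library implementation (e.g., scikit-learn's FastICA); the tanh nonlinearity and iteration count here are assumptions:

```python
import numpy as np

def fast_ica(X, n_iter=200, seed=0):
    """Minimal symmetric FastICA sketch (tanh nonlinearity) for step 306b.

    `X` has one mixed observation per row. The data are centered and
    whitened, then the unmixing matrix is refined by the fixed-point
    FastICA update with symmetric decorrelation.
    """
    X = X - X.mean(axis=1, keepdims=True)
    cov = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(cov)
    Xw = E @ np.diag(d ** -0.5) @ E.T @ X          # whitening
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[0], X.shape[0]))
    for _ in range(n_iter):
        WX = W @ Xw
        g, g_prime = np.tanh(WX), 1 - np.tanh(WX) ** 2
        W = g @ Xw.T / Xw.shape[1] - np.diag(g_prime.mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W)                 # symmetric decorrelation
        W = U @ Vt
    return W @ Xw                                   # independent components

t = np.linspace(0, 8, 2000)
sources = np.vstack([np.sin(2*np.pi*1.3*t), np.sign(np.sin(2*np.pi*0.4*t))])
mixing = np.array([[1.0, 0.6], [0.4, 1.0]])
components = fast_ica(mixing @ sources)
```

ICA recovers the sources only up to permutation, sign and scale, which is why the heart-rate component must afterwards be identified by its spectrum, as in steps 407 and 408.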
Step 307, the terminal processes the target signal curve based on signal transformation and a preset threshold value to obtain the heart rate value of the object to be detected.
It should be noted that, for the descriptions of the same steps and the same contents in this embodiment as those in other embodiments, reference may be made to the descriptions in other embodiments, which are not described herein again.
In the information processing method provided by the embodiment of the invention, the terminal converts videos of a plurality of different time periods into corresponding frame pictures and processes each frame picture to obtain the identification region of each frame picture. It obtains identification signals by performing spatial pixel averaging on at least two primary color channels of the interested face identification region, determines an identification signal curve based on the plurality of identification signals, and performs linear selection and blind source separation on the identification signal curve to obtain a target signal curve representing the heart rate of the object to be detected. This effectively avoids the motion-artifact problem of non-contact heart rate measurement based on face video in the related art, and improves the accuracy of non-contact heart rate measurement based on face video.
Based on the foregoing embodiments, an embodiment of the present invention provides an information processing method, as shown in fig. 5, including the steps of:
step 401, the terminal acquires a multi-frame target image containing face information of an object to be detected.
Step 402, the terminal performs image preprocessing on multiple frames of target images to obtain key point information corresponding to each frame of image.
Step 403, the terminal determines the identification region of the interested face corresponding to each frame of image based on the key point information corresponding to each frame of image.
Step 404, the terminal performs spatial pixel averaging on at least two primary color channels of the identification region to obtain an identification signal of the identification region, and determines an identification signal curve based on the identification signal of the identification region.
Step 405, the terminal performs linear selection on the identification signal curve to obtain a basic signal curve.
Step 406, the terminal performs blind source separation on the basic signal curve to obtain a target signal curve.
Wherein the target signal curve represents the heart rate of the object to be detected.
Step 407, the terminal performs fourier transform on the target signal curve to obtain a target signal frequency domain power value.
In the embodiment of the invention, the terminal performs Fourier transform on each of the mutually independent signal components C1, C2 and C3 obtained from the target signal curves Srr, Sgg and Sbb, obtains the frequency-domain power spectrum of each signal component, and performs frequency selection within the general range of the human heart rate frequency, namely 0.75 to 2.5 Hz, to obtain the target signal frequency-domain power values.
Step 408, the terminal processes the target signal frequency-domain power values based on a preset threshold value to obtain the heart rate value of the object to be detected.
In the embodiment of the invention, the preset threshold is a confidence value, obtained based on the fact that a signal component containing the heart rate signal has a single significant peak at the frequency position of the actual heart rate, with the other peaks of its frequency spectrum much lower than the peak at the heart rate frequency, whereas a signal component not containing the heart rate signal has no such feature. On this basis, the maximum and second-maximum peak values of each signal component's frequency spectrum within the heart rate range are found and their ratio, namely the confidence, is calculated. If the ratio is greater than the preset confidence threshold, the frequency of the maximum peak is taken as the actual heart rate; if the frequency spectra of several signal components satisfy the condition, the frequency corresponding to the largest confidence is selected as the actual heart rate.
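The peak-confidence selection of steps 407 and 408 can be sketched as follows; the confidence threshold value and function names are illustrative:

```python
import numpy as np

def heart_rate_from_component(component, fs, band=(0.75, 2.5), conf_threshold=2.0):
    """Peak-confidence heart-rate selection sketch for steps 407-408.

    Returns (rate_hz, confidence), where confidence is the ratio of the
    largest to the second-largest spectral magnitude inside the heart-rate
    band; rate_hz is None when the ratio does not exceed the threshold.
    The threshold of 2.0 is an assumed value, not taken from the patent.
    """
    spec = np.abs(np.fft.rfft(component))
    freqs = np.fft.rfftfreq(len(component), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    power, f = spec[in_band], freqs[in_band]
    order = np.argsort(power)[::-1]
    confidence = power[order[0]] / power[order[1]]
    rate = f[order[0]] if confidence > conf_threshold else None
    return rate, confidence

fs = 30.0
t = np.arange(0, 10, 1 / fs)
comp = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.sin(2 * np.pi * 2.0 * t)
rate, conf = heart_rate_from_component(comp, fs)  # 1.2 Hz, i.e. 72 bpm
```

When several components pass the threshold, the text selects the one with the largest confidence, which here amounts to comparing the returned `confidence` values.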
It should be noted that, for the descriptions of the same steps and the same contents in this embodiment as those in other embodiments, reference may be made to the descriptions in other embodiments, which are not described herein again.
In the information processing method provided by the embodiment of the invention, the terminal converts videos of a plurality of different time periods into corresponding frame pictures and processes each frame picture to obtain the identification region of each frame picture. It obtains identification signals by performing spatial pixel averaging on at least two primary color channels of the interested face identification region, determines an identification signal curve based on the plurality of identification signals, and performs linear selection and blind source separation on the identification signal curve to obtain a target signal curve representing the heart rate of the object to be detected. This effectively avoids the motion-artifact problem of non-contact heart rate measurement based on face video in the related art, and improves the accuracy of non-contact heart rate measurement based on face video.
Based on the foregoing embodiment, in another embodiment of the present invention, the terminal processes the target signal curve based on the signal transformation and the preset threshold to obtain the heart rate value of the object to be detected, and the method may further be implemented by the following steps a1 to a 2:
and step A1, the terminal processes the target signal curve based on the signal transformation and the preset threshold value to obtain a reference heart rate value corresponding to each target video.
In the embodiment of the present invention, the target signal curves obtained by the terminal may be the three target signal curves Srr, Sgg and Sbb corresponding to one target video or, more preferably, the three target signal curves Srr, Sgg and Sbb corresponding to each of a plurality of target videos; the number of target videos is not limited. On this basis, reference heart rate values of the object to be detected corresponding to the plurality of target videos can be obtained, avoiding the inaccurate measurement that may result from a heart rate acquired from a single target video.
Step A2, the terminal processes the plurality of reference heart rate values by an average deviation algorithm to obtain the heart rate value of the object to be detected.
In the embodiment of the present invention, the terminal may further process the plurality of reference heart rate values corresponding to the plurality of obtained target videos to obtain an average value, i.e., the final heart rate value of the object to be detected. In one feasible implementation, the terminal removes outliers from the plurality of reference heart rate values and then averages the remaining values. Specifically, using a Mean Absolute Deviation (MAD) algorithm, the median h of the plurality of reference heart rate values is found, and the differences between the reference heart rate values and the median h are calculated to obtain a plurality of corresponding deviation values; if a deviation value is greater than a preset threshold, the corresponding value is discarded as an outlier. After the outlier-removal operation is completed, the remaining heart rate values are averaged to obtain the final mean value ht, which is the human heart rate value finally obtained from the face video. In this way the terminal obtains a plurality of target videos through a sliding window, obtains a heart rate value corresponding to each target video, and determines an average heart rate value by an outlier-removal algorithm, further ensuring the robustness of the method for obtaining the heart rate value.
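Step A2's outlier removal and averaging can be sketched directly; the deviation threshold and example values are illustrative:

```python
import numpy as np

def robust_heart_rate(rates, dev_threshold=5.0):
    """Median-deviation outlier removal sketch for step A2.

    Finds the median of the per-window reference heart rate values, drops
    values whose absolute deviation from the median exceeds the (assumed)
    threshold, and averages the rest.
    """
    rates = np.asarray(rates, dtype=float)
    median = np.median(rates)
    kept = rates[np.abs(rates - median) <= dev_threshold]
    return kept.mean()

# Six per-window estimates, one corrupted by a motion artifact.
hr = robust_heart_rate([72.0, 74.0, 71.0, 73.0, 120.0, 72.0])
```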
In the embodiment of the invention, fig. 6 shows a schematic diagram of a method for measuring a human heart rate. The flow steps for obtaining the human body heart rate value based on the human face video are as follows:
performing the following on a per segment basis of the target video: step 1, acquiring a face video; step 2, face detection and key point extraction, specifically, face detection and key point extraction are carried out through the obtained frame image converted from the face video; step 3, ROI extraction, namely extracting ROI from the key points of the face obtained in the step 2; step 4, RGB signal extraction, namely extracting RGB signals from the ROI; step 5, selecting variance and connecting signals, namely performing curve processing on the acquired RGB signals; step 6, removing nonlinear and linear trends, specifically, removing the RGB curve subjected to variance selection and signal connection for curve processing; step 7, blind source separation; and 8, Fourier transform. The whole measuring method obtains a plurality of target videos through a step 10, namely sliding a window, then sequentially performs the steps 1-8 on the plurality of targets to obtain a plurality of signal values, and then performs the step 9 and outlier removal to finally obtain the human heart rate value of the step 11.
Based on the foregoing embodiment, an embodiment of the present invention provides a terminal 7, where the terminal 7 may apply an information processing method provided in the embodiments corresponding to fig. 1-2 and fig. 4-5, and as shown in fig. 7, the terminal 7 includes: a processor 71, a memory 72, and a communication bus 73, wherein:
the communication bus 73 is used to realize a communication connection between the processor 71 and the memory 72.
The processor 71 is configured to execute a program of an information processing method stored in the memory 72 to realize the steps of:
determining an identification region of an interested face corresponding to a frame image based on the frame image containing the face information of the object to be detected;
carrying out spatial pixel averaging on at least two primary color channels of the identification area to obtain an identification signal of the identification area, and determining an identification signal curve based on the identification signal of the identification area;
carrying out linear selection and blind source separation on the identification signal curve to obtain a target signal curve; the target signal curve represents the heart rate of the object to be detected;
and processing the target signal curve based on the signal transformation and a preset threshold value to obtain a heart rate value of the object to be detected.
In other embodiments of the present invention, the processor 71 is configured to execute the following steps of determining, based on the frame image containing the face information of the object to be detected, the identification region of the face of interest corresponding to the frame image stored in the memory 72, to:
acquiring a multi-frame target image containing face information of an object to be detected;
carrying out image preprocessing on a plurality of frames of target images to obtain key point information corresponding to each frame of image;
determining an identification area of the interested face corresponding to each frame of image based on the key point information corresponding to each frame of image; wherein the identified regions include at least a first identified region characterizing the left and right cheeks and a second identified region characterizing the bridge of the nose.
In other embodiments of the present invention, the processor 71 is configured to execute the following steps to acquire the multiple frames of target images stored in the memory 72, where the multiple frames of target images include face information of the object to be detected:
acquiring a video containing face information of an object to be detected;
acquiring a plurality of target videos with preset duration from the videos in a mode of sliding a video window according to a preset step length;
and acquiring a plurality of frames of target images corresponding to each target video.
In other embodiments of the present invention, the processor 71 is configured to execute the processing on the target signal curve based on the signal transformation and the preset threshold value stored in the memory 72 to obtain a heart rate value of the object to be detected, so as to implement the following steps:
processing the target signal curve based on the signal transformation and a preset threshold value to obtain a reference heart rate value corresponding to each target video;
and processing the plurality of reference heart rate values by adopting an average deviation algorithm to obtain the heart rate value of the object to be detected.
In other embodiments of the present invention, processor 71 is configured to perform linear selection and blind source separation on the identification signal curve stored in memory 72 to obtain a target signal curve, so as to implement the following steps:
carrying out linear selection on the identification signal curve to obtain a basic signal curve;
and carrying out blind source separation on the basic signal curve to obtain a target signal curve.
In other embodiments of the present invention, the processor 71 is configured to perform the linear selection of the identification signal curve stored in the memory 72 to obtain a basic signal curve, so as to implement the following steps:
carrying out segmentation processing on the identification signal curves to obtain a plurality of first signal curves;
selecting the variances of the first signal curves to obtain second signal curves, and generating a first target signal curve based on the second signal curves;
and removing the nonlinear part in the first target signal curve based on spline fitting to obtain a basic signal curve.
In other embodiments of the present invention, the processor 71 is configured to perform blind source separation on the basic signal curve stored in the memory 72 to obtain a target signal curve, so as to implement the following steps:
filtering the basic signal curve to obtain a reference signal curve;
and carrying out iterative processing on the reference signal curve by adopting an independent component analysis method to obtain a target signal curve.
In other embodiments of the present invention, processor 71 is configured to execute processing of the target signal curve based on the signal transformation and the preset threshold stored in memory 72 to implement the following steps:
carrying out Fourier transform on the target signal curve to obtain a target signal frequency domain power value;
and processing the frequency domain power value of the target signal based on a preset threshold value.
It should be noted that, for a specific implementation process of the steps executed by the processor in this embodiment, reference may be made to implementation processes in the information processing method provided in embodiments corresponding to fig. 1-2 and fig. 4-5, and details are not described here again.
According to the terminal provided by the embodiment of the invention, the terminal converts videos of a plurality of different time periods into corresponding frame pictures and processes each frame picture to obtain the identification region of each frame picture. It obtains identification signals by performing spatial pixel averaging on at least two primary color channels of the interested face identification region, determines an identification signal curve based on the plurality of identification signals, and performs linear selection and blind source separation on the identification signal curve to obtain a target signal curve representing the heart rate of the object to be detected. The motion-artifact problem of non-contact heart rate measurement based on face video in the related art is thus effectively avoided, and the accuracy of non-contact heart rate measurement based on face video is improved.
Based on the foregoing embodiments, embodiments of the present invention provide a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement steps of an information processing method corresponding to fig. 1-2 and fig. 4-5.
The computer-readable storage medium may be a Read Only Memory (ROM), a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a ferroelectric Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); and the related device may be any of various electronic devices, such as mobile phones, computers, tablet devices, and personal digital assistants, including one or any combination of the above-mentioned memories.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method described in the embodiments of the present invention.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. An information processing method, the method comprising:
determining, based on a frame image containing face information of an object to be detected, an identification region of a face of interest corresponding to the frame image;
performing spatial pixel averaging on at least two primary color channels of the identification region to obtain an identification signal of the identification region, and determining an identification signal curve based on the identification signal of the identification region;
performing linear selection and blind source separation on the identification signal curve to obtain a target signal curve, wherein the target signal curve represents the heart rate of the object to be detected; and
processing the target signal curve based on a signal transformation and a preset threshold to obtain a heart rate value of the object to be detected.
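The spatial pixel averaging step of claim 1 can be sketched as follows. This is a minimal Python illustration, not the patented implementation; the `(T, H, W, 3)` frame layout, the ROI format, and the function name are assumptions.

```python
import numpy as np

def spatial_pixel_average(frames, roi):
    """Average the pixels of each primary-colour channel inside the
    identification region, one value per frame.

    frames: array of shape (T, H, W, 3) -- a clip of T frames
    roi:    (top, bottom, left, right) bounds of the region
    Returns an array of shape (T, 3): the identification signal curve.
    """
    top, bottom, left, right = roi
    region = frames[:, top:bottom, left:right, :]
    return region.reshape(frames.shape[0], -1, 3).mean(axis=1)

# Synthetic 10-frame clip whose green channel is a constant 120
clip = np.zeros((10, 64, 64, 3))
clip[..., 1] = 120.0
signal = spatial_pixel_average(clip, (16, 48, 16, 48))
```

Stacking the per-frame averages over time yields one identification signal curve per color channel, which is the input to the linear selection and blind source separation steps.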
2. The method according to claim 1, wherein the determining, based on the frame image containing the face information of the object to be detected, the identification region of the face of interest corresponding to the frame image comprises:
acquiring a multi-frame target image containing face information of an object to be detected;
performing image preprocessing on the multiple frames of target images to obtain key point information corresponding to each frame of image;
determining an identification region of the face of interest corresponding to each frame of image based on the key point information corresponding to that frame; wherein the identification regions include at least a first identification region characterizing the left and right cheeks and a second identification region characterizing the bridge of the nose.
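Deriving an identification region from facial key points, as claim 2 describes, could look like the following sketch. The `(row, col)` point format, the pixel margin, and the bounding-box rule are assumptions; the patent does not specify how the region is constructed from the key points.

```python
import numpy as np

def region_from_keypoints(points, margin=2):
    """Axis-aligned bounding box around a group of facial key points,
    returned as (top, bottom, left, right).  The (row, col) point
    format and the pixel margin are assumptions."""
    pts = np.asarray(points)
    top, left = pts.min(axis=0) - margin
    bottom, right = pts.max(axis=0) + margin
    return int(top), int(bottom), int(left), int(right)

# Hypothetical key points on one cheek, as (row, col) pixel coordinates
cheek = [(120, 80), (130, 95), (125, 88)]
box = region_from_keypoints(cheek)
```

The same construction would be applied separately to the cheek key points (first identification region) and the nose-bridge key points (second identification region).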
3. The method according to claim 2, wherein the acquiring a plurality of frames of target images containing face information of the object to be detected comprises:
acquiring a video containing the face information of the object to be detected;
acquiring, from the video, a plurality of target videos of a preset duration by sliding a video window according to a preset step length; and
acquiring the multiple frames of target images corresponding to each of the target videos.
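The sliding-window acquisition of claim 3 amounts to taking overlapping fixed-length slices of the frame sequence. A minimal sketch, with the window and step lengths expressed in frames (the concrete values are assumptions):

```python
def sliding_windows(signal, win_len, step):
    """Split a per-frame sequence into overlapping target windows:
    win_len plays the role of the preset time length (in frames) and
    step the preset step length of the sliding video window."""
    starts = range(0, len(signal) - win_len + 1, step)
    return [signal[s:s + win_len] for s in starts]

frames = list(range(100))                 # stand-in for 100 video frames
windows = sliding_windows(frames, win_len=30, step=10)
```

Each window then yields one target video, and hence one reference heart rate value in claim 4.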
4. The method according to claim 3, wherein the processing the target signal curve based on the signal transformation and a preset threshold to obtain a heart rate value of the object to be detected comprises:
processing the target signal curve based on a signal transformation and a preset threshold to obtain a reference heart rate value corresponding to each target video; and
processing the plurality of reference heart rate values using an average deviation algorithm to obtain the heart rate value of the object to be detected.
5. The method of claim 1, wherein the performing linear selection and blind source separation on the identification signal curve to obtain a target signal curve comprises:
performing linear selection on the identification signal curve to obtain a basic signal curve; and
performing blind source separation on the basic signal curve to obtain the target signal curve.
6. The method of claim 5, wherein said linearly selecting said identification signal curve to obtain a base signal curve comprises:
performing segmentation processing on the identification signal curve to obtain a plurality of first signal curves;
selecting among the first signal curves based on their variances to obtain second signal curves, and generating a first target signal curve based on the second signal curves; and
removing a nonlinear part of the first target signal curve based on spline fitting to obtain the basic signal curve.
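The segmentation, variance-based selection, and trend removal of claim 6 might be sketched as below. The keep-the-low-variance-segments rule is an assumed reading of the variance selection (motion-corrupted segments tend to vary most), and a low-order polynomial fit stands in for the spline fitting named in the claim.

```python
import numpy as np

def select_by_variance(curve, n_segments=4, keep_frac=0.5):
    """Split the identification signal curve into segments and keep the
    lower-variance ones.  The selection rule is an assumption."""
    segments = np.array_split(curve, n_segments)
    order = np.argsort([seg.var() for seg in segments])
    keep = sorted(order[: max(1, int(n_segments * keep_frac))])
    return np.concatenate([segments[i] for i in keep])

def detrend(curve, degree=3):
    """Remove the slow nonlinear trend; a low-order polynomial fit
    stands in here for the claim's spline fitting."""
    x = np.arange(len(curve))
    return curve - np.polyval(np.polyfit(x, curve, degree), x)

steady = np.zeros(10)                      # a quiet segment
jitter = np.tile([0.0, 10.0], 5)           # a high-variance segment
kept = select_by_variance(np.concatenate([steady, jitter]), n_segments=2)
```

A production implementation would more likely use an actual smoothing spline for the trend, but the structure — segment, rank by variance, concatenate the keepers, subtract the fitted trend — is the same.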
7. The method of claim 5, wherein said blind source separation of said base signal curve to obtain said target signal curve comprises:
filtering the basic signal curve to obtain a reference signal curve; and
iteratively processing the reference signal curve using independent component analysis to obtain the target signal curve.
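The filtering step of claim 7 is plausibly a band-pass confined to physiological heart rates; a minimal FFT-mask version is sketched below. The 0.7–4 Hz band (42–240 bpm) and the frame rate are assumptions, not values from the patent.

```python
import numpy as np

def bandpass(curve, fs, low=0.7, high=4.0):
    """FFT-mask filter keeping only the plausible heart-rate band
    (0.7-4 Hz, i.e. 42-240 bpm); the band edges are assumptions."""
    spectrum = np.fft.rfft(curve)
    freqs = np.fft.rfftfreq(len(curve), d=1.0 / fs)
    spectrum[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spectrum, n=len(curve))

fs = 30.0                                   # assumed camera frame rate
t = np.arange(300) / fs                     # a 10 s reference window
mixed = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 0.1 * t)
clean = bandpass(mixed, fs)                 # 0.1 Hz drift is removed
```

For the subsequent iterative independent component analysis, one common off-the-shelf choice is scikit-learn's `FastICA`; the patent does not name a particular algorithm or library.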
8. The method according to claim 1 or 4, wherein the processing of the target signal curve based on the signal transformation and a preset threshold comprises:
performing a Fourier transform on the target signal curve to obtain a frequency-domain power value of the target signal; and
processing the frequency-domain power value of the target signal based on a preset threshold.
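Reading out the heart rate from the frequency-domain power, as in claim 8, reduces to locating the dominant in-band spectral peak. In the sketch below the band limits play the role of the claim's preset threshold; that reading, the band edges, and the frame rate are assumptions.

```python
import numpy as np

def heart_rate_from_curve(curve, fs, low=0.7, high=4.0):
    """Fourier-transform the target signal curve and report the frequency
    of the largest in-band power value, converted to beats per minute."""
    power = np.abs(np.fft.rfft(curve)) ** 2
    freqs = np.fft.rfftfreq(len(curve), d=1.0 / fs)
    band = (freqs >= low) & (freqs <= high)
    return 60.0 * freqs[band][np.argmax(power[band])]

fs = 30.0                                   # assumed camera frame rate
t = np.arange(300) / fs
pulse = np.sin(2 * np.pi * 1.2 * t)         # a 1.2 Hz test tone (72 bpm)
bpm = heart_rate_from_curve(pulse, fs)
```

The frequency resolution here is fs divided by the window length in frames (0.1 Hz for a 10 s window at 30 fps), which bounds the precision of each reference heart rate value.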
9. A terminal, characterized in that the terminal comprises: a processor, a memory, and a communication bus;
the communication bus is used for realizing communication connection between the processor and the memory;
the processor is configured to execute a program of the information processing method stored in the memory to realize the steps of:
determining, based on a frame image containing face information of an object to be detected, an identification region of a face of interest corresponding to the frame image;
performing spatial pixel averaging on at least two primary color channels of the identification region to obtain an identification signal of the identification region, and determining an identification signal curve based on the identification signal of the identification region;
performing linear selection and blind source separation on the identification signal curve to obtain a target signal curve, wherein the target signal curve represents the heart rate of the object to be detected; and
processing the target signal curve based on a signal transformation and a preset threshold to obtain a heart rate value of the object to be detected.
10. A computer-readable storage medium characterized by storing one or more programs, which are executable by one or more processors, to implement the steps of the information processing method according to any one of claims 1 to 8.
CN202010568303.9A 2020-06-19 2020-06-19 Information processing method, terminal and computer readable storage medium Pending CN111743524A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010568303.9A CN111743524A (en) 2020-06-19 2020-06-19 Information processing method, terminal and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN111743524A true CN111743524A (en) 2020-10-09

Family

ID=72675259

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010568303.9A Pending CN111743524A (en) 2020-06-19 2020-06-19 Information processing method, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111743524A (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103720468A (en) * 2013-12-05 2014-04-16 深圳先进技术研究院 Artifact identification method and device applied to dynamic electrocardiogram data
CN104138254A (en) * 2013-05-10 2014-11-12 天津点康科技有限公司 Non-contact type automatic heart rate measurement system and measurement method
CN104287711A (en) * 2014-09-24 2015-01-21 广州三瑞医疗器械有限公司 Methods for calculating non-baseline part and baseline of fetal heart rate curve
CN105105737A (en) * 2015-08-03 2015-12-02 南京盟联信息科技有限公司 Motion state heart rate monitoring method based on photoplethysmography and spectrum analysis
CN105554385A (en) * 2015-12-18 2016-05-04 天津中科智能识别产业技术研究院有限公司 Remote multimode biometric recognition method and system thereof
CN106388832A (en) * 2016-11-24 2017-02-15 西安思源学院 Identity identification method based on ultrasound whole heart sequential images
CN106491117A (en) * 2016-12-06 2017-03-15 上海斐讯数据通信技术有限公司 A kind of signal processing method and device based on PPG heart rate measurement technology
CN107341435A (en) * 2016-08-19 2017-11-10 北京市商汤科技开发有限公司 Processing method, device and the terminal device of video image
CN108236461A (en) * 2017-12-22 2018-07-03 天津天堰科技股份有限公司 It is a kind of can be into the processing method of the electrocardiosignal of edlin
WO2018179150A1 (en) * 2017-03-29 2018-10-04 日本電気株式会社 Heart rate estimation apparatus
CN109381181A (en) * 2017-08-14 2019-02-26 深圳大学 The end-point detecting method of electrocardiosignal signature waveform
CN109480807A (en) * 2018-09-21 2019-03-19 王桥生 A kind of contactless method for measuring heart rate based on picture signal analysis
WO2019203106A1 (en) * 2018-04-17 2019-10-24 Nec Corporation Pulse rate estimation apparatus, pulse rate estimation method, and computer-readable storage medium
CN111134650A (en) * 2019-12-26 2020-05-12 上海眼控科技股份有限公司 Heart rate information acquisition method and device, computer equipment and storage medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113093106A (en) * 2021-04-09 2021-07-09 北京华捷艾米科技有限公司 Sound source positioning method and system
CN114469037A (en) * 2022-01-29 2022-05-13 武汉大学 High-reliability heart rate measurement method based on millimeter wave radar
CN114469037B (en) * 2022-01-29 2024-01-12 武汉大学 Heart rate measuring method based on millimeter wave radar
CN114708225A (en) * 2022-03-31 2022-07-05 上海商汤临港智能科技有限公司 Blood pressure measuring method and device, electronic equipment and storage medium
CN114795143A (en) * 2022-03-31 2022-07-29 联想(北京)有限公司 Detection method, detection equipment and computer storage medium

Similar Documents

Publication Publication Date Title
CN111743524A (en) Information processing method, terminal and computer readable storage medium
JP5422018B2 (en) Image processing method and image processing apparatus
WO2018177364A1 (en) Filter implementation method and device
EP3798975B1 (en) Method and apparatus for detecting subject, electronic device, and computer readable storage medium
KR101167567B1 (en) Fish monitoring digital image processing apparatus and method
WO2014053837A2 (en) Image processing
CN107172354B (en) Video processing method and device, electronic equipment and storage medium
JP2010055194A (en) Image processing device and method, learning device and method, and program
US11700462B2 (en) System for performing ambient light image correction
WO2019015477A1 (en) Image correction method, computer readable storage medium and computer device
CN108805838B (en) Image processing method, mobile terminal and computer readable storage medium
CN110062157B (en) Method and device for rendering image, electronic equipment and computer readable storage medium
US11720745B2 (en) Detecting occlusion of digital ink
JP2017050683A (en) Image processor, imaging apparatus, and image processing method and program
JP3993029B2 (en) Makeup simulation apparatus, makeup simulation method, makeup simulation program, and recording medium recording the program
CN110348358B (en) Skin color detection system, method, medium and computing device
JP2013058060A (en) Person attribute estimation device, person attribute estimation method and program
CN111444555A (en) Temperature measurement information display method and device and terminal equipment
WO2023071189A1 (en) Image processing method and apparatus, computer device, and storage medium
US20140198177A1 (en) Realtime photo retouching of live video
CN109981989B (en) Method and device for rendering image, electronic equipment and computer readable storage medium
CN110473176B (en) Image processing method and device, fundus image processing method and electronic equipment
CN108769521B (en) Photographing method, mobile terminal and computer readable storage medium
CN113379702A (en) Blood vessel path extraction method and device of microcirculation image
CN110047126B (en) Method, apparatus, electronic device, and computer-readable storage medium for rendering image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination