CN110298237A - Head pose recognition method, apparatus, computer device and storage medium - Google Patents

Head pose recognition method, apparatus, computer device and storage medium Download PDF

Info

Publication number
CN110298237A
CN110298237A
Authority
CN
China
Prior art keywords
video frame
corner point
vector
head pose
movement data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910420186.9A
Other languages
Chinese (zh)
Inventor
王义文 (Wang Yiwen)
郑权 (Zheng Quan)
王健宗 (Wang Jianzong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN201910420186.9A
Publication of CN110298237A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation

Abstract

This application relates to the field of image detection. Corner points in a video are tracked based on an optical flow method, and a head pose is recognized from the corner point movement data, so that head pose recognition from video is achieved. Specifically disclosed are a head pose recognition method, an apparatus, a computer device and a storage medium. The method comprises: obtaining multiple video frames in chronological order, and detecting whether a video frame contains a face region; if a face region is detected in a video frame, determining several corner points from the face region based on a corner detection algorithm, the position of each corner point in that video frame being its reference position; calculating the corner point movement data of the next video frame based on the optical flow method, the corner point movement data comprising the total movement vector of each corner point in the video frame relative to its corresponding reference position; processing the corner point movement data according to a preset processing rule to obtain a feature vector; and, if the feature vector satisfies a preset recognition threshold condition, inputting the feature vector into a trained SVM classifier to obtain the head pose class corresponding to the video frame.

Description

Head pose recognition method, apparatus, computer device and storage medium
Technical field
This application relates to the technical field of computer vision, and in particular to a head pose recognition method, an apparatus, a computer device and a storage medium.
Background technique
Existing head pose recognition is usually performed with head-mounted hardware (such as a hair clip or cap) built from an Inertial Measurement Unit (IMU), which measures the speed, acceleration and angular velocity of the head during various postures, as well as 3D angle information such as the head pitch angle (pitch), yaw angle (yaw) and roll angle (roll), from which multidimensional features are extracted, as shown in Fig. 1. Machine learning methods such as principal component analysis and GMM models are then used to perform feature vector extraction, screening and classification on the collected data, finally yielding prediction classes for the different head postures. Existing head pose recognition therefore depends on a wearable device, which may to some extent affect the user's experience, cause discomfort, and limit the range of application of head pose recognition.
Summary of the invention
The embodiments of the present application provide a head pose recognition method, an apparatus, a computer device and a storage medium, which can effectively recognize a head pose from video.
In a first aspect, this application provides a head pose recognition method, the method comprising:
obtaining multiple video frames in chronological order, and detecting whether the video frames contain a face region;
if a face region is detected in a video frame, determining several corner points from the face region based on a corner detection algorithm, the position of each corner point in that video frame being its reference position;
calculating the corner point movement data of the next video frame based on an optical flow method, the corner point movement data comprising the total movement vector of each corner point in the video frame relative to its corresponding reference position;
processing the corner point movement data according to a preset processing rule to obtain a feature vector;
if the feature vector satisfies a preset recognition threshold condition, inputting the feature vector into a trained SVM classifier to obtain the head pose class corresponding to the video frame.
In a second aspect, this application provides a head pose recognition apparatus, the apparatus comprising:
a video frame obtaining module, configured to obtain multiple video frames in chronological order and detect whether the video frames contain a face region;
a reference determining module, configured to, if a face region is detected in a video frame, determine several corner points from the face region based on a corner detection algorithm, the position of each corner point in that video frame being its reference position;
an optical flow tracking module, configured to calculate the corner point movement data of the next video frame based on an optical flow method, the corner point movement data comprising the total movement vector of each corner point in the video frame relative to its corresponding reference position;
a feature processing module, configured to process the corner point movement data according to a preset processing rule to obtain a feature vector;
a pose recognition module, configured to, if the feature vector satisfies a preset recognition threshold condition, input the feature vector into a trained SVM classifier to obtain the head pose class corresponding to the video frame.
In a third aspect, this application provides a computer device comprising a memory and a processor; the memory is configured to store a computer program; the processor is configured to execute the computer program and, when executing the computer program, implement the above head pose recognition method.
In a fourth aspect, this application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above head pose recognition method.
This application discloses a head pose recognition method, an apparatus, a computer device and a storage medium. A reference video frame for pose recognition is determined by face detection; corner points are then tracked across the video frames based on an optical flow method to obtain the corner point movement data of each video frame; and the feature vector derived from the corner point movement data that satisfies the recognition threshold condition is input into a trained SVM classifier to obtain the head pose class corresponding to the video frame. Head pose recognition from video is thus achieved without relying on a wearable device, so that head pose recognition can be applied more widely.
Detailed description of the invention
To explain the technical solutions in the embodiments of the present application more clearly, the accompanying drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; for those of ordinary skill in the art, other drawings may be obtained from these drawings without creative effort.
Fig. 1 is a schematic diagram of obtaining a head pose based on an Inertial Measurement Unit;
Fig. 2 is a schematic flowchart of a head pose recognition method according to an embodiment of the application;
Fig. 3 is a schematic sub-flowchart of detecting corner points in Fig. 2;
Fig. 4 is a schematic sub-flowchart of one embodiment of calculating the corner point movement data in Fig. 2;
Fig. 5 is a schematic sub-flowchart of another embodiment of calculating the corner point movement data in Fig. 2;
Fig. 6 is a schematic sub-flowchart of processing the corner point movement data in Fig. 2 to obtain a feature vector;
Fig. 7 is a schematic flowchart of a head pose recognition method according to another embodiment of the application;
Fig. 8 is a schematic diagram of the SVM classifier;
Fig. 9 is a schematic flowchart of a head pose recognition method according to yet another embodiment of the application;
Fig. 10 is a schematic flowchart of a head pose recognition method according to still another embodiment of the application;
Fig. 11 is a schematic structural diagram of a head pose recognition apparatus provided by an embodiment of the application;
Fig. 12 is a schematic structural diagram of a head pose recognition apparatus provided by another embodiment of the application;
Fig. 13 is a schematic structural diagram of a computer device provided by an embodiment of the application.
Specific embodiment
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings in the embodiments. Obviously, the described embodiments are some, rather than all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of this application.
The flowcharts shown in the drawings are merely illustrative; they need not include all contents and operations/steps, nor must they be executed in the order described. For example, some operations/steps may be decomposed, combined or partially merged, so the actual execution order may change according to the actual situation. In addition, although the division of functional modules is made in the apparatus diagrams, in some cases the division may differ from that in the diagrams.
The embodiments of the application provide a head pose recognition method, an apparatus, a computer device and a storage medium. The head pose recognition method may be applied in a terminal or a server, to recognize the head pose of a target person from real-time or non-real-time video of that person.
For example, the head pose recognition method may be used in a server, and of course may also be used in a terminal such as a mobile phone, a laptop or a desktop computer. For ease of understanding, however, the following embodiments are described in detail with the head pose recognition method applied to a terminal.
Some embodiments of the application are described in detail below with reference to the accompanying drawings. In the absence of conflict, the following embodiments and the features in the embodiments may be combined with each other.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of a head pose recognition method provided by an embodiment of the application.
As shown in Fig. 2, the head pose recognition method includes steps S110 to S150.
Step S110: obtain multiple video frames in chronological order, and detect whether the video frames contain a face region.
In some embodiments, multiple video frames are collected in real time by a camera; multiple video frames may be collected at once, or one frame may be collected and processed before the next frame is collected. In other embodiments, a video is stored locally on the terminal or server, or read from the network, and multiple video frames are then obtained from the video; specifically, several consecutive video frames may be obtained, or video frames may be obtained at intervals.
In some embodiments, if a preset recognition trigger instruction is obtained, video frames are obtained in chronological order, and each video frame is then checked frame by frame for the presence of a face, i.e., whether the video frame contains a face region.
In some embodiments, detecting whether a video frame contains a face region is implemented with the face detector of Dlib, a C++ open-source toolkit that includes machine learning algorithms, or with the Haar cascade face detector of OpenCV.
Illustratively, if no face is detected in the obtained video frames, prompt information indicating that no face is detected is returned; in this case the subsequent steps of head pose recognition are not performed.
In some embodiments, if at some moment, while performing face detection on an obtained video frame, it is detected that this video frame contains a face region, step S120 is performed.
In some embodiments, if at some moment it is detected that a video frame contains a face region, face detection continues on the following video frames, and step S120 is performed only if multiple consecutive video frames all contain a face region. This avoids triggering the head pose recognition steps when a face appears only briefly.
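As an illustrative sketch of this detection-and-gating step (assuming OpenCV's bundled haarcascade_frontalface_default.xml, a local camera, and an arbitrary run length of 5 consecutive frames, none of which are fixed by the patent):

    import cv2

    REQUIRED_CONSECUTIVE = 5  # assumed gating threshold; the patent does not fix a number

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def wait_for_stable_face(capture):
        """Read frames in chronological order; return the first frame of a run of
        REQUIRED_CONSECUTIVE frames that all contain a face region."""
        streak, first_frame, first_face = 0, None, None
        while True:
            ok, frame = capture.read()
            if not ok:
                return None, None  # no face detected in the available frames
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces) > 0:
                if streak == 0:
                    first_frame, first_face = gray, tuple(faces[0])  # candidate reference frame
                streak += 1
                if streak >= REQUIRED_CONSECUTIVE:
                    return first_frame, first_face  # (x, y, w, h) of the face region
            else:
                streak = 0

    cap = cv2.VideoCapture(0)  # real-time collection by camera
    ref_frame, face_box = wait_for_stable_face(cap)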
Step S120: if a face region is detected in a video frame, determine several corner points from the face region based on a corner detection algorithm.
In some embodiments, if a face region is detected in a certain video frame for the first time, corner detection is performed on that video frame immediately by the corner detection algorithm; in other embodiments, when a face region has been detected in multiple consecutive video frames, corner detection may be performed by the corner detection algorithm on any one of these video frames.
Illustratively, if a face region in a frontal-face state is detected in a video frame, several corner points are determined from that face region based on the corner detection algorithm.
Illustratively, the video frame on which corner detection is performed may be marked as the reference video frame. For example, the frame information of the reference video frame, such as its time tag or frame number, may be stored in a preset storage region.
Specifically, corner detection is performed on the reference video frame based on the corner detection algorithm, and several corner points located at least in the face region can be determined. For example, corner detection is performed on the rectangular region where the face is located in the reference video frame.
A corner point can be a point in the image between edges; such points are distinctive and well suited to tracking by an optical flow method. Illustratively, several corner points are detected in the face region of the reference video frame based on the Shi-Tomasi corner detection method.
The position of each corner point in this video frame is its reference position.
Specifically, the position in the reference video frame of each corner point determined by the corner detection algorithm is that corner point's reference position.
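A minimal sketch of this corner detection step, assuming OpenCV's goodFeaturesToTrack (its implementation of the Shi-Tomasi method) and the hypothetical ref_frame and face_box values from the detection sketch above; maxCorners and qualityLevel are illustrative values, not taken from the patent:

    import cv2
    import numpy as np

    def detect_reference_corners(ref_frame, face_box, max_corners=100):
        """Detect Shi-Tomasi corners inside the face region; their positions in the
        reference video frame are the reference positions."""
        x, y, w, h = face_box
        mask = np.zeros_like(ref_frame)          # restrict detection to the face region
        mask[y:y + h, x:x + w] = 255
        return cv2.goodFeaturesToTrack(
            ref_frame, maxCorners=max_corners, qualityLevel=0.01,
            minDistance=7, mask=mask)            # shape (n, 1, 2), float32 positions

    base_positions = detect_reference_corners(ref_frame, face_box)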
In some embodiments, as shown in Fig. 3, step S120 of determining several corner points from the face region based on the corner detection algorithm if a face region is detected in a video frame includes steps S121 to S123.
Step S121: if a face region is detected in a video frame, perform face recognition on the face region to obtain the user information corresponding to the video frame.
In this embodiment, the head pose of a user is recognized only when the user has preset permission.
Illustratively, facial features are first extracted from the face region in a certain video frame; the extracted facial features are then compared with the user facial features in a user data storage region, and the user information corresponding to the user facial features closest to the extracted facial features is obtained.
Step S122: judge whether the corresponding user has preset permission according to the user information.
Illustratively, permission data corresponding to each piece of user information is also stored in the user data storage region. According to the user information corresponding to the video frame, it can be queried whether the corresponding user has the required permission, such as head pose recognition permission or a control permission associated with head pose recognition permission.
Step S123: if it is determined that the user has preset permission, detect several corner points from the face region based on the corner detection algorithm.
If step S122 determines that the user has preset permission, several corner points are detected from the face region based on the corner detection algorithm, which facilitates the subsequent corner tracking steps.
Illustratively, as shown in Fig. 3, after step S122 of judging whether the corresponding user has preset permission according to the user information, the method further includes step S124.
Step S124: if it is determined that the user does not have preset permission, output information indicating that the user does not have preset permission.
If step S122 determines that the user does not have preset permission, corresponding information is output to indicate that the user corresponding to the video frame does not have preset permission.
After corner detection in the face region of the reference video frame is completed, the corner points are tracked in the remaining video frames. For example, for any two adjacent video frames, the corner points in the current frame corresponding to the corner points of the previous frame are found; thus, for at least one video frame after the reference video frame, the corner points corresponding to the several detected corner points and the movement vectors of these corner points can be calculated.
Step S130: calculate the corner point movement data of the next video frame based on the optical flow method.
In some embodiments, the tracking of the corner points is implemented based on the Lucas-Kanade (LK) optical flow method.
Optical flow (optical flow or optic flow) is a concept in the detection of object motion in the visual field. It describes the motion of an observed object, surface or edge caused by motion relative to the observer. Optical flow is the "instantaneous velocity" of pixel motion of a spatially moving object on the observation imaging plane. The study of optical flow uses the temporal variation and correlation of pixel intensity data in an image sequence to determine the "motion" of the respective pixel positions; the purpose of studying the optical flow field is to approximate, from the image sequence, the motion field that cannot be obtained directly.
The optical flow method realizes the tracking of corner points based on the invariance of light intensity, and specifically relies on two assumptions. First, between adjacent frames, the light intensity of a point does not change over time but remains relatively constant, and adjacent video frames are taken at consecutive times; equivalently, the motion of objects between adjacent frames is comparatively small. Second, when a point moves across successive frames, the points around it have the same motion tendency, i.e., spatial consistency holds.
In the ideal case, the camera records the change of head pose: the head region moves from some position in one video frame to another position in the next video frame, but the head region itself is unchanged, i.e., the brightness value (light intensity) of the pixels of the face region in the picture is constant. The brightness constancy formula is as follows:
I(x, y, t) = I(x + dx, y + dy, t + dt)
where I(x, y, t) denotes the light intensity of a certain point in a video frame, and I(x + dx, y + dy, t + dt) denotes the light intensity of the same point in the next video frame; dx and dy denote the distances the point moves in the x and y directions between the two frames, and dt denotes the change in time, which can be the time difference between the two frames.
By performing a Taylor series expansion on both sides of the above brightness constancy formula, eliminating the common term I(x, y, t) and dividing by dt, the optical flow constraint equation is obtained, which reflects a correspondence between light intensity and velocity:
Ix·u + Iy·v + It = 0
where,
Ix and Iy denote the gradients of the light intensity in the x and y directions between the two frames, It denotes the gradient of the light intensity over time, and u and v denote the components in the x and y directions of the velocity at which the point moves between the two frames; u and v are the unknowns to be solved.
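For clarity, the expansion step that the text summarizes can be written out as follows (a standard derivation, reconstructed here rather than quoted from the patent):

    I(x + dx, y + dy, t + dt) ≈ I(x, y, t) + Ix·dx + Iy·dy + It·dt
    ⇒ Ix·dx + Iy·dy + It·dt = 0          (by brightness constancy)
    ⇒ Ix·u + Iy·v + It = 0,   with u = dx/dt and v = dy/dt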
According to the spatial consistency assumption, the points around a corner point have the same motion trajectory as the corner point itself. By combining the optical flow equations of a corner point and the eight points around it, nine equations in total, the two unknowns u and v can be solved.
Specifically, writing one constraint equation Ix(pi)·u + Iy(pi)·v = -It(pi) for each of the nine points p1, ..., p9 gives the over-determined system A·V = b, where the i-th row of A is (Ix(pi), Iy(pi)), b = -(It(p1), ..., It(p9))^T, and V = (u, v)^T denotes the velocity vector of the corner point.
The purpose of the optical flow computation is to make the value of ||A·V - b||^2 minimal, which yields the least-squares solution V = (A^T·A)^(-1)·A^T·b, i.e., the velocity vector V of the corner point.
After the velocity vector V of a corner point is obtained, the movement vector of the corner point between two adjacent video frames can be calculated, including its components dx and dy in the x and y directions respectively.
Illustratively, the movement vector of each corner point between the aforementioned reference video frame, i.e., the video frame corresponding to the reference positions, and the first video frame after the reference video frame can be calculated based on the optical flow method; the movement vector of each corner point between the first and the second video frame after the reference video frame can also be calculated; and, in general, the movement vector of each corner point between the N-th and the (N+1)-th video frame after the reference video frame can be calculated. A movement vector between adjacent video frames may be defined as a single-step movement vector.
Illustratively, the movement vector of each corner point between the reference video frame and the first video frame after it can be calculated based on the optical flow method, i.e., the single-step movement vector of each corner point in the first video frame after the reference video frame; the position of each corner point in the first video frame can then be obtained by adding the reference position of each corner point to its corresponding single-step movement vector in the first video frame. Illustratively, if the reference position of a certain corner point in the reference video frame is (x, y), and the single-step movement vector of the corner point between the reference video frame and the first video frame is (wx, wy), then the position of the corresponding corner point in the first video frame is (x + wx, y + wy).
Afterwards, according to the positions of the corresponding corner points in the first video frame, the movement vector of each corner point between the first and the second video frame after the reference video frame can be calculated based on the optical flow method, i.e., the single-step movement vector of each corner point in the second video frame after the reference video frame; the position of each corner point in the second video frame can then be obtained by adding each corner point's position in the first video frame to its corresponding single-step movement vector.
And so on: according to the positions of the corresponding corner points in the N-th video frame after the reference video frame, the movement vector of each corner point between the N-th and the (N+1)-th video frame can be calculated based on the optical flow method, i.e., the single-step movement vector of each corner point in the (N+1)-th video frame after the reference video frame; the position of each corner point in the (N+1)-th video frame can then be obtained by adding each corner point's position in the N-th video frame to its corresponding single-step movement vector for the (N+1)-th video frame.
Specifically, the corner point movement data of a certain video frame calculated in step S130, e.g., the N-th video frame after the reference video frame, comprises the total movement vector of each corner point in that video frame relative to its corresponding reference position, i.e., the total movement vector by which a corner point moves from its reference position in the reference video frame to its corresponding position in the N-th video frame.
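A minimal sketch of this single-step tracking, assuming OpenCV's pyramidal Lucas-Kanade implementation calcOpticalFlowPyrLK (the patent describes basic LK; the pyramidal variant is what OpenCV provides), with the window and termination parameters shown as arbitrary choices:

    import cv2

    lk_params = dict(winSize=(15, 15), maxLevel=2,
                     criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

    def track_single_step(prev_gray, next_gray, prev_points):
        """Return corner positions in the next frame, the single-step movement
        vectors relative to the previous frame, and a mask of tracked corners."""
        next_points, status, _err = cv2.calcOpticalFlowPyrLK(
            prev_gray, next_gray, prev_points, None, **lk_params)
        good = status.ravel() == 1                     # keep successfully tracked corners
        next_points, prev_points = next_points[good], prev_points[good]
        return next_points, next_points - prev_points, good   # (dx, dy) per corner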
In some embodiments, as shown in Fig. 4, step S130 of calculating the corner point movement data of the next video frame based on the optical flow method includes steps S131 to S133.
Step S131: obtain the next video frame.
Illustratively, the next video frame obtained in step S131 is the (N+1)-th video frame after the reference video frame, where N is a natural number.
Illustratively, when N is 0, the (N+1)-th video frame after the reference video frame is the 1st video frame after the reference video frame, and the N-th video frame after the reference video frame is the reference video frame itself.
Step S132: calculate the single-step movement vector of each corner point in the next video frame based on the optical flow method.
Specifically, the single-step movement vector is the movement vector of each corner point in the next video frame relative to the corresponding corner point in the previous frame.
Illustratively, the single-step movement vector comprises the distances a corner point moves in the x and y directions between adjacent video frames; the single-step movement vector then contains two elements.
Illustratively, the single-step movement vector of each corner point in the (N+1)-th video frame is calculated based on the Lucas-Kanade (LK) optical flow method; the single-step movement vector is the movement vector of each corner point in the (N+1)-th video frame relative to the corresponding corner point in the N-th video frame, i.e., the movement vector by which each corner point moves from its position in the N-th video frame after the reference video frame to its corresponding position in the (N+1)-th video frame.
Illustratively, the position of each corner point in the N-th video frame after the reference video frame can be obtained by adding each corner point's position in the (N-1)-th video frame to its corresponding single-step movement vector for the N-th video frame. Based on the optical flow method, the single-step movement vector of each corner point in the (N+1)-th video frame can then be calculated from the positions of the corner points in the N-th video frame.
Step S133: superimpose the single-step movement vectors onto the corresponding total movement vectors of the previous video frame, and store the resulting total movement vectors of the next video frame as the corner point movement data of the next video frame.
Illustratively, the single-step movement vector of each corner point in the (N+1)-th video frame calculated in step S132 is added to the total movement vector of the corresponding corner point in the corner point movement data of the N-th video frame, to obtain the total movement vector of each corner point in the (N+1)-th video frame, i.e., the total movement vector by which each corner point moves from its reference position in the reference video frame to its corresponding position in the (N+1)-th video frame.
In some embodiments, step S130 calculates the corner point movement data of the first video frame after the reference video frame based on the optical flow method.
In this embodiment, N equals 0 and the N-th video frame after the reference video frame is the reference video frame itself; the next video frame obtained in step S131 is the first video frame after the video frame corresponding to the reference positions, i.e., the first video frame after the reference video frame.
In this embodiment, the previous video frame of the next video frame in step S133, i.e., the N-th video frame, is the reference video frame itself; the total movement vectors of this 0th video frame may, for example, be initialized as all-zero vectors. Superimposing the single-step movement vectors onto all-zero total movement vectors still yields the corresponding single-step movement vectors.
Therefore, after the single-step movement vector of each corner point in the first video frame after the reference video frame is calculated based on the optical flow method in step S132, as shown in Fig. 5, step S133 of superimposing the single-step movement vectors onto the corresponding total movement vectors of the previous video frame and storing the resulting total movement vectors of the next video frame as the corner point movement data of the next video frame comprises:
Step S1331: if the obtained next video frame is the first video frame after the video frame corresponding to the reference positions, store the single-step movement vectors as the total movement vectors, i.e., as the corner point movement data of the next video frame.
In some embodiments, if the next video frame obtained in step S131 is not the first video frame after the video frame corresponding to the reference positions, the sum of the single-step movement vectors over all video frames after the video frame corresponding to the reference positions is stored as the total movement vectors, i.e., as the corner point movement data of the next video frame.
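Tying steps S131 to S133 together, a sketch of the running accumulation, building on the hypothetical cap, ref_frame, base_positions and track_single_step names assumed in the earlier sketches:

    import cv2
    import numpy as np

    prev_gray = ref_frame
    points = base_positions.copy()
    total = np.zeros_like(base_positions)   # total movement vectors; all-zero for frame 0

    while True:
        ok, frame = cap.read()              # step S131: obtain the next video frame
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        points, single_step, good = track_single_step(prev_gray, gray, points)   # step S132
        total = total[good] + single_step   # step S133: superimpose onto previous totals
        corner_movement_data = total        # total vectors relative to the reference positions
        prev_gray = gray
        # ...hand corner_movement_data to the feature-vector step S140 here...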
Step S140: process the corner point movement data according to the preset processing rule to obtain a feature vector.
In some embodiments, for the multiple video frames obtained in step S110 under different situations, the number of corner points determined in step S120 may also differ; the corner point movement data of a certain video frame calculated in step S130 can be processed according to the preset processing rule, to obtain a feature vector of preset format.
In some embodiments, as shown in Fig. 6, step S140 of processing the corner point movement data according to the preset processing rule to obtain a feature vector comprises:
Step S141: obtain the number of total movement vectors in the corner point movement data.
The corner point movement data calculated in step S130 comprises the total movement vector of each corner point in the corresponding video frame. Illustratively, if the corner point movement data comprises the total movement vectors corresponding to 100 corner points, the number of total movement vectors in the corner point movement data is 100.
Step S142: add all the total movement vectors in the corner point movement data, to obtain a movement sum vector.
Illustratively, the total movement vector corresponding to each corner point in the corner point movement data comprises the movement distances of the corner point in the X and Y directions relative to its corresponding reference position; the movement sum vector obtained by adding the total movement vectors of the corner points therefore comprises two elements, corresponding to the X and Y directions respectively.
For example, the total movement vectors corresponding to 100 corner points, (x1, y1), (x2, y2), ..., (x100, y100), are added to obtain the movement sum vector (x0, y0).
Step S143: divide the movement sum vector by the number of total movement vectors, to obtain the feature vector corresponding to the next video frame.
Illustratively, the movement sum vector (x0, y0) is divided by the number of total movement vectors in the corner point movement data, 100, to obtain the feature vector corresponding to the video frame, which represents the average movement distances in the x and y directions of all corner points of the video frame relative to their reference positions. Thus, for the multiple video frames obtained in step S110 under different situations, a feature vector of preset format can be obtained in step S140, which facilitates subsequent recognition.
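A short sketch of steps S141 to S143 over the corner point movement data assumed above:

    import numpy as np

    def feature_vector(corner_movement_data):
        """Average total movement over all corners: one (dx, dy) pair."""
        totals = corner_movement_data.reshape(-1, 2)   # one total movement vector per corner
        return totals.sum(axis=0) / len(totals)        # movement sum vector / their number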
Step S150: if the feature vector satisfies the preset recognition threshold condition, input the feature vector into the trained SVM classifier to obtain the head pose class corresponding to the video frame.
In this embodiment, the preset recognition threshold condition eliminates small jitters of the head, avoiding misjudgments in the head pose recognition result.
In some embodiments, as shown in Fig. 7, step S150 of inputting the feature vector into the trained SVM classifier to obtain the head pose class corresponding to the video frame if the feature vector satisfies the preset recognition threshold condition comprises:
Step S151: if the number of components of the feature vector that do not fall within the threshold range is not less than the preset number, input the feature vector into the trained SVM classifier to obtain the head pose class corresponding to the video frame.
In some embodiments, the feature vector obtained in step S140 comprises multiple components, and at least one threshold range is set for these components; the threshold ranges corresponding to the components may be identical or different.
Illustratively, the feature vector comprises two components corresponding to the X and Y directions. As shown in Fig. 8, the box in the middle of the coordinate system illustrates the threshold ranges corresponding to the two components: the threshold range of the component corresponding to the X direction is -50 to +50, and the threshold range of the component corresponding to the Y direction is -25 to +25.
Illustratively, if the absolute values of the components of the feature vector of a certain video frame in the x and y directions are both smaller than the corresponding preset thresholds, it is determined that the feature vector of the video frame does not satisfy the recognition threshold condition, indicating that the head is not moving or that a small jitter of the head has not reached the degree required for head pose recognition.
Illustratively, if at least one component of the feature vector does not fall within its threshold range, i.e., the number of components not falling within the threshold range is not less than 1, it is determined that the feature vector satisfies the preset recognition threshold condition, indicating a comparatively large head movement for which head pose recognition is needed; specifically, the feature vector can be input into the trained SVM (Support Vector Machine) classifier to obtain the head pose class corresponding to the video frame.
SVM is a fast and reliable classification algorithm that can perform tasks well when the amount of data is limited; it is a supervised learning model with associated learning algorithms that analyze data for classification and regression analysis. An SVM classifier represents the examples, i.e., the feature vectors, as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible; new examples are then mapped into the same space and predicted to belong to a category based on which side of the gap they fall on.
Fig. 8 illustrates the effect after the SVM classifier has been trained on a training data set; four classifiers are obtained after training, which can be understood as four one-vs-rest SVMs. The training data set comprises multiple feature vectors corresponding to different head poses; for example, it comprises several feature vectors corresponding to the head pose classes head up (UP), head down (DOWN), head right (RIGHT) and head left (LEFT), together with the labels of the feature vectors. The rectangular window in the middle of the coordinate system represents the predefined threshold window: only a feature vector beyond the window is determined to satisfy the preset recognition threshold condition and may be input into the SVM classifier for head pose prediction. If a feature vector does not satisfy the preset recognition threshold condition, it is regarded as corresponding to a small jitter of the head; only movements whose amplitude exceeds the defined threshold enter the SVM classifier for head pose recognition.
After the feature vector corresponding to a certain video frame is input into the trained SVM classifier, the trained SVM classifier can output the head pose class corresponding to the video frame.
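A sketch of the threshold gate plus one-vs-rest SVM prediction, assuming scikit-learn; the training samples, labels, and the convention that "up" means negative dy are placeholders, not data from the patent:

    import numpy as np
    from sklearn.svm import SVC

    # Hypothetical labeled feature vectors (dx, dy) for the four pose classes.
    X_train = np.array([[0, -60], [0, 55], [70, 0], [-75, 5]], dtype=float)
    y_train = ["UP", "DOWN", "RIGHT", "LEFT"]

    clf = SVC(decision_function_shape="ovr")   # one-vs-rest, as Fig. 8 describes
    clf.fit(X_train, y_train)

    X_THRESHOLD, Y_THRESHOLD = 50, 25          # the threshold window of Fig. 8

    def recognize(feature):
        """Gate small jitters with the threshold window, then classify."""
        dx, dy = feature
        outside = int(abs(dx) > X_THRESHOLD) + int(abs(dy) > Y_THRESHOLD)
        if outside < 1:                        # preset number of components = 1
            return None                        # small jitter: no recognition performed
        return clf.predict([feature])[0]       # head pose class for the video frame

In practice the classifier would be trained offline on many labeled feature vectors per pose class; the four samples above only make the sketch runnable.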
In some embodiments, as shown in Fig. 9, after step S140 of processing the corner point movement data according to the preset processing rule to obtain a feature vector, the method further includes:
Step S160: if the feature vector does not satisfy the preset recognition threshold condition, return to the step of calculating the corner point movement data of the next video frame based on the optical flow method and continue, until the feature vector satisfies the preset recognition threshold condition.
In some embodiments, step S130 has calculated, based on the optical flow method, the corner point movement data of the first video frame after the video frame corresponding to the reference positions, i.e., the first video frame after the reference video frame, and step S140 has processed the corner point movement data according to the preset processing rule to obtain a feature vector; but the feature vector corresponding to the first video frame does not satisfy the preset recognition threshold condition. The feature vector corresponding to the second video frame after the reference video frame then needs to be obtained, to judge whether it satisfies the preset recognition threshold condition; if the feature vector corresponding to the second video frame still does not satisfy the preset recognition threshold condition, the feature vector corresponding to the third video frame after the reference video frame needs to be obtained, and so on until the feature vector corresponding to some video frame after the reference video frame satisfies the preset recognition threshold condition, whereupon the feature vector is input into the trained SVM classifier through step S150 to obtain the head pose class corresponding to the video frame.
Specifically, if the next video frame obtained in step S131 is the first video frame after the video frame corresponding to the reference positions, then after the corner point movement data of that video frame is calculated based on the optical flow method in step S130 and processed into a feature vector in step S140, if the feature vector does not satisfy the preset recognition threshold condition, the method returns to step S131 to obtain the next video frame, i.e., the second video frame after the reference video frame. Step S132 then calculates the single-step movement vector of each corner point in the second video frame based on the optical flow method; step S133 superimposes the calculated single-step movement vectors onto the total movement vectors of the first video frame obtained in step S1331, and the superimposed results are stored as the total movement vectors of the second video frame, i.e., the corner point movement data of the second video frame; afterwards, the corner point movement data of the second video frame is processed to obtain a feature vector.
In some other embodiments, as shown in Fig. 10, step S110 of obtaining multiple video frames in chronological order and detecting whether the video frames contain a face region comprises:
Step S111: if a recognition trigger instruction is obtained, obtain multiple video frames in chronological order through a camera, and detect whether the video frames contain a face region.
Illustratively, if the user picks up the mobile phone and raises it towards themselves, the processor of the mobile phone detects this action through a gravity sensor or the like, and the processor of the mobile phone has thereby obtained a recognition trigger instruction; the processor of the mobile phone then collects video frames through the camera of the mobile phone and detects, in the order of image collection, whether the user's face is present in each frame image.
In this embodiment, after step S150 of inputting the feature vector into the trained SVM classifier to obtain the head pose class corresponding to the video frame if the feature vector satisfies the preset recognition threshold condition, the method further includes:
Step S170: start the application program corresponding to the head pose class according to the head pose class corresponding to the video frame.
Illustratively, after step S150 obtains the head pose class corresponding to a certain video frame, the head pose class can be output as the trigger instruction of other processing tasks. For example, different head pose classes are bound to corresponding application programs, and the corresponding application program can then be started according to the recognized head pose class of the user.
Illustratively, people with hand disabilities can use head poses to control a mobile phone, or combine them with other assistive robots, to complete control functions and improve their quality of life and independent living ability.
In the head pose recognition method provided by the above embodiments, the reference video frame for pose recognition is determined by face detection; corner points are then tracked across the video frames based on the optical flow method to obtain the corner point movement data of each video frame; and the feature vector derived from the corner point movement data that satisfies the recognition threshold condition is input into the trained SVM classifier to obtain the head pose class corresponding to the video frame. Head pose recognition from video is thus achieved without relying on a wearable device, so that head pose recognition can be applied more widely.
Please refer to Fig. 11, which is a schematic structural diagram of a head pose recognition apparatus provided by an embodiment of the application; the head pose recognition apparatus can be configured in a server or a terminal to execute the aforementioned head pose recognition method.
As shown in Fig. 11, the head pose recognition apparatus includes: a video frame obtaining module 110, a reference determining module 120, an optical flow tracking module 130, a feature processing module 140 and a pose recognition module 150.
The video frame obtaining module 110 is configured to obtain multiple video frames in chronological order and detect whether the video frames contain a face region.
The reference determining module 120 is configured to, if a face region is detected in a video frame, determine several corner points from the face region based on the corner detection algorithm, the position of each corner point in that video frame being its reference position.
In some embodiments, as shown in Fig. 12, the reference determining module 120 includes:
a face recognition submodule 121, configured to, if a face region is detected in a video frame, perform face recognition on the face region to obtain the user information corresponding to the video frame;
a permission judging submodule 122, configured to judge whether the corresponding user has preset permission according to the user information;
a corner detection submodule 123, configured to, if it is determined that the user has preset permission, detect several corner points from the face region based on the corner detection algorithm.
The optical flow tracking module 130 is configured to calculate the corner point movement data of the next video frame based on the optical flow method, the corner point movement data comprising the total movement vector of each corner point in the video frame relative to its corresponding reference position.
In some embodiments, as shown in Fig. 12, the optical flow tracking module 130 includes:
a video frame obtaining submodule 131, configured to obtain the next video frame;
a single-step calculation submodule 132, configured to calculate the single-step movement vector of each corner point in the next video frame based on the optical flow method, the single-step movement vector being the movement vector of each corner point in the next video frame relative to the corresponding corner point in the previous frame;
a vector superposition submodule 133, configured to superimpose the single-step movement vectors onto the corresponding total movement vectors of the previous video frame of the next video frame, and store the resulting total movement vectors of the next video frame as the corner point movement data of the next video frame.
In some embodiments, if the next video frame obtained by the video frame obtaining submodule 131 is the first video frame after the video frame corresponding to the reference positions, the vector superposition submodule 133 is configured to store the single-step movement vectors as the total movement vectors, i.e., as the corner point movement data of the next video frame.
The feature processing module 140 is configured to process the corner point movement data according to the preset processing rule, to obtain a feature vector.
In some embodiments, as shown in Fig. 12, the feature processing module 140 includes:
a number obtaining submodule 141, configured to obtain the number of total movement vectors in the corner point movement data;
a vector addition submodule 142, configured to add all the total movement vectors in the corner point movement data, to obtain a movement sum vector;
a vector generation submodule 143, configured to divide the movement sum vector by the number of total movement vectors, to obtain the feature vector corresponding to the next video frame.
The pose recognition module 150 is configured to, if the feature vector satisfies the preset recognition threshold condition, input the feature vector into the trained SVM classifier to obtain the head pose class corresponding to the video frame.
In some embodiments, as shown in Fig. 12, the pose recognition module 150 includes:
a pose recognition submodule 151, configured to, if the number of components of the feature vector that do not fall within the threshold range is not less than the preset number, input the feature vector into the trained SVM classifier to obtain the head pose class corresponding to the video frame.
In some embodiments, as shown in Fig. 12, the apparatus further includes a return module 160.
The return module 160 is configured to, if the feature vector does not satisfy the preset recognition threshold condition, return to the step of obtaining the corner point movement data of the next video frame based on the optical flow method and continue, until the feature vector satisfies the preset recognition threshold condition.
It should be noted that, as is clear to those skilled in the art, for convenience and brevity of description, the specific working processes of the apparatus, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
The methods and apparatuses of the present application can be used in numerous general-purpose or special-purpose computing system environments or configurations, such as: personal computers, server computers, handheld or portable devices, multiprocessor systems, laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics devices, network PCs, minicomputers, mainframe computers, and distributed computing environments including any of the above systems or devices.
Illustratively, the above methods and apparatuses can be implemented in the form of a computer program, and the computer program can run on a computer device as shown in Fig. 13.
Please refer to Fig. 13, which is a schematic structural diagram of a computer device provided by an embodiment of the application. The computer device can be a server or a terminal.
Referring to Fig. 13, the computer device includes a processor, a memory and a network interface connected through a system bus, where the memory may include a non-volatile storage medium and an internal memory.
The non-volatile storage medium can store an operating system and a computer program. The computer program includes program instructions which, when executed, can cause the processor to execute any of the head pose recognition methods.
The processor provides computing and control capability and supports the operation of the entire computer device.
The internal memory provides an environment for the running of the computer program in the non-volatile storage medium; when the computer program is executed by the processor, the processor can be caused to execute any of the head pose recognition methods.
The network interface is used for network communication, such as sending assigned tasks. Those skilled in the art will understand that the illustrated structure of the computer device is only a block diagram of the part of the structure relevant to the solution of the application and does not constitute a limitation on the computer device to which the solution of the application is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
It should be understood that the processor can be a central processing unit (CPU), and can also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor can be a microprocessor, or the processor can be any conventional processor, etc.
In one embodiment, the processor is configured to run a computer program stored in the memory, to implement the following steps: obtaining multiple video frames in chronological order, and detecting whether the video frames contain a face region; if a face region is detected in a video frame, determining several corner points from the face region based on the corner detection algorithm, the position of each corner point in that video frame being its reference position; calculating the corner point movement data of the next video frame based on the optical flow method, the corner point movement data comprising the total movement vector of each corner point in the video frame relative to its corresponding reference position; processing the corner point movement data according to the preset processing rule to obtain a feature vector; and, if the feature vector satisfies the preset recognition threshold condition, inputting the feature vector into the trained SVM classifier to obtain the head pose class corresponding to the video frame.
Specifically, after processing the corner point movement data according to the preset processing rule to obtain a feature vector, the processor also implements: if the feature vector does not satisfy the preset recognition threshold condition, returning to the step of obtaining the corner point movement data of the next video frame based on the optical flow method and continuing, until the feature vector satisfies the preset recognition threshold condition.
Specifically, when detecting several corner points from the face region based on the corner detection algorithm if a face region is detected in a video frame, the processor specifically implements: if a face region is detected in a video frame, performing face recognition on the face region to obtain the user information corresponding to the video frame; judging whether the corresponding user has preset permission according to the user information; and, if it is determined that the user has preset permission, detecting several corner points from the face region based on the corner detection algorithm.
Specifically, when calculating the corner point movement data of the next video frame based on the optical flow method, the processor specifically implements: obtaining the next video frame; calculating the single-step movement vector of each corner point in the next video frame based on the optical flow method, the single-step movement vector being the movement vector of each corner point in the next video frame relative to the corresponding corner point in the previous frame; and superimposing the single-step movement vectors onto the corresponding total movement vectors of the previous video frame of the next video frame, and storing the resulting total movement vectors of the next video frame as the corner point movement data of the next video frame.
Specifically, if the next video frame obtained when the processor obtains the next video frame is the first video frame after the video frame corresponding to the reference positions, then, when superimposing the single-step movement vectors onto the corresponding total movement vectors of the previous video frame of the next video frame and storing the resulting total movement vectors of the next video frame as the corner point movement data of the next video frame, the processor implements: storing the single-step movement vectors as the total movement vectors, i.e., as the corner point movement data of the next video frame.
Specifically, when processing the corner point movement data according to the preset processing rule to obtain a feature vector, the processor specifically implements: obtaining the number of total movement vectors in the corner point movement data; adding all the total movement vectors in the corner point movement data, to obtain a movement sum vector; and dividing the movement sum vector by the number of total movement vectors, to obtain the feature vector corresponding to the next video frame.
Specifically, when inputting the feature vector into the trained SVM classifier to obtain the head pose class corresponding to the video frame if the feature vector satisfies the preset recognition threshold condition, the processor specifically implements: if the number of components of the feature vector that do not fall within the threshold range is not less than the preset number, inputting the feature vector into the trained SVM classifier to obtain the head pose class corresponding to the video frame.
As can be seen from the above description of the embodiments, those skilled in the art can clearly understand that the application can be realized by means of software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solution of the application, in essence or in the part contributing to the existing technology, can be embodied in the form of a software product; the computer software product can be stored in a storage medium, such as a ROM/RAM, a magnetic disk or an optical disc, and includes instructions for causing a computer device (which can be a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments, or in certain parts of the embodiments, of the application, such as:
a computer-readable storage medium storing a computer program, the computer program including program instructions; the processor executes the program instructions to realize any head pose recognition method provided by the embodiments of the application.
The computer-readable storage medium can be an internal storage unit of the computer device described in the foregoing embodiments, such as the hard disk or memory of the computer device. The computer-readable storage medium can also be an external storage device of the computer device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card equipped on the computer device.
The above are only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any person familiar with the technical field can readily conceive of various equivalent modifications or substitutions within the technical scope disclosed in the present application, and these modifications or substitutions shall all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A head pose recognition method, characterized by comprising:
obtaining multiple video frames in chronological order, and detecting whether the video frames include a face region;
if a face region is detected in a video frame, determining several corner points from the face region based on a corner detection algorithm, the position of each corner point in that video frame being a reference position;
calculating corner movement data of the next video frame based on an optical flow method, the corner movement data including a total motion vector of each corner point in the video frame relative to its corresponding reference position;
processing the corner movement data according to a preset processing rule to obtain a feature vector; and
if the feature vector meets a preset recognition threshold condition, inputting the feature vector into a trained SVM classifier to obtain a head pose class corresponding to the video frame.
2. The head pose recognition method according to claim 1, characterized in that, after processing the corner movement data according to the preset processing rule to obtain the feature vector, the method further comprises:
if the feature vector does not meet the preset recognition threshold condition, returning to the step of calculating the corner movement data of the next video frame based on the optical flow method and continuing execution, until the feature vector meets the preset recognition threshold condition.
3. The head pose recognition method according to claim 1, characterized in that, if a face region is detected in a video frame, detecting several corner points from the face region based on the corner detection algorithm comprises:
if a face region is detected in a video frame, performing face recognition on the face region to obtain user information corresponding to the video frame;
judging, according to the user information, whether the corresponding user has a preset permission; and
if it is determined that the user has the preset permission, detecting several corner points from the face region based on the corner detection algorithm.
4. The head pose recognition method according to claim 2, characterized in that calculating the corner movement data of the next video frame based on the optical flow method comprises:
obtaining the next video frame;
calculating, based on the optical flow method, a single-step motion vector of each corner point in the next video frame, the single-step motion vector being the motion vector of each corner point in the next video frame relative to the corresponding corner point in the previous frame; and
superposing the single-step motion vectors with the corresponding total motion vectors of the previous video frame, and storing the total motion vectors of the next video frame obtained by the superposition as the corner movement data of the next video frame.
5. The head pose recognition method according to claim 4, characterized in that superposing the single-step motion vectors with the corresponding total motion vectors of the previous video frame and storing the total motion vectors of the next video frame obtained by the superposition as the corner movement data of the next video frame comprises:
if the next video frame obtained is the first video frame after the video frame corresponding to the reference position, storing the single-step motion vectors, as the total motion vectors, as the corner movement data of the next video frame.
6. The head pose recognition method according to any one of claims 1 to 5, characterized in that processing the corner movement data according to the preset processing rule to obtain the feature vector comprises:
obtaining the number of total motion vectors in the corner movement data;
adding up all the total motion vectors in the corner movement data to obtain a movement sum vector; and
dividing the movement sum vector by the number of total motion vectors to obtain the feature vector corresponding to the next video frame.
7. The head pose recognition method according to any one of claims 1 to 5, characterized in that, if the feature vector meets the preset recognition threshold condition, inputting the feature vector into the trained SVM classifier to obtain the head pose class corresponding to the video frame comprises:
if the number of components in the feature vector that do not fall within a threshold range is not less than a preset number, inputting the feature vector into the trained SVM classifier to obtain the head pose class corresponding to the video frame.
8. A head pose recognition device, characterized by comprising:
a video frame obtaining module, configured to obtain multiple video frames in chronological order and to detect whether the video frames include a face region;
a reference determining module, configured to, if a face region is detected in a video frame, determine several corner points from the face region based on a corner detection algorithm, the position of each corner point in that video frame being a reference position;
an optical flow tracking module, configured to calculate corner movement data of the next video frame based on an optical flow method, the corner movement data including a total motion vector of each corner point in the video frame relative to its corresponding reference position;
a feature processing module, configured to process the corner movement data according to a preset processing rule to obtain a feature vector; and
a pose recognition module, configured to, if the feature vector meets a preset recognition threshold condition, input the feature vector into a trained SVM classifier to obtain a head pose class corresponding to the video frame.
9. A computer device, characterized in that the computer device comprises a memory and a processor;
the memory is configured to store a computer program; and
the processor is configured to execute the computer program and, when executing the computer program, to implement the head pose recognition method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the head pose recognition method according to any one of claims 1 to 7 is implemented.
CN201910420186.9A 2019-05-20 2019-05-20 Head pose recognition methods, device, computer equipment and storage medium Pending CN110298237A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910420186.9A CN110298237A (en) 2019-05-20 2019-05-20 Head pose recognition methods, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910420186.9A CN110298237A (en) 2019-05-20 2019-05-20 Head pose recognition methods, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110298237A true CN110298237A (en) 2019-10-01

Family

ID=68026964

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910420186.9A Pending CN110298237A (en) 2019-05-20 2019-05-20 Head pose recognition methods, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110298237A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101916496A (en) * 2010-08-11 2010-12-15 无锡中星微电子有限公司 System and method for detecting driving posture of driver
CN103479367A (en) * 2013-09-09 2014-01-01 广东工业大学 Driver fatigue detection method based on facial action unit recognition
CN104036243A (en) * 2014-06-06 2014-09-10 电子科技大学 Behavior recognition method based on light stream information
CN107358206A (en) * 2017-07-13 2017-11-17 山东大学 Micro- expression detection method that a kind of Optical-flow Feature vector modulus value and angle based on area-of-interest combine
CN109165552A (en) * 2018-07-14 2019-01-08 深圳神目信息技术有限公司 A kind of gesture recognition method based on human body key point, system and memory

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110717467A (en) * 2019-10-15 2020-01-21 北京字节跳动网络技术有限公司 Head pose estimation method, device, equipment and storage medium
CN115859755A (en) * 2023-02-17 2023-03-28 中国空气动力研究与发展中心计算空气动力研究所 Visualization method, device, equipment and medium for vector data of steady flow field
CN115859755B (en) * 2023-02-17 2023-05-26 中国空气动力研究与发展中心计算空气动力研究所 Visualization method, device, equipment and medium for steady flow field vector data
CN115984973A (en) * 2023-03-21 2023-04-18 深圳市嘉润原新显科技有限公司 Human body abnormal behavior monitoring method for peeping-proof screen

Similar Documents

Publication Publication Date Title
US7036094B1 (en) Behavior recognition system
US9750420B1 (en) Facial feature selection for heart rate detection
TWI512645B (en) Gesture recognition apparatus and method using depth images
CN108198044B (en) Commodity information display method, commodity information display device, commodity information display medium and electronic equipment
US20130251244A1 (en) Real time head pose estimation
US9147114B2 (en) Vision based target tracking for constrained environments
KR102347249B1 (en) Method and device to display screen in response to event related to external obejct
CN110298237A (en) Head pose recognition methods, device, computer equipment and storage medium
JP2009048430A (en) Customer behavior analysis device, customer behavior determination system, and customer buying behavior analysis system
JP6590609B2 (en) Image analysis apparatus and image analysis method
US20210338109A1 (en) Fatigue determination device and fatigue determination method
Sinha et al. Pose based person identification using kinect
US20120281918A1 (en) Method for dynamically setting environmental boundary in image and method for instantly determining human activity
US20190026904A1 (en) Tracking system and method thereof
KR20170084643A (en) Motion analysis appratus and method using dual smart band
CN111382637A (en) Pedestrian detection tracking method, device, terminal equipment and medium
JP6331270B2 (en) Information processing system, information processing method, and program
Sulyman et al. REAL-TIME NUMERICAL 0-5 COUNTING BASED ON HAND-FINGER GESTURES RECOGNITION.
Liu et al. Automatic fall risk detection based on imbalanced data
Li et al. Using Kinect for monitoring warehouse order picking operations
CN107665495B (en) Object tracking method and object tracking device
JP2019121904A (en) Suspicious person detection apparatus, suspicious person detection method and suspicious person detection computer program
JP6992900B2 (en) Information processing equipment, control methods, and programs
Parashar et al. Advancements in artificial intelligence for biometrics: A deep dive into model-based gait recognition techniques
US11256911B2 (en) State recognition apparatus, state recognition method, and non-transitory computer-readable medium storing program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination