CN110069996A - Head action recognition method, apparatus and electronic device - Google Patents

Head action recognition method, apparatus and electronic device

Info

Publication number
CN110069996A
CN110069996A
Authority
CN
China
Prior art keywords
video image
key point
distance
head
image frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910221098.6A
Other languages
Chinese (zh)
Inventor
王旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201910221098.6A priority Critical patent/CN110069996A/en
Publication of CN110069996A publication Critical patent/CN110069996A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification
    • G06V40/20 Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure discloses a head action recognition method, apparatus and electronic device. The method includes: obtaining a video image from an image source, the video image containing a person's head; detecting facial key points on the person's head in the video image; calculating a first distance between at least two key points in a first video image frame; calculating a second distance between the at least two key points in a second video image frame; and recognizing the action of the head according to the first distance and the second distance. The head action recognition method of the embodiments of the present disclosure recognizes different head actions by calculating the change in the distance between the same key points across two image frames. The method is fast, places low demands on computing capability, and is well suited to recognition scenarios on mobile terminals.

Description

Head action recognition method, apparatus and electronic device
Technical field
The present disclosure relates to the field of image processing, and in particular to a head action recognition method, apparatus and electronic device.
Background art
With the development of society, the demand for fast and effective automatic identity authentication has become increasingly urgent in many fields. Because biometric features are inherent attributes of a person, with strong stability and individual distinctiveness, they are the most reliable basis for identity authentication. Among them, identity authentication using facial features is the most natural and direct means; compared with other human biometric features, it is direct, friendly and convenient, and easily accepted by users.
Most current biometric liveness detection requires the user to perform specific actions, such as nodding, shaking the head or blinking, so that the system can determine whether the user is a real person. However, directly detecting the user's motion with existing methods tends to be complex and slow. Since most current application scenarios capture and recognize actions on a mobile terminal, real-time performance is poor and is limited by the terminal's capability. A head action recognition method that is fast and places low demands on computing capability is therefore needed.
Summary of the invention
According to one aspect of the present disclosure, the following technical solution is provided:
A head action recognition method, comprising:
obtaining a video image from an image source, the video image containing a person's head;
detecting facial key points on the person's head in the video image;
calculating a first distance between at least two key points in a first video image frame;
calculating a second distance between the at least two key points in a second video image frame;
recognizing the action of the head according to the first distance and the second distance.
Further, obtaining the video image from the image source, the video image containing a person's head, comprises:
capturing the video image from an image sensor, wherein the video image contains the head of at least one person.
Further, detecting the facial key points on the person's head in the video image comprises:
obtaining an image frame of the video image;
detecting facial features in the image frame;
generating the facial key points according to the facial features.
Further, calculating the first distance between at least two key points in the first video image frame comprises:
calculating the distance a1 between the nose key point and the nose-top key point in the first video image frame;
calculating the distance b1 between the nose key point and the chin key point in the first video image frame.
Further, calculating the second distance between the at least two key points in the second video image frame comprises:
calculating the distance a2 between the nose key point and the nose-top key point in the second video image frame;
calculating the distance b2 between the nose key point and the chin key point in the second video image frame.
Further, recognizing the action of the head according to the first distance and the second distance comprises:
calculating the ratio α of a1 to b1;
calculating the ratio β of a2 to b2;
if α ≠ β, recognizing the action of the head as nodding.
Further, calculating the first distance between at least two key points in the first video image frame comprises:
calculating the distance l1 from an eye-corner key point to the nose key point in the first video image frame.
Further, calculating the second distance between at least two key points in the second video image frame comprises:
calculating the distance l2 from the eye-corner key point to the nose key point in the second video image frame.
Further, recognizing the action of the head according to the first distance and the second distance comprises:
if l1 ≠ l2, recognizing the action of the head as shaking the head.
Further, the line connecting the at least two key points is a line in the vertical direction or the horizontal direction of the face.
According to another aspect of the present disclosure, the following technical solution is also provided:
A head action recognition apparatus, comprising:
a video image acquisition module, configured to obtain a video image from an image source, the video image containing a person's head;
a key point detection module, configured to detect facial key points on the person's head in the video image;
a first distance calculation module, configured to calculate a first distance between at least two key points in a first video image frame;
a second distance calculation module, configured to calculate a second distance between the at least two key points in a second video image frame;
a recognition module, configured to recognize the action of the head according to the first distance and the second distance.
Further, the video image acquisition module is further configured to:
capture the video image from an image sensor, wherein the video image contains the head of at least one person.
Further, the key point detection module further comprises:
an image frame acquisition module, configured to obtain an image frame of the video image;
a feature detection module, configured to detect facial features in the image frame;
a key point generation module, configured to generate the facial key points according to the facial features.
Further, the first distance calculation module further comprises:
a first calculation module, configured to calculate the distance a1 between the nose key point and the nose-top key point in the first video image frame;
a second calculation module, configured to calculate the distance b1 between the nose key point and the chin key point in the first video image frame.
Further, the second distance calculation module further comprises:
a third calculation module, configured to calculate the distance a2 between the nose key point and the nose-top key point in the second video image frame;
a fourth calculation module, configured to calculate the distance b2 between the nose key point and the chin key point in the second video image frame.
Further, the recognition module further comprises:
a fifth calculation module, configured to calculate the ratio α of a1 to b1;
a sixth calculation module, configured to calculate the ratio β of a2 to b2;
a first recognition submodule, configured to recognize the action of the head as nodding if α ≠ β.
Further, the first distance calculation module further comprises:
a seventh calculation module, configured to calculate the distance l1 from an eye-corner key point to the nose key point in the first video image frame.
Further, the second distance calculation module further comprises:
an eighth calculation module, configured to calculate the distance l2 from the eye-corner key point to the nose key point in the second video image frame.
Further, the recognition module further comprises:
a second recognition submodule, configured to recognize the action of the head as shaking the head if l1 ≠ l2.
Further, the line connecting the at least two key points is a line in the vertical direction or the horizontal direction of the face.
According to yet another aspect of the present disclosure, the following technical solution is also provided:
An electronic device, comprising: a memory, configured to store non-transitory computer-readable instructions; and a processor, configured to execute the computer-readable instructions such that, when executed, the processor carries out the steps of any of the head action recognition methods described above.
According to still another aspect of the present disclosure, the following technical solution is also provided:
A computer-readable storage medium, configured to store non-transitory computer-readable instructions which, when executed by a computer, cause the computer to carry out the steps of any of the head action recognition methods described above.
The present disclosure discloses a head action recognition method, apparatus and electronic device. The head action recognition method comprises: obtaining a video image from an image source, the video image containing a person's head; detecting facial key points on the person's head in the video image; calculating a first distance between at least two key points in a first video image frame; calculating a second distance between the at least two key points in a second video image frame; and recognizing the action of the head according to the first distance and the second distance. The head action recognition method of the embodiments of the present disclosure recognizes different head actions by calculating the change in the distance between the same key points across two image frames; the method is fast, places low demands on computing capability, and is well suited to recognition scenarios on mobile terminals.
The above description is only an overview of the technical solutions of the present disclosure. In order that the technical means of the disclosure may be understood more clearly and implemented in accordance with the contents of the specification, and in order to make the above and other objects, features and advantages of the disclosure more apparent, preferred embodiments are described in detail below in conjunction with the accompanying drawings.
Brief description of the drawings
Fig. 1 is a schematic flowchart of a head action recognition method according to an embodiment of the present disclosure;
Fig. 2 is a schematic diagram of facial key points according to an embodiment of the present disclosure;
Fig. 3 is a schematic structural diagram of a head action recognition apparatus according to an embodiment of the present disclosure;
Fig. 4 is a schematic structural diagram of an electronic device provided according to an embodiment of the present disclosure.
Detailed description of the embodiments
The embodiments of the present disclosure are described below through specific examples, and those skilled in the art can easily understand other advantages and effects of the present disclosure from the contents disclosed in this specification. Obviously, the described embodiments are only some of the embodiments of the present disclosure, not all of them. The present disclosure may also be implemented or applied through other different specific embodiments, and the details in this specification may be modified or changed in various ways from different viewpoints and for different applications without departing from the spirit of the present disclosure. It should be noted that, in the absence of conflict, the following embodiments and the features in the embodiments may be combined with each other. All other embodiments obtained by those of ordinary skill in the art based on the embodiments in the present disclosure without creative effort fall within the scope of protection of the present disclosure.
It should be noted that various aspects of the embodiments within the scope of the appended claims are described below. It should be apparent that the aspects described herein may be embodied in a wide variety of forms, and any specific structure and/or function described herein is merely illustrative. Based on the present disclosure, those skilled in the art will appreciate that an aspect described herein may be implemented independently of any other aspect, and that two or more of these aspects may be combined in various ways. For example, a device may be implemented and/or a method may be practiced using any number of the aspects set forth herein. In addition, such a device may be implemented and/or such a method may be practiced using structures and/or functionality other than, or in addition to, one or more of the aspects set forth herein.
It should also be noted that the drawings provided in the following embodiments only schematically illustrate the basic concept of the present disclosure. The drawings show only the components related to the present disclosure and are not drawn according to the number, shape and size of the components in actual implementation; in actual implementation, the form, quantity and proportion of each component may be changed arbitrarily, and the component layout may also be more complex.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, those skilled in the art will understand that the aspects may be practiced without these specific details.
The embodiments of the present disclosure provide a head action recognition method. The head action recognition method provided in this embodiment may be executed by a computing device, which may be implemented as software or as a combination of software and hardware, and which may be integrated in a server, a terminal device, or the like. As shown in Fig. 1, the head action recognition method mainly comprises the following steps S101 to S105. Wherein:
Step S101: obtaining a video image from an image source, the video image containing a person's head.
In the present disclosure, the image source may be a local storage space or a network storage space, and obtaining the video image from the image source includes obtaining the video image from the local storage space or from the network storage space. Regardless of where the video image is obtained from, the storage address of the video image first needs to be obtained, after which the video image is obtained from that address. The video image contains at least one image frame; it may be a video, or a picture with a dynamic effect. Any image with multiple frames can serve as the video image in the present disclosure.
In the present disclosure, the image source may also be an image sensor, and obtaining the video image from the image source then includes capturing the video image from the image sensor. An image sensor is any device that can capture images; typical image sensors are video cameras, still cameras, webcams and so on. In this embodiment, the image sensor may be the camera of a mobile terminal, for example the front or rear camera of a smartphone, and the video image captured by the camera may be displayed directly on the phone's screen. In this step, the video captured by the image sensor is obtained as the image to be processed.
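Purely as an illustration of this acquisition step, and not as part of the claimed subject matter, frame capture from such an image sensor could be sketched with OpenCV roughly as follows; the device index and the generator-style interface are assumptions of the sketch.

```python
# Illustrative sketch only: grab image frames from a camera with OpenCV.
# The device index (0) is an assumption; on a phone this would be the
# front or rear camera exposed by the platform.
import cv2

def capture_frames(device_index=0):
    cap = cv2.VideoCapture(device_index)   # open the image sensor
    try:
        while True:
            ok, frame = cap.read()         # one BGR image frame of the video image
            if not ok:
                break
            yield frame
    finally:
        cap.release()
```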
Since the technical solution of the present disclosure recognizes the action of a person's head, the video image contains at least one person's head. It can be understood that the video image may also contain the heads of multiple people; details are not repeated here.
Step S102: detecting the facial key points on the person's head in the video image.
In the present disclosure, detecting the facial key points on the person's head in the video image comprises: obtaining an image frame of the video image; detecting facial features in the image frame; and generating the facial key points according to the facial features. In this step, the image frame of the video image first needs to be obtained. The image frame is the current image frame of the video image, i.e., the frame to which the video image has been played at the current point in time, or the frame captured from the video source at the current point in time. To detect facial key points in the image frame, the face in the image frame first needs to be detected. Face detection is the process of searching any given image or image sequence with a certain strategy to determine the positions and regions of all faces, i.e., the process of determining whether faces are present in various images or image sequences and determining the number and spatial distribution of the faces. Conventionally, face detection methods can be divided into four categories: (1) knowledge-based methods, which encode typical faces into a rule base and locate the face through the relationships between facial features; (2) feature-invariant methods, which find features that remain stable under changes in pose, viewpoint or illumination and then use these features to determine the face; (3) template-matching methods, which store several standard face patterns describing the whole face and the individual facial features, and then compute the correlation between an input image and the stored patterns for detection; and (4) appearance-based methods, which, in contrast to template matching, learn models from a set of training images and use these models for detection. Optionally, one implementation of the fourth category may be used here to illustrate the face detection process. Features first need to be extracted to build the model; this embodiment uses Haar features as the key features for judging a face. A Haar feature is a simple rectangular feature with a fast extraction speed; the feature template used in computing a general Haar feature is a simple combination of two or more congruent rectangles, containing both black and white rectangles. The AdaBoost algorithm is then used to select, from a large number of Haar features, a subset of features that play a key role, and these features are used to build an effective classifier. The constructed classifier can detect the faces in an image. In the face detection process, multiple facial key points can be detected; typically, 106 key points may be used to identify a face.
A CNN network model for detecting the facial key points may also be trained using a deep learning method; the image frame is input into the CNN network model to obtain the multiple facial key points. It can be understood that virtually any facial key point detection method can be applied in the present disclosure, especially methods with high detection speed and low computational complexity; the methods in the above embodiments are only examples and do not constitute a limitation of the present disclosure.
Detecting the facial key points finally yields the type of each facial key point and its coordinates. The type of a facial key point is indicated by its number, with each number indicating a key point of a fixed type; for example, numbers 1-20 may indicate the key points on the facial contour from top to bottom and from left to right, and numbers 21-24 may indicate the nose key points from top to bottom. It can be understood that the above numbers are only examples, and the actual numbering can be configured in advance as needed.
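As a sketch of how the detected key points might be handed to the later distance calculations, assuming a generic detector that returns numbered (x, y) coordinates; the constants below merely echo the Fig. 2 reference numerals and are assumptions, since the real numbering is configurable.

```python
# Sketch only: organise detected key points by name for the distance steps below.
# The detector argument stands in for any key point detector (e.g. a Haar/AdaBoost
# face detector followed by a landmark model, or a CNN) and is assumed to return
# a mapping {key point number: (x, y)}.
from typing import Callable, Dict, Tuple

Point = Tuple[float, float]

NOSE, NOSE_TOP, CHIN, EYE_CORNER = 201, 202, 203, 204  # illustrative numbering

def named_keypoints(frame, detector: Callable[[object], Dict[int, Point]]) -> Dict[str, Point]:
    pts = detector(frame)
    return {
        "nose": pts[NOSE],
        "nose_top": pts[NOSE_TOP],
        "chin": pts[CHIN],
        "eye_corner": pts[EYE_CORNER],
    }
```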
Step S103: calculating the first distance between at least two key points in the first video image frame.
In the present disclosure, calculating the first distance between at least two key points in the first video image frame includes: calculating the distance a1 between the nose key point and the nose-top key point in the first video image frame; and calculating the distance b1 between the nose key point and the chin key point in the first video image frame. In this step, the first distance includes two distances: one from the nose key point to the nose-top key point, the other from the nose key point to the chin key point. Fig. 2 is a schematic diagram of the facial key points, in which key point 201 is the nose key point, key point 202 is the nose-top key point, and key point 203 is the chin key point. If the distance between nose key point 201 and nose-top key point 202 is a, and the distance between nose key point 201 and chin key point 203 is b, then in the first video image frame the distance between nose key point 201 and nose-top key point 202 is a1, and the distance between nose key point 201 and chin key point 203 is b1. These distances may be calculated using the coordinates of the facial key points obtained when the facial key points are detected in step S102.
In the present disclosure, calculating the first distance between at least two key points in the first video image frame may also include: calculating the distance l1 from an eye-corner key point to the nose key point in the first video image frame. In Fig. 2, the schematic diagram of the facial key points, key point 201 is the nose key point, key point 202 is the nose-top key point, and key point 204 is an eye-corner key point. If the distance from nose key point 201 to eye-corner key point 204 is l, then in the first video image frame the distance from nose key point 201 to eye-corner key point 204 is l1. In another embodiment, the distance from nose-top key point 202 to eye-corner key point 204 may be set as l, in which case, in the first video image frame, the distance from nose-top key point 202 to eye-corner key point 204 is l1.
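A minimal sketch of the per-frame distance computation of steps S103/S104, reusing the named key points from the sketch above; the helper names are assumptions.

```python
# Sketch only: Euclidean distances between key points in one image frame,
# using the coordinates obtained in step S102.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def frame_distances(kp):
    """kp: dict with 'nose', 'nose_top', 'chin', 'eye_corner' -> (x, y)."""
    a = dist(kp["nose"], kp["nose_top"])    # nose key point to nose-top key point
    b = dist(kp["nose"], kp["chin"])        # nose key point to chin key point
    l = dist(kp["eye_corner"], kp["nose"])  # eye-corner key point to nose key point
    return a, b, l
```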
Optionally, in the present disclosure, the line between the at least two key points is a line in the vertical direction or the horizontal direction of the face. As shown in Fig. 2, the line between nose key point 201 and nose-top key point 202 and the line between nose key point 201 and chin key point 203 are in the vertical direction of the face, and the line from nose-top key point 202 to eye-corner key point 204 is in the horizontal direction of the face.
Optionally, in the present disclosure, the vertical and horizontal directions here are not absolutely vertical or horizontal; a deviation within a certain range is allowed. For example, the line from nose key point 201 to eye-corner key point 204 is not absolutely horizontal, but because it has a horizontal component it can equally serve to determine head actions in the horizontal direction.
Step S104: calculating the second distance between the at least two key points in the second video image frame.
In the present disclosure, calculating the second distance between at least two key points in the second video image frame includes: calculating the distance a2 between the nose key point and the nose-top key point in the second video image frame; and calculating the distance b2 between the nose key point and the chin key point in the second video image frame. The distances a2 and b2 are calculated in the same way as a1 and b1 in step S103; the only difference is that the distances a and b are calculated here in the second video image frame. The second video image frame is an image frame after the first video image frame and may be obtained according to a sampling rate. Typically, every frame may be sampled, in which case the second video image frame is the frame immediately following the first video image frame; alternatively, one frame may be sampled every n frames, in which case the second video image frame is the (n+1)-th frame after the first video image frame, where n is an integer greater than 0.
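The frame pairing described above (every frame, or one frame every n frames) could be sketched as follows; the generator interface is an assumption. It could, for instance, be fed by the earlier capture sketch as frame_pairs(capture_frames(), n=1).

```python
# Sketch only: pair a first and a second video image frame according to a
# sampling interval n (n = 1 pairs consecutive frames).
def frame_pairs(frames, n=1):
    prev = None
    for i, frame in enumerate(frames):
        if i % n != 0:
            continue                  # skip frames between samples
        if prev is not None:
            yield prev, frame         # (first frame, second frame)
        prev = frame
```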
Likewise, in the present disclosure, calculating the second distance between at least two key points in the second video image frame may also include: calculating the distance l2 from the eye-corner key point to the nose key point in the second video image frame. The distance l2 is calculated in the same way as l1 in step S103, and the second video image frame is an image frame after the first video image frame; details are not repeated here.
Step S105: recognizing the action of the head according to the first distance and the second distance.
In the present disclosure, recognizing the action of the head according to the first distance and the second distance includes: calculating the ratio α of a1 to b1; calculating the ratio β of a2 to b2; and if α ≠ β, recognizing the action of the head as nodding. In this embodiment, α = a1/b1 and β = a2/b2. When a nodding action occurs, the video image frame, being a two-dimensional image, loses depth information; at this point, either raising or lowering the head during the nod causes a2 and b2 to change, which in turn leads to α ≠ β. If the calculation yields α ≠ β, it can be determined that the head of the person in the video has nodded. In combination with the changes in the magnitudes of a and b, the head-raising or head-lowering part of the nod can be further distinguished; for example, b becomes larger when the head is raised and smaller when the head is lowered.
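A hedged sketch of the nod test follows; the disclosure itself only states α ≠ β, so the tolerance eps below is an added assumption to cope with measurement noise on real key points.

```python
# Sketch only: nod detection from the a/b ratios of two frames.
def is_nod(a1, b1, a2, b2, eps=1e-2):
    alpha = a1 / b1                   # ratio in the first video image frame
    beta = a2 / b2                    # ratio in the second video image frame
    # alpha != beta indicates a nod; eps is an illustrative tolerance, since
    # noisy key point coordinates are never exactly equal between frames.
    return abs(alpha - beta) > eps
```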
In the present disclosure, recognizing the action of the head according to the first distance and the second distance may also include: if l1 ≠ l2, recognizing the action of the head as shaking the head. When a head-shaking action occurs, the video image frame, being a two-dimensional image, loses depth information; at this point, the rotation of the head causes l2 to change. No matter in which direction the head rotates, l2 changes, which leads to l1 ≠ l2. Therefore, if the calculation yields l1 ≠ l2, it can be determined that the head of the person in the video has been shaken.
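The head-shake test admits the same kind of sketch; again, the tolerance is an illustrative assumption rather than something recited in the disclosure.

```python
# Sketch only: head-shake detection from the eye-corner-to-nose distances
# l1 and l2 of two frames; eps (in pixels) is an illustrative tolerance.
def is_shake(l1, l2, eps=1.0):
    return abs(l1 - l2) > eps         # l1 != l2 indicates a head shake
```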
The above examples are only illustrative. In practice, many other actions can also be recognized from the change in key point distances across multiple image frames. For example, blinking can be recognized using the distance between the upper and lower eye key points, and opening and closing the mouth can be recognized using the distance between the upper and lower mouth key points. Any action that causes the distance between key points to change in a two-dimensional image frame can be recognized using the method in the present disclosure; details are not repeated here.
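Following the same two-frame pattern, the blink example mentioned above could look roughly like this; the key point names, the dist helper from the earlier sketch and the threshold are assumptions.

```python
# Sketch only: blink detection from the distance between the upper and lower
# eyelid key points across two frames (reuses dist from the earlier sketch).
def is_blink(upper1, lower1, upper2, lower2, ratio=0.5):
    d1 = dist(upper1, lower1)         # eye opening in the first frame
    d2 = dist(upper2, lower2)         # eye opening in the second frame
    # A clear drop in eye opening between the frames is treated as a blink;
    # the 0.5 ratio threshold is purely illustrative.
    return d2 < ratio * d1
```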
It should be understood that, in the present disclosure, the key points should be selected, as far as possible, along the direction parallel to the head's direction of motion: the distance between two key points in the vertical direction is selected for a nodding action, and the distance between two key points in the horizontal direction is selected for a head-shaking action. If key points aligned with the direction of motion cannot be selected, key points with a significant component in the direction of motion should be selected, and two key points perpendicular to the direction of motion should be avoided. For example, when judging a head-shaking action, the line between the nose key point and the nose-top key point should be avoided if possible, because that line is perpendicular to the horizontal direction and its variation is difficult to detect when the motion is horizontal.
The present disclosure discloses a head action recognition method, apparatus and electronic device. The head action recognition method comprises: obtaining a video image from an image source, the video image containing a person's head; detecting facial key points on the person's head in the video image; calculating a first distance between at least two key points in a first video image frame; calculating a second distance between the at least two key points in a second video image frame; and recognizing the action of the head according to the first distance and the second distance. The head action recognition method of the embodiments of the present disclosure recognizes different head actions by calculating the change in the distance between the same key points across two image frames; the method is fast, places low demands on computing capability, and is well suited to recognition scenarios on mobile terminals.
Although the steps in the above method embodiments are described in the above order, those skilled in the art will understand that the steps in the embodiments of the present disclosure are not necessarily executed in that order; they may also be executed in reverse order, in parallel, interleaved, or in other orders. Moreover, on the basis of the above steps, those skilled in the art may also add other steps. These obvious variants or equivalent substitutions shall also fall within the scope of protection of the present disclosure; details are not repeated here.
The following are apparatus embodiments of the present disclosure, which may be used to carry out the steps implemented by the method embodiments of the present disclosure. For ease of description, only the parts relevant to the embodiments of the present disclosure are shown; for specific technical details that are not disclosed, please refer to the method embodiments of the present disclosure.
The embodiments of the present disclosure provide a head action recognition apparatus. The apparatus can carry out the steps described in the head action recognition method embodiments above. As shown in Fig. 3, the apparatus 300 mainly comprises: a video image acquisition module 301, a key point detection module 302, a first distance calculation module 303, a second distance calculation module 304 and a recognition module 305. Wherein:
the video image acquisition module 301 is configured to obtain a video image from an image source, the video image containing a person's head;
the key point detection module 302 is configured to detect facial key points on the person's head in the video image;
the first distance calculation module 303 is configured to calculate a first distance between at least two key points in a first video image frame;
the second distance calculation module 304 is configured to calculate a second distance between the at least two key points in a second video image frame;
the recognition module 305 is configured to recognize the action of the head according to the first distance and the second distance.
Further, the video image acquisition module 301 is further configured to:
capture the video image from an image sensor, wherein the video image contains the head of at least one person.
Further, the key point detection module 302 further comprises:
an image frame acquisition module, configured to obtain an image frame of the video image;
a feature detection module, configured to detect facial features in the image frame;
a key point generation module, configured to generate the facial key points according to the facial features.
Further, the first distance calculation module 303 further comprises:
a first calculation module, configured to calculate the distance a1 between the nose key point and the nose-top key point in the first video image frame;
a second calculation module, configured to calculate the distance b1 between the nose key point and the chin key point in the first video image frame.
Further, the second distance calculation module 304 further comprises:
a third calculation module, configured to calculate the distance a2 between the nose key point and the nose-top key point in the second video image frame;
a fourth calculation module, configured to calculate the distance b2 between the nose key point and the chin key point in the second video image frame.
Further, the recognition module 305 further comprises:
a fifth calculation module, configured to calculate the ratio α of a1 to b1;
a sixth calculation module, configured to calculate the ratio β of a2 to b2;
a first recognition submodule, configured to recognize the action of the head as nodding if α ≠ β.
Further, the first distance calculation module 303 further comprises:
a seventh calculation module, configured to calculate the distance l1 from an eye-corner key point to the nose key point in the first video image frame.
Further, the second distance calculation module 304 further comprises:
an eighth calculation module, configured to calculate the distance l2 from the eye-corner key point to the nose key point in the second video image frame.
Further, the recognition module 305 further comprises:
a second recognition submodule, configured to recognize the action of the head as shaking the head if l1 ≠ l2.
Further, the line connecting the at least two key points is a line in the vertical direction or the horizontal direction of the face.
The apparatus shown in Fig. 3 can carry out the method of the embodiment shown in Fig. 1; for the parts not described in detail in this embodiment, reference may be made to the description of the embodiment shown in Fig. 1. The execution process and technical effects of this technical solution are described in the embodiment shown in Fig. 1 and are not repeated here.
Referring now to Fig. 4, a schematic structural diagram of an electronic device 400 suitable for implementing the embodiments of the present disclosure is shown. Electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players) and vehicle-mounted terminals (e.g., in-vehicle navigation terminals), and fixed terminals such as digital TVs and desktop computers. The electronic device shown in Fig. 4 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 4, the electronic device 400 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 401, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage means 408 into a random access memory (RAM) 403. Various programs and data required for the operation of the electronic device 400 are also stored in the RAM 403. The processing means 401, the ROM 402 and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
Generally, the following means may be connected to the I/O interface 405: input means 406 including, for example, a touch screen, a touch pad, a keyboard, a mouse, an image sensor, a microphone, an accelerometer, a gyroscope, etc.; output means 407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage means 408 including, for example, a magnetic tape, a hard disk, etc.; and communication means 409. The communication means 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 4 shows the electronic device 400 with various means, it should be understood that it is not required to implement or provide all of the means shown; more or fewer means may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for carrying out the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 409, installed from the storage means 408, or installed from the ROM 402. When the computer program is executed by the processing means 401, the above functions defined in the method of the embodiments of the present disclosure are carried out.
It should be noted that the above computer-readable medium of the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by, or in connection with, an instruction execution system, apparatus or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; it can send, propagate or transmit a program for use by, or in connection with, an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to electric wires, optical cables, RF (radio frequency) and the like, or any suitable combination of the above.
The above computer-readable medium may be included in the above electronic device, or it may exist separately without being assembled into the electronic device.
The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: obtain a video image from an image source, the video image containing a person's head; detect facial key points on the person's head in the video image; calculate a first distance between at least two key points in a first video image frame; calculate a second distance between the at least two key points in a second video image frame; and recognize the action of the head according to the first distance and the second distance.
The computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages or a combination thereof. The programming languages include object-oriented programming languages such as Java, Smalltalk and C++, and also include conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the architecture, functions and operations of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment or a portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented in software or in hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself; for example, the first acquisition unit may also be described as "a unit that acquires at least two Internet Protocol addresses".
The above description is only a preferred embodiment of the present disclosure and an explanation of the technical principles employed. Those skilled in the art should understand that the scope of the disclosure involved in the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features; it should also cover, without departing from the above disclosed concept, other technical solutions formed by any combination of the above technical features or their equivalent features, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present disclosure.

Claims (13)

1. A head action recognition method, comprising:
obtaining a video image from an image source, the video image containing a person's head;
detecting facial key points on the person's head in the video image;
calculating a first distance between at least two key points in a first video image frame;
calculating a second distance between the at least two key points in a second video image frame;
recognizing the action of the head according to the first distance and the second distance.
2. The head action recognition method of claim 1, wherein obtaining the video image from the image source, the video image containing a person's head, comprises:
capturing the video image from an image sensor, wherein the video image contains the head of at least one person.
3. The head action recognition method of claim 1, wherein detecting the facial key points on the person's head in the video image comprises:
obtaining an image frame of the video image;
detecting facial features in the image frame;
generating the facial key points according to the facial features.
4. The head action recognition method of claim 1, wherein calculating the first distance between at least two key points in the first video image frame comprises:
calculating the distance a1 between the nose key point and the nose-top key point in the first video image frame;
calculating the distance b1 between the nose key point and the chin key point in the first video image frame.
5. The head action recognition method of claim 4, wherein calculating the second distance between the at least two key points in the second video image frame comprises:
calculating the distance a2 between the nose key point and the nose-top key point in the second video image frame;
calculating the distance b2 between the nose key point and the chin key point in the second video image frame.
6. The head action recognition method of claim 5, wherein recognizing the action of the head according to the first distance and the second distance comprises:
calculating the ratio α of a1 to b1;
calculating the ratio β of a2 to b2;
if α ≠ β, recognizing the action of the head as nodding.
7. The head action recognition method of claim 1, wherein calculating the first distance between at least two key points in the first video image frame comprises:
calculating the distance l1 from an eye-corner key point to the nose key point in the first video image frame.
8. The head action recognition method of claim 1, wherein calculating the second distance between at least two key points in the second video image frame comprises:
calculating the distance l2 from the eye-corner key point to the nose key point in the second video image frame.
9. The head action recognition method of claim 8, wherein recognizing the action of the head according to the first distance and the second distance comprises:
if l1 ≠ l2, recognizing the action of the head as shaking the head.
10. The head action recognition method of claim 1, wherein the line connecting the at least two key points is a line in the vertical direction or the horizontal direction of the face.
11. A head action recognition apparatus, comprising:
a video image acquisition module, configured to obtain a video image from an image source, the video image containing a person's head;
a key point detection module, configured to detect facial key points on the person's head in the video image;
a first distance calculation module, configured to calculate a first distance between at least two key points in a first video image frame;
a second distance calculation module, configured to calculate a second distance between the at least two key points in a second video image frame;
a recognition module, configured to recognize the action of the head according to the first distance and the second distance.
12. An electronic device, comprising:
a memory, configured to store computer-readable instructions; and
a processor, configured to execute the computer-readable instructions such that, when executed, the processor carries out the head action recognition method of any one of claims 1 to 10.
13. A non-transitory computer-readable storage medium, configured to store computer-readable instructions which, when executed by a computer, cause the computer to carry out the head action recognition method of any one of claims 1 to 10.
CN201910221098.6A 2019-03-22 2019-03-22 Head action recognition method, apparatus and electronic device Pending CN110069996A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910221098.6A CN110069996A (en) 2019-03-22 2019-03-22 Head action recognition method, apparatus and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910221098.6A CN110069996A (en) 2019-03-22 2019-03-22 Head action recognition method, apparatus and electronic device

Publications (1)

Publication Number Publication Date
CN110069996A true CN110069996A (en) 2019-07-30

Family

ID=67366484

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910221098.6A Pending CN110069996A (en) Head action recognition method, apparatus and electronic device

Country Status (1)

Country Link
CN (1) CN110069996A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110879986A (en) * 2019-11-21 2020-03-13 上海眼控科技股份有限公司 Face recognition method, apparatus and computer-readable storage medium
CN113283383A (en) * 2021-06-15 2021-08-20 北京有竹居网络技术有限公司 Live broadcast behavior recognition method, device, equipment and readable medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102999164A (en) * 2012-11-30 2013-03-27 广东欧珀移动通信有限公司 E-book page turning control method and intelligent terminal
CN104850820A (en) * 2014-02-19 2015-08-19 腾讯科技(深圳)有限公司 Face identification method and device
CN105975935A (en) * 2016-05-04 2016-09-28 腾讯科技(深圳)有限公司 Face image processing method and apparatus
CN109492550A (en) * 2018-10-25 2019-03-19 腾讯科技(深圳)有限公司 Living body detection method and apparatus, and related system applying the living body detection method


Similar Documents

Publication Publication Date Title
CN110766777B (en) Method and device for generating virtual image, electronic equipment and storage medium
WO2018177379A1 (en) Gesture recognition, gesture control and neural network training methods and apparatuses, and electronic device
CN108510472B (en) Method and apparatus for handling image
CN110058685A (en) Display methods, device, electronic equipment and the computer readable storage medium of virtual objects
CN111541907B (en) Article display method, apparatus, device and storage medium
CN108846440A (en) Image processing method and device, computer-readable medium and electronic equipment
US20230245398A1 (en) Image effect implementing method and apparatus, electronic device and storage medium
CN108701355B (en) GPU optimization and online single Gaussian-based skin likelihood estimation
WO2020211573A1 (en) Method and device for processing image
CN110047124A (en) Method, apparatus, electronic equipment and the computer readable storage medium of render video
CN109982036A (en) A kind of method, terminal and the storage medium of panoramic video data processing
US20220358675A1 (en) Method for training model, method for processing video, device and storage medium
CN110070551A (en) Rendering method, device and the electronic equipment of video image
CN110072047A (en) Control method, device and the hardware device of image deformation
CN110069125B (en) Virtual object control method and device
US11561651B2 (en) Virtual paintbrush implementing method and apparatus, and computer readable storage medium
CN109754464A (en) Method and apparatus for generating information
CN110070063A (en) Action identification method, device and the electronic equipment of target object
CN110177295A (en) Processing method, device and the electronic equipment that subtitle crosses the border
CN110619656A (en) Face detection tracking method and device based on binocular camera and electronic equipment
WO2020155984A1 (en) Facial expression image processing method and apparatus, and electronic device
CN110069996A (en) Headwork recognition methods, device and electronic equipment
US11494961B2 (en) Sticker generating method and apparatus, and medium and electronic device
WO2019000464A1 (en) Image display method and device, storage medium, and terminal
CN112270242B (en) Track display method and device, readable medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190730)