CN111191553A - Face tracking method and device and electronic equipment - Google Patents


Info

Publication number
CN111191553A
CN111191553A (application CN201911345823.7A)
Authority
CN
China
Prior art keywords
face
key point
preset model
information
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911345823.7A
Other languages
Chinese (zh)
Inventor
郑东
林科军
赵拯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Universal Ubiquitous Technology Co ltd
Original Assignee
Universal Ubiquitous Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Universal Ubiquitous Technology Co ltd filed Critical Universal Ubiquitous Technology Co ltd
Priority to CN201911345823.7A priority Critical patent/CN111191553A/en
Publication of CN111191553A publication Critical patent/CN111191553A/en
Priority to CN202011433517.1A priority patent/CN112232311B/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation

Abstract

An embodiment of the present disclosure provides a face tracking method, a face tracking device, and an electronic device, belonging to the technical field of face tracking. The face tracking method comprises the following steps: acquiring a video image; locating face key points in the video image according to a preset model to obtain face key point information; obtaining face bounding box information according to the preset model and the face key point information; and obtaining face quality classification information according to the preset model and the face bounding box information. Through this scheme, reliable and high-quality face information is provided for face analysis.

Description

Face tracking method and device and electronic equipment
Technical Field
The present disclosure relates to the field of face tracking technologies, and in particular, to a face tracking method and apparatus, and an electronic device.
Background
In recent years, with the continuous development of artificial intelligence technology, its applications in daily life have become increasingly widespread, especially in the security field. Face recognition and attributes such as gender and age are important components of face analysis, so providing high-quality face data for face analysis is particularly important.
The traditional approach first detects a face with a detection technique, then extracts the face's appearance features in the next frame with either a traditional machine learning algorithm (such as kernelized correlation filtering) or a deep learning tracker (such as a Siamese network) to track the face, and finally sends the tracked face to a face analysis module for analysis. Although these methods offer higher accuracy or stronger robustness, some sacrifice speed or need a specific hardware platform to meet real-time requirements; when deployed on a mobile platform they are too slow to be practical. Moreover, they can only track the face, not classify the tracked face, so no quality analysis of the face can be provided to the face analysis stage, which complicates subsequent face attribute analysis.
Therefore, existing face tracking methods suffer from low processing speed and poor analysis quality.
Disclosure of Invention
In view of this, embodiments of the present disclosure provide a face tracking method, a face tracking device, and an electronic device, which at least partially solve the problems in the prior art.
In a first aspect, an embodiment of the present disclosure provides a face tracking method, including:
acquiring a video image;
locating face key points in the video image according to a preset model to obtain face key point information;
obtaining face bounding box information according to the preset model and the face key point information; and
obtaining face quality classification information according to the preset model and the face bounding box information.
According to a specific implementation manner of the embodiment of the present disclosure, before the step of acquiring the video image, the method further includes:
and establishing a preset model based on deep learning method training according to the sample data.
According to a specific implementation manner of the embodiment of the present disclosure, the step of establishing the preset model based on deep learning method training according to the sample data includes:
collecting sample data including a face image;
constructing a neural network;
and inputting the sample data into the neural network for training until training converges, to obtain the preset model.
According to a specific implementation manner of the embodiment of the present disclosure, the step of inputting the sample data into the neural network for training until training converges to obtain the preset model includes:
inputting the sample data into the neural network for training, wherein the neural network is used for outputting the face bounding box position based on the face key point information;
if the neural network's training on the face bounding box position is finished, generating target data based on the face key point information and the face bounding box position;
inputting the target data into the neural network for training, wherein the neural network is further used for outputting a face quality category;
and if the neural network's training on the face quality category is finished, generating the preset model.
According to a specific implementation manner of the embodiment of the present disclosure, before the step of inputting the sample data to the neural network for training, the method further includes:
and carrying out broadening processing on the face image, wherein the broadening processing comprises expanding the face external frame in the previous frame of training image by preset times.
According to a specific implementation manner of the embodiment of the present disclosure, the step of obtaining the face bounding box information according to the preset model and the face key point information includes:
obtaining face key point information from the video image according to the preset model; and
calculating the face bounding box position from the face key point information.
According to a specific implementation manner of the embodiment of the present disclosure, the step of obtaining the face quality classification information according to the preset model and the face bounding box information includes:
obtaining a face angle parameter according to the face bounding box position information given by the preset model; and
matching the corresponding face quality category according to the face angle parameter.
In a second aspect, an embodiment of the present disclosure provides a face tracking apparatus, including:
the first acquisition module is used for acquiring a video image;
the second acquisition module is used for locating face key points in the video image according to the preset model to obtain face key point information;
the third acquisition module is used for obtaining face bounding box information according to the preset model and the face key point information; and
the fourth acquisition module is used for obtaining face quality classification information according to the preset model and the face bounding box information.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect or any implementation manner of the first aspect.
In a fourth aspect, the disclosed embodiments also provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the face tracking method in the first aspect or any implementation manner of the first aspect.
In a fifth aspect, the disclosed embodiments also provide a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform the face tracking method of the first aspect or any of the implementations of the first aspect.
The face tracking method in the embodiment of the present disclosure comprises the following steps: acquiring a video image; locating face key points in the video image according to a preset model to obtain face key point information; obtaining face bounding box information according to the preset model and the face key point information; and obtaining face quality classification information according to the preset model and the face bounding box information. Through this scheme, reliable and high-quality face information is provided for face analysis.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings needed in the embodiments are briefly described below. It is apparent that the drawings in the following description cover only some embodiments of the present disclosure; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a face tracking method according to an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of another face tracking method provided in the embodiment of the present disclosure;
fig. 3a is a schematic diagram of face tracking provided by an embodiment of the present disclosure;
FIG. 3b is a schematic diagram of face tracking and face quality classification provided by an embodiment of the present disclosure;
fig. 4 is a schematic flow chart of another face tracking method provided in the embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a face tracking device according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
The embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
The embodiments of the present disclosure are described below with specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from this specification. It is to be understood that the described embodiments are merely some, not all, of the embodiments of the disclosure. The disclosure may be embodied or carried out in various other specific embodiments, and various modifications and changes may be made in the details of this description without departing from the spirit of the disclosure. It is to be noted that the features in the following embodiments and examples may be combined with each other in the absence of conflict. All other embodiments obtained by a person skilled in the art from the disclosed embodiments without creative effort fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present disclosure, and the drawings only show the components related to the present disclosure rather than the number, shape and size of the components in actual implementation, and the type, amount and ratio of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
The embodiment of the disclosure provides a face tracking method. The face tracking method provided by the embodiment may be executed by a computing device, which may be implemented as software or implemented as a combination of software and hardware, and may be integrally disposed in a server, a terminal device, or the like.
Referring to fig. 1, a face tracking method provided in an embodiment of the present disclosure includes:
the face tracking method of the embodiment is performed based on a preset model established by deep learning, and the establishing method of the preset model comprises the following steps:
step S101, collecting sample data including a face image;
step S102, constructing a neural network;
and S103, inputting the sample data into the neural network for training until the training is converged to obtain a preset model.
In this embodiment, referring to fig. 3a and 3b, the preset model is obtained through deep learning training. Specifically, a face key point model is first trained with deep learning; on top of this model, a branch is attached after the neural network's feature extraction layer to train a classification model. During this process the parameters of the key point model are kept unchanged and only the classification model is trained. The attached branch consists of an additional convolution layer, a pooling layer, and a fully connected layer.
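As an illustration, a minimal PyTorch sketch of this arrangement follows; the backbone layers and sizes are assumptions for illustration, since the patent specifies only the added convolution, pooling, and fully connected layers of the branch.

```python
import torch
import torch.nn as nn

class KeypointModel(nn.Module):
    """Stand-in for the pre-trained face key point model; the layer sizes
    here are illustrative assumptions, not taken from the patent."""
    def __init__(self, num_keypoints=4):
        super().__init__()
        self.features = nn.Sequential(                    # feature extraction layers
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.keypoint_head = nn.Sequential(               # regresses normalized key points
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_keypoints * 2),
        )

    def forward(self, x):
        return self.keypoint_head(self.features(x))

class QualityBranch(nn.Module):
    """Branch attached after the feature extraction layer: an added
    convolution layer, a pooling layer, and a fully connected layer."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),   # added convolution layer
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),        # pooling layer
            nn.Linear(32, num_classes),                   # fully connected layer
        )

    def forward(self, feats):
        return self.branch(feats)

keypoint_model = KeypointModel()
for p in keypoint_model.parameters():                     # keep key point model unchanged
    p.requires_grad = False

quality_branch = QualityBranch()
feats = keypoint_model.features(torch.randn(1, 3, 64, 64))
logits = quality_branch(feats)                            # scores for the three quality classes
print(logits.shape)                                       # torch.Size([1, 3])
```

Freezing the backbone and training only the branch matches the constraint above that the key point model's parameters stay unchanged while the classifier is learned.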
Specifically, this embodiment takes a CNN (Convolutional Neural Network) as the example network. A CNN has multiple layers, with the output of one layer serving as the input of the next.
Each CNN layer generally consists of several feature maps, each composed of many neurons; all neurons of the same map share one convolution kernel (i.e., its weights). A convolution kernel often represents a feature: if a certain kernel represents an arc, for example, then when that kernel is convolved over the whole image, regions with large convolution values are likely to contain an arc.
A convolutional layer is essentially a feature extraction layer. A hyper-parameter F can be set to specify how many feature extractors (filters) the layer has. Each filter corresponds to a moving window of size k x d that slides from the first element of the input matrix, where k and d are the window sizes specified for that filter. At each position, the input values inside the window are converted into a feature value through the network's nonlinear transformation; as the window keeps moving, the feature values corresponding to the filter are produced one by one and form that filter's feature vector. This is how a convolutional layer extracts features; each filter operates in this manner to form a different feature extractor.
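A toy example of this shared-kernel feature extraction, with an assumed 3 x 3 pattern, is:

```python
import torch
import torch.nn.functional as F

# A single 3x3 convolution kernel shared across the whole image acts as one
# feature extractor: windows of the input that resemble the kernel's pattern
# yield large values in the resulting feature map.
image = torch.randn(1, 1, 8, 8)             # one-channel toy input "image"
kernel = torch.tensor([[[[0., 1., 0.],
                         [1., 0., 1.],
                         [0., 1., 0.]]]])   # toy arc-like pattern (assumption)
feature_map = F.conv2d(image, kernel, padding=1)
print(feature_map.shape)                    # torch.Size([1, 1, 8, 8])
print(feature_map.flatten().topk(3).values) # strongest responses to the pattern
```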
In the fully connected layer, the features from the layer above are convolved with n 1 x 1 convolution kernels, and the convolved features are then averaged once for pooling.
When a convolutional neural network is used to train on sample data, the sample data can be collected from a gallery in advance, each sample containing a face. The sample data is input into the neural network for training until training converges, yielding the preset model.
Referring to fig. 2, step S103 includes the following substeps:
step S201, inputting the sample data into the neural network for training, wherein the neural network is used for outputting the face bounding box position based on face key point information;
step S202, if the neural network's training on the face bounding box position is finished, generating target data based on the face key point information and the face bounding box position;
step S203, inputting the target data into the neural network for training, wherein the neural network is further used for outputting a face quality category;
and step S204, if the neural network's training on the face quality category is finished, generating the preset model.
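A sketch of this two-stage procedure is given below, reusing the KeypointModel / QualityBranch sketch above; the data loaders and loss functions are assumptions, since the patent does not specify them.

```python
import torch
import torch.nn.functional as F

def train_two_stage(model, branch, keypoint_loader, quality_loader, epochs=10):
    """Sketch of steps S201-S204. keypoint_loader is assumed to yield
    (images, keypoints); quality_loader is assumed to yield the target data
    (face crops derived from the stage-1 key points / boxes) with labels."""
    # Stage 1 (S201-S202): train the key point / face bounding box output.
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        for images, keypoints in keypoint_loader:
            loss = F.mse_loss(model(images), keypoints)
            opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2 (S203-S204): freeze the key point model and train only the
    # quality classification branch on the generated target data.
    for p in model.parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(branch.parameters(), lr=1e-3)
    for _ in range(epochs):
        for crops, labels in quality_loader:
            loss = F.cross_entropy(branch(model.features(crops)), labels)
            opt.zero_grad(); loss.backward(); opt.step()
```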
Before the step of inputting the sample data to the neural network for training, the method further comprises:
and carrying out broadening processing on the face image, wherein the broadening processing comprises expanding the face external frame in the previous frame of training image by preset times.
Since a person moves continuously through a video sequence, and the time interval between adjacent frames is extremely short, the person's motion range between frames can be assumed to be very small; fast motion, however, cannot be ruled out. The face in the next frame is therefore searched for around the face position of the previous frame, which is equivalent to slightly enlarging the search region. Accordingly, a region enlarged by a factor δ around the face bounding box is used when training the key point localization; in actual testing, the region enlarged by δ around the previous frame's face bounding box is likewise used to predict the key point positions, i.e., the face box position of the current frame.
In this embodiment, key point positions are predicted on the enlarged face region by training a lightweight neural network model. Specifically, when training the key point model, each training sample is the face region obtained by enlarging the face bounding box by δ on every side. Assuming the face position information is (x, y, w, h), the enlarged face region used for training is (x - δw, y - δh, w(1 + 2δ), h(1 + 2δ)); the face crop is then scaled to size m x n, and the key point positions inside it are normalized to the range [-a, a]. Here (x, y) is the top-left coordinate of the original face box, (w, h) is the width and height of the original face box, δ is the enlargement ratio around the face box, (m, n) is the width and height of the scaled face sample, and [-a, a] is the range after key point position normalization. The interval [-a, a] refers to the interval obtained after normalizing the key point coordinates; the normalized values are positions relative to the picture's width and height, not actual coordinates. For example, if the picture's width and height are both 100 and a key point is at (40, 60), then normalizing to [-1, 1] gives the key point coordinates (-0.2, 0.2).
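The expansion and normalization above can be written directly as code; this sketch only restates the formulas from the text (δ = 0.25 and the 100 x 100 example values are illustrative):

```python
import numpy as np

def expand_face_region(x, y, w, h, delta):
    """(x, y, w, h) -> (x - delta*w, y - delta*h, w*(1 + 2*delta), h*(1 + 2*delta)),
    i.e. the previous-frame face box enlarged by delta on every side."""
    return x - delta * w, y - delta * h, w * (1 + 2 * delta), h * (1 + 2 * delta)

def normalize_keypoints(points, width, height, a=1.0):
    """Map pixel coordinates to [-a, a] relative to the crop's width and height."""
    pts = np.asarray(points, dtype=np.float64)
    pts[:, 0] = (2.0 * pts[:, 0] / width - 1.0) * a
    pts[:, 1] = (2.0 * pts[:, 1] / height - 1.0) * a
    return pts

print(expand_face_region(50, 50, 100, 100, delta=0.25))  # (25.0, 25.0, 150.0, 150.0)
print(normalize_keypoints([(40, 60)], 100, 100))         # [[-0.2  0.2]] as in the text
```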
How the face bounding box is obtained from the face key points is described below.
Assume the key point coordinates obtained for the left eye, right eye, left mouth corner, and right mouth corner of the face are (x1, y1), (x2, y2), (x3, y3), (x4, y4) respectively. The top-left and bottom-right coordinates of the face bounding box are then:

x_lt: top-left x-axis coordinate (the formula appears only as an embedded image in the original and is not recoverable verbatim)
y_lt = y1 - λh: top-left y-axis coordinate
x_rb: bottom-right x-axis coordinate (the formula appears only as an embedded image in the original and is not recoverable verbatim)
y_rb = y4 + λh: bottom-right y-axis coordinate

where h is the distance from the midpoint of the left and right eyes to the midpoint of the left and right mouth corners,

h = sqrt(((x1 + x2)/2 - (x3 + x4)/2)^2 + ((y1 + y2)/2 - (y3 + y4)/2)^2),

d is the distance from the left eye to the right eye,

d = sqrt((x2 - x1)^2 + (y2 - y1)^2),

and λ is a proportionality coefficient.
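A hedged sketch of this construction follows. The y formulas come from the text; the x formulas appeared only as images in the original, so the symmetric min/max forms and the value λ = 0.6 used here are assumptions:

```python
import math

def face_box_from_keypoints(left_eye, right_eye, left_mouth, right_mouth, lam=0.6):
    """Sketch of the bounding-box construction above. y_lt = y1 - lam*h and
    y_rb = y4 + lam*h follow the text; the x expansion terms are assumptions."""
    x1, y1 = left_eye
    x2, y2 = right_eye
    x3, y3 = left_mouth
    x4, y4 = right_mouth
    d = math.hypot(x2 - x1, y2 - y1)                     # left-eye to right-eye distance
    eye_cx, eye_cy = (x1 + x2) / 2, (y1 + y2) / 2        # midpoint of the eyes
    mouth_cx, mouth_cy = (x3 + x4) / 2, (y3 + y4) / 2    # midpoint of the mouth corners
    h = math.hypot(mouth_cx - eye_cx, mouth_cy - eye_cy) # eye-center to mouth-center
    x_lt = min(x1, x3) - lam * d                         # assumed horizontal expansion
    y_lt = y1 - lam * h
    x_rb = max(x2, x4) + lam * d                         # assumed horizontal expansion
    y_rb = y4 + lam * h
    return (x_lt, y_lt), (x_rb, y_rb)

print(face_box_from_keypoints((35, 40), (65, 40), (40, 70), (60, 70)))
```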
After the key point model is trained, a branch is attached after the neural network's feature extraction layer on the basis of that model to train a classification model; during this process the parameters of the key point model are kept unchanged and only the classification model is trained. For the classification model, the categories are mainly: small angle (the absolute values of the pitch (P), yaw (Y), and roll (R) angles are all less than β1), large angle (the absolute value of any one of P, Y, R is greater than β2), and non-face (background, or a partial face region), where β1 and β2 are thresholds used to distinguish large and small face angles.
Small and large angles are determined from the pitch, yaw, and roll angles: pitch is the angle of the face relative to the x-axis, yaw the angle relative to the y-axis, and roll the angle relative to the z-axis. The thresholds β1 and β2 that distinguish large from small angles are set according to actual project requirements; for example, if faces under 30 degrees are to be counted as small-angle and faces over 50 degrees as large-angle, then β1 = 30 and β2 = 50.
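A minimal sketch of this angle-based classification, using the example thresholds β1 = 30 and β2 = 50 from the text (how faces between the two thresholds are treated is not specified, so the sketch returns "intermediate"; the non-face class comes from the network, not from angles):

```python
def face_quality_class(pitch, yaw, roll, beta1=30.0, beta2=50.0):
    """Small angle: |P|, |Y|, |R| all below beta1.
    Large angle: any of |P|, |Y|, |R| above beta2."""
    angles = (abs(pitch), abs(yaw), abs(roll))
    if all(a < beta1 for a in angles):
        return "small angle"
    if any(a > beta2 for a in angles):
        return "large angle"
    return "intermediate"  # behavior between beta1 and beta2 is unspecified

print(face_quality_class(10, 5, 2))   # small angle
print(face_quality_class(10, 65, 2))  # large angle
```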
According to another specific implementation manner of the embodiment of the present disclosure, referring to fig. 4, the face tracking method includes:
s401, acquiring a video image;
In an actual application process, for example when a mobile terminal is used for payment and face authentication is required, the user enters the face authentication page, the camera of the mobile terminal is opened, video images of the area are collected, and faces are tracked, captured, and recognized from those video images.
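As an illustration, a minimal capture loop of the kind described above might look as follows; OpenCV's cv2.VideoCapture with device index 0 is an assumption standing in for the mobile platform's camera API.

```python
import cv2

cap = cv2.VideoCapture(0)  # assumed camera source; a phone uses its platform API
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # `frame` (a BGR numpy array) is what would be fed to the preset model
    # for key point localization, box tracking, and quality classification.
    cv2.imshow("face tracking preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```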
S402, locating face key points in the video image according to the preset model to obtain face key point information;
S403, obtaining face bounding box information according to the preset model and the face key point information;
After the video image is obtained, face key points are located in it according to the preset model to obtain face key point information, using the key point localization method described above. The face bounding box information is then derived from the face key point information.
S404, obtaining face quality classification information according to the preset model and the face bounding box information.
Finally, after the face has been tracked, its quality is classified according to the face bounding box information to obtain face quality classification information.
In face quality classification, face angle parameters are obtained according to the face bounding box position information given by the preset model, and the corresponding face quality category is matched according to the face angle parameters.
Face quality classification can be understood as follows: faces are manually divided into three categories, namely non-face, large-angle face, and frontal face. For each piece of input image data, forward propagation through the network outputs one score per category, and the input's category is determined from the three scores; that is, the face quality is analyzed.
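A sketch of that score-to-category step is shown below; the softmax and the category ordering are assumptions, since the patent only names the three categories.

```python
import numpy as np

def quality_from_scores(scores):
    """scores: the network's raw outputs for the three categories. The ordering
    (non-face, large-angle face, frontal face) is an assumed convention."""
    labels = ("non-face", "large-angle face", "frontal face")
    probs = np.exp(scores - np.max(scores))
    probs = probs / probs.sum()            # softmax over the three scores
    return labels[int(np.argmax(probs))], probs

label, probs = quality_from_scores(np.array([0.1, 0.3, 2.2]))
print(label)  # frontal face
```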
The face tracking method in the embodiment of the present disclosure comprises the following steps: acquiring a video image; locating face key points in the video image according to a preset model to obtain face key point information; obtaining face bounding box information according to the preset model and the face key point information; and obtaining face quality classification information according to the preset model and the face bounding box information. Through this scheme, reliable and high-quality face information is provided for face analysis.
Corresponding to the above method embodiment, referring to fig. 5, the disclosed embodiment further provides a face tracking apparatus 50, including:
a first obtaining module 501, configured to obtain a video image;
a second obtaining module 502, configured to locate face key points in the video image according to the preset model to obtain face key point information;
a third obtaining module 503, configured to obtain face bounding box information according to the preset model and the face key point information;
a fourth obtaining module 504, configured to obtain face quality classification information according to the preset model and the face bounding box information.
The apparatus shown in fig. 5 may correspondingly execute the content in the above method embodiment, and details of the part not described in detail in this embodiment refer to the content described in the above method embodiment, which is not described again here.
Referring to fig. 6, an embodiment of the present disclosure also provides an electronic device 60, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the face tracking method of the foregoing method embodiments.
The disclosed embodiments also provide a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the face tracking method in the aforementioned method embodiments.
The disclosed embodiments also provide a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the face tracking method in the aforementioned method embodiments.
Referring now to FIG. 6, a schematic diagram of an electronic device 60 suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device 60 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 60 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 60 to communicate with other devices wirelessly or by wire to exchange data. While the figures illustrate an electronic device 60 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring at least two internet protocol addresses; sending a node evaluation request comprising the at least two internet protocol addresses to node evaluation equipment, wherein the node evaluation equipment selects the internet protocol addresses from the at least two internet protocol addresses and returns the internet protocol addresses; receiving an internet protocol address returned by the node evaluation equipment; wherein the obtained internet protocol address indicates an edge node in the content distribution network.
Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receiving a node evaluation request comprising at least two internet protocol addresses; selecting an internet protocol address from the at least two internet protocol addresses; returning the selected internet protocol address; wherein the received internet protocol address indicates an edge node in the content distribution network.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present disclosure should be covered within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A face tracking method, comprising:
acquiring a video image;
locating face key points in the video image according to a preset model to obtain face key point information;
obtaining face bounding box information according to the preset model and the face key point information; and
obtaining face quality classification information according to the preset model and the face bounding box information.
2. The face tracking method of claim 1, wherein before the step of acquiring the video image, the method further comprises:
establishing the preset model according to sample data through training based on a deep learning method.
3. The face tracking method according to claim 2, wherein the step of establishing the preset model according to sample data through deep-learning-based training comprises:
collecting sample data including face images;
constructing a neural network; and
inputting the sample data into the neural network for training until training converges, to obtain the preset model.
4. The method according to claim 3, wherein the step of inputting the sample data into the neural network for training until training converges to obtain the preset model comprises:
inputting the sample data into the neural network for training, wherein the neural network is used for outputting the face bounding box position based on the face key point information;
if the neural network's training on the face bounding box position is finished, generating target data based on the face key point information and the face bounding box position;
inputting the target data into the neural network for training, wherein the neural network is further used for outputting a face quality category; and
if the neural network's training on the face quality category is finished, generating the preset model.
5. The face tracking method according to claim 4, wherein before the step of inputting the sample data to a neural network for training, the method further comprises:
and carrying out broadening processing on the face image, wherein the broadening processing comprises expanding the face external frame in the previous frame of training image by preset times.
6. The face tracking method according to any one of claims 1 to 5, wherein the step of obtaining the face bounding box information according to the preset model and the face key point information comprises:
obtaining face key point information from the video image according to the preset model; and
calculating the face bounding box position from the face key point information.
7. The face tracking method according to any one of claims 1 to 5, wherein the step of obtaining the face quality classification information according to the preset model and the face bounding box information comprises:
obtaining a face angle parameter according to the face bounding box position information given by the preset model; and
matching the corresponding face quality category according to the face angle parameter.
8. A face tracking device, comprising:
the first acquisition module is used for acquiring a video image;
the second acquisition module is used for locating face key points in the video image according to a preset model to obtain face key point information;
the third acquisition module is used for obtaining face bounding box information according to the preset model and the face key point information; and
the fourth acquisition module is used for obtaining face quality classification information according to the preset model and the face bounding box information.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the face tracking method of any one of claims 1-7.
10. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the face tracking method of any one of the preceding claims 1-7.
CN201911345823.7A 2019-12-24 2019-12-24 Face tracking method and device and electronic equipment Pending CN111191553A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911345823.7A CN111191553A (en) 2019-12-24 2019-12-24 Face tracking method and device and electronic equipment
CN202011433517.1A CN112232311B (en) 2019-12-24 2020-12-10 Face tracking method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911345823.7A CN111191553A (en) 2019-12-24 2019-12-24 Face tracking method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN111191553A 2020-05-22

Family

ID=70711053

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201911345823.7A Pending CN111191553A (en) 2019-12-24 2019-12-24 Face tracking method and device and electronic equipment
CN202011433517.1A Active CN112232311B (en) 2019-12-24 2020-12-10 Face tracking method and device and electronic equipment

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202011433517.1A Active CN112232311B (en) 2019-12-24 2020-12-10 Face tracking method and device and electronic equipment

Country Status (1)

Country Link
CN (2) CN111191553A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580434A (en) * 2020-11-25 2021-03-30 奥比中光科技集团股份有限公司 Face false detection optimization method and system based on depth camera and face detection equipment
CN112699784A (en) * 2020-12-29 2021-04-23 深圳市普渡科技有限公司 Face orientation estimation method and device, electronic equipment and storage medium
WO2022126905A1 (en) * 2020-12-18 2022-06-23 平安科技(深圳)有限公司 Face tracking method and system, terminal and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104008370B (en) * 2014-05-19 2017-06-13 清华大学 A kind of video face identification method
CN107492116A (en) * 2017-09-01 2017-12-19 深圳市唯特视科技有限公司 A kind of method that face tracking is carried out based on more display models
CN108932727B (en) * 2017-12-29 2021-08-27 浙江宇视科技有限公司 Face tracking method and device
CN109657615B (en) * 2018-12-19 2021-11-02 腾讯科技(深圳)有限公司 Training method and device for target detection and terminal equipment
CN110287874B (en) * 2019-06-25 2021-07-27 北京市商汤科技开发有限公司 Target tracking method and device, electronic equipment and storage medium
CN110516705A (en) * 2019-07-19 2019-11-29 平安科技(深圳)有限公司 Method for tracking target, device and computer readable storage medium based on deep learning
CN110544272B (en) * 2019-09-06 2023-08-04 腾讯科技(深圳)有限公司 Face tracking method, device, computer equipment and storage medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580434A (en) * 2020-11-25 2021-03-30 奥比中光科技集团股份有限公司 Face false detection optimization method and system based on depth camera and face detection equipment
CN112580434B (en) * 2020-11-25 2024-03-15 奥比中光科技集团股份有限公司 Face false detection optimization method and system based on depth camera and face detection equipment
WO2022126905A1 (en) * 2020-12-18 2022-06-23 平安科技(深圳)有限公司 Face tracking method and system, terminal and storage medium
CN112699784A (en) * 2020-12-29 2021-04-23 深圳市普渡科技有限公司 Face orientation estimation method and device, electronic equipment and storage medium
WO2022143264A1 (en) * 2020-12-29 2022-07-07 深圳市普渡科技有限公司 Face orientation estimation method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN112232311B (en) 2021-04-06
CN112232311A (en) 2021-01-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200522