CN108334811B - Face image processing method and device - Google Patents

Info

Publication number
CN108334811B
CN108334811B (Application CN201711434909.8A)
Authority
CN
China
Prior art keywords
face image
current
cached
face
tracked
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711434909.8A
Other languages
Chinese (zh)
Other versions
CN108334811A (en)
Inventor
朱国刚
李波
刘永霞
Current Assignee
Datang Software Technologies Co Ltd
Original Assignee
Datang Software Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Datang Software Technologies Co Ltd
Priority to CN201711434909.8A
Publication of CN108334811A
Application granted
Publication of CN108334811B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the invention provides a face image processing method and apparatus. In the embodiment, detecting face images in a video image with the setaface face detection algorithm can detect every face image that appears in the video image, so no face image is missed. The histogram of oriented gradients (HOG) feature vector is invariant to rotation, scaling and brightness, and remains stable under changes in viewing angle, illumination, noise and similar factors, so it is little affected by the external environment and the method is highly robust.

Description

Face image processing method and device
Technical Field
The invention relates to the technical field of computers, in particular to a method and a device for processing a face image.
Background
Nowadays, to safeguard people's work and daily life, surveillance cameras are often installed at important locations. The cameras record surveillance video of those locations, and reviewers are then assigned to check the recorded video for suspicious persons, for example escaped fugitives.
When checking whether suspicious persons appear in the recorded surveillance video, a reviewer must examine every frame of the video one by one, so reviewing is inefficient and the reviewer's workload is heavy.
To improve reviewing efficiency and reduce this workload, the prior art uses a mean-shift algorithm, a CamShift algorithm or a mean-tracking algorithm to roughly track the people in the video, and finally saves each person's face image when the person leaves.
However, the inventors found that the prior art partially misses tracked persons and the tracking is prone to interruption, so accurate tracking of face images is difficult and complete acquisition of face images cannot be achieved.
Disclosure of Invention
In order to solve the above technical problems, embodiments of the present invention show a method and an apparatus for processing a face image.
In a first aspect, an embodiment of the present invention shows a face image processing method, where the method includes:
detecting a current face image in a current video image through a setaface face detection algorithm;
determining the current position of the current face image in the current video image and extracting the current face image from the current video image;
judging, according to the current video image, whether there is a face image being tracked by a particle filter tracking algorithm;
if there is a face image being tracked by a particle filter tracking algorithm, acquiring a first histogram of oriented gradients (HOG) feature vector of the facial feature points of the current face image, and acquiring a second HOG feature vector of the facial feature points of the face image being tracked;
judging, according to the first HOG feature vector and the second HOG feature vector, whether the face image being tracked and the current face image are face images of the same person;
and if the face image being tracked and the current face image are face images of the same person, replacing the cached position of the face image being tracked with the current position.
In an optional implementation, the method further includes:
if there is no face image being tracked by a particle filter tracking algorithm, judging whether any of the cached face images is a face image of the same person as the current face image;
if none of the cached face images is a face image of the same person as the current face image, caching the current face image and caching the current position;
and tracking the current face image starting at the current position using a particle filter tracking algorithm.
In an optional implementation, the method further includes:
if the face image being tracked and the current face image are not face images of the same person, judging whether any of the cached face images is a face image of the same person as the current face image;
if none of the cached face images is a face image of the same person as the current face image, caching the current face image and caching the current position;
and tracking the current face image starting at the current position using a particle filter tracking algorithm.
In an optional implementation, the method further includes:
judging whether the area of the current face image is larger than the area of the cached face image which is being tracked;
and if the area of the current face image is larger than the area of the cached face image being tracked, replacing the cached face image being tracked with the current face image.
In an optional implementation, the method further includes:
if no face image belonging to the same person as the current face image appears in a preset number of video images after the current video image, stopping tracking the face image belonging to the same person as the current face image with the particle filter tracking algorithm;
acquiring geographic information of the cached face image belonging to the same person as the current face image;
storing, in a database, the cached face image belonging to the same person as the current face image together with the cached geographic information of that face image;
and deleting the cached face image belonging to the same person as the current face image and deleting the cached position of that face image.
In a second aspect, an embodiment of the present invention shows a face image processing apparatus, including:
the detection module is used for detecting a current face image in a current video image through a setaface face detection algorithm;
a determining module, configured to determine a current position of the current face image in the current video image and extract the current face image from the current video image;
the first judgment module is used for judging whether a face image which is tracked by using a particle filter tracking algorithm exists according to the current video image;
the first acquisition module is used for acquiring a first direction gradient histogram feature vector of a face feature point of the current face image and acquiring a second direction gradient histogram feature vector of the face feature point of the tracked face image if the face image which is tracked by using a particle filter tracking algorithm exists;
the second judging module is used for judging whether the face image being tracked and the current face image are the face image of the same person or not according to the first direction gradient histogram feature vector and the second direction gradient histogram feature vector;
and the first replacement module is used for replacing the position of the cached face image which is being tracked by using the current position if the face image which is being tracked and the current face image are the face images of the same person.
In an optional implementation, the apparatus further comprises:
a third judging module, configured to judge whether a face image of the same person as the current face image exists in all cached face images if there is no face image being tracked by using a particle filter tracking algorithm;
the first cache module is used for caching the current face image and caching the current position if the face image which is the same as the current face image does not exist in all the cached face images;
a first tracking module for tracking the current face image starting at the current position using a particle filter tracking algorithm.
In an optional implementation, the apparatus further comprises:
a fourth judging module, configured to judge, if the face image being tracked and the current face image are not face images of the same person, whether any of the cached face images is a face image of the same person as the current face image;
the second cache module is used for caching the current face image and caching the current position if the face image which is the same as the current face image does not exist in all the cached face images;
a second tracking module for tracking the current face image starting at the current position using a particle filter tracking algorithm.
In an optional implementation, the apparatus further comprises:
a fifth judging module, configured to judge whether an area of the current face image is larger than an area of the cached face image being tracked;
and the second replacement module is used for replacing the cached face image which is being tracked by using the current face image if the area of the current face image is larger than the area of the cached face image which is being tracked.
In an optional implementation, the apparatus further comprises:
the stopping module is used for stopping tracking the face image belonging to the same person as the current face image by using a particle filter tracking algorithm if the face image belonging to the same person as the current face image does not appear in the preset number of video images after the current video image;
the second acquisition module is used for acquiring the geographic information of the cached face image belonging to the same person as the current face image;
the storage module is used for storing the cached face image belonging to the same person as the current face image and the cached geographic information of the face image belonging to the same person as the current face image in a database;
and the deleting module is used for deleting the cached face image belonging to the same person as the current face image and deleting the position of the cached face image belonging to the same person as the current face image.
Compared with the prior art, the embodiment of the invention has the following advantages:
in the embodiment of the invention, detecting face images in a video image with the setaface face detection algorithm can detect every face image that appears in the video image, so no face image is missed. The histogram of oriented gradients (HOG) feature vector is invariant to rotation, scaling and brightness, and remains stable under changes in viewing angle, illumination, noise and similar factors, so it is little affected by the external environment and the method is highly robust.
Drawings
FIG. 1 is a flow chart of steps of an embodiment of a method for processing a face image according to the present invention;
fig. 2 is a block diagram of a face image processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Referring to fig. 1, a flowchart illustrating steps of an embodiment of a face image processing method according to the present invention is shown, and specifically may include the following steps:
in step S101, a current face image in a current video image is detected by a setaface face detection algorithm;
When video of a fixed location is recorded by the surveillance camera, video images are collected continuously. Each time a frame of video image is collected, the setaface face detection algorithm checks whether a face image exists in the collected video image; if a face image exists, the detected face image is taken as the current face image and step S102 is executed.
In step S102, determining a current position of the current face image in the current video image and extracting the current face image from the current video image;
When the current face image is detected, its current position can be determined from the position of the top-left pixel of the rectangular box containing the current face image together with the lengths of the box's long and short sides.
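As a minimal sketch of the position record in step S102: the rectangle is described by its top-left pixel and its long-side and short-side lengths. Treating the long side as the width, and the field names used here, are illustrative assumptions, not taken from the patent.

```python
def face_position(top_left, long_side, short_side):
    """Describe the current position of a detected face image by the
    rectangle that contains it: the top-left pixel plus the lengths of
    the rectangle's long and short sides (as in step S102)."""
    x, y = top_left
    return {"x": x, "y": y, "w": long_side, "h": short_side,
            "center": (x + long_side / 2.0, y + short_side / 2.0)}

# Hypothetical detection: rectangle at (120, 80), 96 px wide, 64 px tall.
pos = face_position((120, 80), 96, 64)
```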
In step S103, determining whether there is a face image being tracked by using a particle filter tracking algorithm according to the current video image;
In the embodiment of the invention, when earlier video images before the current video image were acquired by the surveillance camera, any face image detected for the first time in those images was handled as follows: its position in the earlier video image was determined, the first-appearing face image and that position were cached, and a particle filter tracking algorithm began tracking the face image at that position. When a tracked face image leaves the capture range of the surveillance camera, tracking of the face images belonging to that person with the particle filter tracking algorithm stops.
When the particle filter tracking algorithm begins tracking a first-appearing face image at its position, it must predict the position area in which that face image will appear in the next frame of video.
Several face images may exist in an earlier video image at the same time, at different positions, so the position areas predicted for them in the next frame by the particle filter tracking algorithm also differ.
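The prediction step can be sketched as a minimal particle filter propagation: particles around a face's last known position are spread by a random-walk motion model, and the predicted position area is taken as the bounding region of the particles. The motion model, particle count and noise scale are illustrative assumptions; the patent does not specify them.

```python
import random

def predict_region(last_pos, n_particles=100, noise=15.0, seed=0):
    """Propagate particles from the last known (x, y) face position with a
    random-walk motion model and return the predicted position area as a
    bounding box (x_min, y_min, x_max, y_max)."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    x, y = last_pos
    particles = [(x + rng.gauss(0.0, noise), y + rng.gauss(0.0, noise))
                 for _ in range(n_particles)]
    xs = [p[0] for p in particles]
    ys = [p[1] for p in particles]
    return (min(xs), min(ys), max(xs), max(ys))

# Predicted position area for a face last seen at (320, 240).
region = predict_region((320.0, 240.0))
```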
If no face image is detected in other video images before the current video image, the face image being tracked by using the particle filter tracking algorithm does not exist at the moment, and the current face image in the current video image is the face image appearing for the first time.
In the embodiment of the invention, once each face image in the previous video image is being tracked with the particle filter tracking algorithm, the predicted position area of each such face image in the current video image can be predicted.
Therefore, after the current position of the current face image in the current video image is obtained, it is necessary to determine whether that position falls within the predicted position area, in the current video image, of any face image from the previous video image.
If the current position lies within the predicted position area of some face image from the previous video image, that face image is determined to be the face image being tracked by the particle filter tracking algorithm.
If the current position lies within none of the predicted position areas of the face images from the previous video image, it is determined that there is no face image being tracked by the particle filter tracking algorithm.
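The check in the paragraphs above can be sketched as follows; the dictionary layout, face identifiers and function name are illustrative assumptions.

```python
def find_tracked_face(current_pos, predictions):
    """Return the id of the tracked face whose predicted position area
    contains the current detection, or None if no region contains it.

    current_pos -- (x, y) of the current face detection
    predictions -- {face_id: (x_min, y_min, x_max, y_max)} predicted by
                   the particle filter for each face in the previous frame
    """
    x, y = current_pos
    for face_id, (x0, y0, x1, y1) in predictions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return face_id  # a face being tracked matches this detection
    return None  # no tracked face: this may be a first appearance

# Hypothetical predicted areas for two faces from the previous frame.
predictions = {"face_A": (100, 100, 200, 200),
               "face_B": (300, 50, 400, 150)}
```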
If the face image being tracked by using the particle filter tracking algorithm does not exist, caching the current face image and the current position in step S104, and starting to track the current face image at the current position by using the particle filter tracking algorithm;
If there is no face image being tracked by the particle filter tracking algorithm, it is judged whether any of the cached face images belongs to the same person as the current face image. If none does, the current face image is appearing for the first time; the current face image and the current position are therefore cached, and the particle filter tracking algorithm begins tracking the current face image at that position.
If there is a face image being tracked using the particle filter tracking algorithm, in step S105, obtaining a first histogram of oriented gradients feature vector of a face feature point of the current face image, and obtaining a second histogram of oriented gradients feature vector of the face feature point of the face image being tracked;
in step S106, it is determined whether the face image being tracked and the current face image are face images of the same person according to the first histogram feature vector and the second histogram feature vector;
In the embodiment of the present invention, the Dlib algorithm library may be used to locate the facial feature points in the current face image. For example, regression trees are trained on pixel grey values, multiple regression trees are cascaded into a cascade classifier, and the cascade classifier predicts the facial feature points in the current face image.
In the embodiment of the present invention a face image contains 68 facial feature points, but more may be used, for example 98 or 128; the embodiment of the present invention is not limited in this respect.
In the embodiment of the present invention, the facial feature points include feature points on the contours of the two eyebrows, feature points on the contours of the two eyes, nose feature points, mouth feature points, cheek feature points, and the like.
In the embodiment of the present invention, when the face feature points of the face image are acquired, the acquired face feature points often need to be numbered, and the numbers of different face feature points in the same face image are different.
Thus, each of the face feature points of the acquired current face image has a respective number, and each of the face feature points of the acquired face image being tracked has a respective number.
Feature points with the same number must then be matched between the current face image and the face image being tracked.
If the number of feature points in the current face image that are successfully matched with the face image being tracked is greater than a preset threshold, the current face image and the face image being tracked are determined to belong to the same person.
If the number of successfully matched feature points is less than or equal to the preset threshold, the two are determined not to belong to the same person.
The preset threshold may be 70%, 80%, or 90% of the total number of facial feature points obtained from the current facial image, and the like, which is not limited in this embodiment of the present invention.
To match the feature point numbered A in the current face image against the feature point numbered A in the face image being tracked, a first HOG feature vector is computed for the point in the current face image and a second HOG feature vector for the point in the face image being tracked, and the Euclidean distance between the two vectors is calculated.
If the calculated Euclidean distance is smaller than a preset Euclidean distance threshold, the feature point numbered A in the current face image is determined to be successfully matched with the feature point numbered A in the face image being tracked.
If the calculated Euclidean distance is greater than or equal to the preset threshold, the match is determined to have failed.
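The matching rule in the preceding paragraphs can be sketched as follows; the threshold values and the tiny 2-D descriptors (standing in for the full HOG vectors) are illustrative assumptions.

```python
import math

def euclidean(u, v):
    """Euclidean distance between two equally sized feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def same_person(current_descs, tracked_descs, dist_thresh=0.5, frac_thresh=0.7):
    """Match equally numbered feature points of the current face image and
    the face image being tracked.  A point matches when the Euclidean
    distance between its two HOG descriptors is below dist_thresh; the two
    images are judged to show the same person when more than frac_thresh
    of the current image's points match (70% here).

    current_descs / tracked_descs -- {point number: HOG feature vector}
    """
    matched = sum(
        1 for n, d in current_descs.items()
        if n in tracked_descs and euclidean(d, tracked_descs[n]) < dist_thresh
    )
    return matched > frac_thresh * len(current_descs)

# Hypothetical 2-D descriptors in place of the real HOG vectors.
cur = {1: [0.1, 0.2], 2: [0.3, 0.4], 3: [0.5, 0.6]}
near = {1: [0.1, 0.2], 2: [0.3, 0.4], 3: [0.5, 0.9]}
far = {1: [5.0, 5.0], 2: [5.0, 5.0], 3: [5.0, 5.0]}
```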
The first histogram of oriented gradients (HOG) feature vector of the facial feature point numbered A in the current face image is calculated as follows.
With the facial feature point numbered A in the current face image as the centre, a first pixel region of 16 pixels × 16 pixels is selected.
The first pixel region is divided into 16 different second pixel regions of 4 pixels × 4 pixels each.
For any second pixel region, the gradient directions of its 16 pixels are calculated. The gradient direction range, 0° to 360°, is divided equally into 12 sub-ranges of 30° each, so the sub-range containing the gradient direction of each of the 16 pixels is obtained. The number of pixels whose gradient directions fall into each sub-range is then counted, giving a 12-dimensional feature vector, which is taken as the feature vector of that second pixel region. The feature vectors of the remaining second pixel regions are obtained in the same way.
The feature vectors of the 16 second pixel regions are combined into a 12 × 16 = 192-dimensional feature vector, which is used as the first HOG feature vector of the facial feature point numbered A in the current face image.
In the embodiment of the present invention, the gradient direction range is not limited to the division described above: it may instead be divided equally into 8 sub-ranges of 45° each, or 10 sub-ranges of 36° each, and so on; the embodiment of the present invention is not limited in this respect.
Likewise, the division of the pixel region is not limited to the method above: the first pixel region may also be divided into 64 different second pixel regions of 2 pixels × 2 pixels, or into 4 different second pixel regions of 8 pixels × 8 pixels, and so on; the embodiment of the present invention is not limited in this respect.
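The descriptor computation just described can be sketched as follows: a 16 × 16 pixel patch centred on the feature point, sixteen 4 × 4 cells, twelve 30° orientation bins, and a 12 × 16 = 192-dimensional result. The central-difference gradient and clamped border handling are assumptions the patent does not specify.

```python
import math

def hog_descriptor(image, cx, cy, patch=16, cell=4, bins=12):
    """HOG descriptor at the feature point (cx, cy): a patch x patch pixel
    region centred on the point is split into (patch // cell) ** 2 cells;
    in each cell the gradient direction (0-360 degrees) of every pixel is
    assigned to one of `bins` equal sub-ranges (30 degrees each here) and
    the per-bin pixel counts form the cell's feature vector; concatenating
    the cells gives a bins * (patch // cell) ** 2 = 192-dimensional vector.
    `image` is a 2-D list of grey values; borders are clamped."""
    h, w = len(image), len(image[0])

    def px(x, y):  # clamped pixel access at the image borders
        return image[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]

    x0, y0 = cx - patch // 2, cy - patch // 2
    desc = []
    for cy0 in range(0, patch, cell):          # walk the 4x4 cells
        for cx0 in range(0, patch, cell):
            hist = [0] * bins
            for dy in range(cell):
                for dx in range(cell):
                    x, y = x0 + cx0 + dx, y0 + cy0 + dy
                    gx = px(x + 1, y) - px(x - 1, y)  # central differences
                    gy = px(x, y + 1) - px(x, y - 1)
                    ang = math.degrees(math.atan2(gy, gx)) % 360.0
                    hist[int(ang // (360.0 / bins)) % bins] += 1
            desc.extend(hist)
    return desc

# Synthetic 32x32 grey image; descriptor at its centre feature point.
img = [[(3 * x + 7 * y) % 256 for x in range(32)] for y in range(32)]
d = hog_descriptor(img, 16, 16)
```

Each of the 256 patch pixels contributes one count to exactly one bin, so the 192 entries of the descriptor sum to 256.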
The second HOG feature vector of the facial feature point numbered A in the face image being tracked is then calculated in the same way; the procedure mirrors the calculation of the first HOG feature vector described above and is not repeated here.
Because the facial feature points obtained by the feature point localisation algorithm serve as the key points for extracting HOG features, matching is anchored on facial features such as the eyes, nose, mouth, contour, chin and eyebrows. Matching accuracy is therefore higher, the matching algorithm is fast, and tracking efficiency is improved.
If the face image being tracked and the current face image are face images of the same person, in step S107 the cached position of the face image being tracked is replaced with the current position of the current face image, so that the particle filter tracking algorithm can continue tracking the face image from the current position.
If the face image being tracked is not the same person as the current face image, in step S108, the current face image is cached and the current position is cached, and the tracking of the current face image is started at the current position using the particle filter tracking algorithm.
If the face image being tracked and the current face image are not face images of the same person, it is judged whether any of the cached face images belongs to the same person as the current face image. If none does, the current face image is appearing for the first time; the current face image and the current position are therefore cached, and the particle filter tracking algorithm begins tracking the current face image at that position.
In the embodiment of the invention, detecting face images in a video image with the setaface face detection algorithm can detect every face image that appears in the video image, so no face image is missed. The histogram of oriented gradients (HOG) feature vector is invariant to rotation, scaling and brightness, and remains stable under changes in viewing angle, illumination, noise and similar factors, so it is little affected by the external environment and the method is highly robust.
In the embodiment of the invention, if a person's face stays within the capture range of the surveillance camera for a long time, that person's face image appears in many of the frames the camera collects. Caching the person's face image from every frame would occupy a large amount of cache space. To save cache space without affecting the reviewer's later ability to check which persons appeared within the capture range, the face image does not need to be cached for every frame; caching the person's face image from a single frame is sufficient.
Further, so that the reviewer can see the details of each person's face more clearly when later checking who appeared within the capture range of the surveillance camera, in another embodiment of the present invention the face image cached for a person is the one with the largest area.
In the embodiment of the present invention, it may be determined whether the area of the current face image is larger than the area of the cached face image being tracked, and if the area of the current face image is larger than the area of the cached face image being tracked, the cached face image being tracked may be replaced with the current face image.
The area of the cached face image being tracked can be subtracted from the area of the current face image to obtain an area difference. It is judged whether the difference is greater than 0. If so, it is judged whether the difference exceeds a preset area threshold (which is greater than 0); if it does, the cached face image being tracked can be replaced with the current face image. If the difference is less than or equal to the preset area threshold, a first distance between the leftmost and rightmost facial feature points in the current face image and a second distance between the leftmost and rightmost facial feature points in the cached face image being tracked are calculated, and it is judged whether the first distance is greater than the second. If it is, the cached face image being tracked can be replaced with the current face image.
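The decision chain above can be sketched as follows; the threshold value is illustrative, and the reading that the landmark-span fallback applies only when the area difference is positive is an assumption, since the text is ambiguous on that point.

```python
def should_replace(cur_area, cached_area, cur_span, cached_span, area_thresh=50):
    """Decide whether the current face image replaces the cached face image
    being tracked.  If the current image's area exceeds the cached one's by
    more than area_thresh (> 0), replace; if the positive difference is at
    most area_thresh, fall back to the distance between the leftmost and
    rightmost facial feature points and replace when the current face's
    span is wider."""
    diff = cur_area - cached_area
    if diff <= 0:
        return False               # current face image is not larger
    if diff > area_thresh:
        return True                # clearly larger area wins
    return cur_span > cached_span  # similar areas: wider landmark span wins
```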
In another embodiment of the present invention, if no face image belonging to the same person as the current face image appears in a preset number of video images after the current video image, the person corresponding to the current face image has left the capture range of the surveillance camera. Tracking of that person's face image with the particle filter tracking algorithm is therefore stopped. The geographic information of the cached face image belonging to that person is acquired; the cached face image and its geographic information are stored in a database; and then the cached face image and its cached position are deleted.
It should be noted that, for simplicity of description, the method embodiments are described as a series or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the described order of acts, as some steps may be performed in other orders or concurrently in accordance with embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are preferred embodiments, and the acts involved are not necessarily required to implement the invention.
Referring to fig. 2, a block diagram of a structure of an embodiment of a face image processing apparatus of the present invention is shown, which may specifically include the following modules:
the detection module 11 is configured to detect a current face image in a current video image through the SeetaFace face detection algorithm;
a determining module 12, configured to determine a current position of the current face image in the current video image and extract the current face image from the current video image;
a first judging module 13, configured to judge whether a face image being tracked by using a particle filter tracking algorithm exists according to the current video image;
a first obtaining module 14, configured to obtain, if there is a face image being tracked by using a particle filter tracking algorithm, a first directional gradient histogram feature vector of a face feature point of the current face image, and obtain a second directional gradient histogram feature vector of the face feature point of the face image being tracked;
a second judging module 15, configured to judge whether the face image being tracked and the current face image are face images of the same person according to the first directional gradient histogram feature vector and the second directional gradient histogram feature vector;
a first replacing module 16, configured to replace the position of the cached face image being tracked with the current position if the face image being tracked and the current face image are the face images of the same person.
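For illustration only (not part of the claimed embodiments), the way modules 11 to 16 cooperate on a single frame can be sketched as a control-flow skeleton. The detector, HOG extractor, and matcher are passed in as callables so the sketch stays independent of any particular face library; all names are illustrative assumptions.

```python
# Control-flow sketch of the module-11..16 pipeline for one video frame.
# Detection, HOG extraction, and matching are supplied by the caller.

class TrackerState:
    """Cached state for the face currently being tracked, if any."""
    def __init__(self):
        self.face = None
        self.position = None

    def active(self):
        return self.face is not None


def process_frame(frame, state, detect, hog_of, same_person):
    """Run one frame through the pipeline; return the updated positions."""
    updates = []
    for face, position in detect(frame):            # modules 11 and 12
        if not state.active():                      # module 13
            continue                                # other branches handle this
        v1, v2 = hog_of(face), hog_of(state.face)   # module 14
        if same_person(v1, v2):                     # module 15
            state.position = position               # module 16: replace position
            updates.append(position)
    return updates
```

The skeleton only mirrors the matching branch; the caching and new-track branches described in the optional implementations below would hang off the `continue` and the failed-match case.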
In an optional implementation, the apparatus further comprises:
a third judging module, configured to judge whether a face image of the same person as the current face image exists in all cached face images if there is no face image being tracked by using a particle filter tracking algorithm;
the first cache module is used for caching the current face image and caching the current position if the face image which is the same as the current face image does not exist in all the cached face images;
a first tracking module for tracking the current face image starting at the current position using a particle filter tracking algorithm.
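For illustration only (not part of the claimed embodiments), a particle filter tracker of the kind the tracking modules rely on can be sketched as below. It tracks a 2-D position starting from the cached current position; the random-walk motion model and inverse-distance observation weights are simplifying assumptions, not the patent's method.

```python
# Minimal particle filter for a 2-D position: predict by random walk,
# weight each particle by closeness to the observation, then resample.
# Models and parameters are illustrative.

import math
import random


class ParticleTracker:
    def __init__(self, start_xy, n_particles=100, spread=5.0, seed=0):
        random.seed(seed)                     # deterministic for the sketch
        self.spread = spread
        self.particles = [start_xy] * n_particles

    def step(self, observed_xy):
        """One predict-weight-resample cycle; returns the position estimate."""
        moved = [(x + random.gauss(0, self.spread),
                  y + random.gauss(0, self.spread))
                 for x, y in self.particles]
        weights = [1.0 / (1e-6 + math.hypot(x - observed_xy[0],
                                            y - observed_xy[1]))
                   for x, y in moved]
        self.particles = random.choices(moved, weights=weights, k=len(moved))
        return self.estimate()

    def estimate(self):
        n = len(self.particles)
        return (sum(p[0] for p in self.particles) / n,
                sum(p[1] for p in self.particles) / n)
```

After a few observations the particle cloud concentrates around the observed face position, which is what lets the tracker start "at the current position" and follow the face through subsequent frames.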
In an optional implementation, the apparatus further comprises:
a fourth judging module, configured to judge whether a face image of the same person as the current face image exists in all cached face images if the face image being tracked and the current face image are not face images of the same person;
the second cache module is used for caching the current face image and caching the current position if the face image which is the same as the current face image does not exist in all the cached face images;
a second tracking module for tracking the current face image starting at the current position using a particle filter tracking algorithm.
In an optional implementation, the apparatus further comprises:
a fifth judging module, configured to judge whether an area of the current face image is larger than an area of the cached face image being tracked;
and the second replacement module is used for replacing the cached face image which is being tracked by using the current face image if the area of the current face image is larger than the area of the cached face image which is being tracked.
In an optional implementation, the apparatus further comprises:
the stopping module is used for stopping tracking the face image belonging to the same person as the current face image by using a particle filter tracking algorithm if the face image belonging to the same person as the current face image does not appear in the preset number of video images after the current video image;
the second acquisition module is used for acquiring the geographic information of the cached face image belonging to the same person as the current face image;
the storage module is used for storing the cached face image belonging to the same person as the current face image and the cached geographic information of the face image belonging to the same person as the current face image in a database;
and the deleting module is used for deleting the cached face image belonging to the same person as the current face image and deleting the position of the cached face image belonging to the same person as the current face image.
In the embodiment of the invention, detecting face images in video images through the SeetaFace face detection algorithm can detect all face images appearing in a video image, thereby avoiding missed face images. The directional gradient histogram feature vector is invariant to rotation, scale, and brightness changes, and remains stable under factors such as viewpoint change, lighting, and noise; it is therefore little affected by external environmental factors and has strong robustness.
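For illustration only (not part of the claimed embodiments), matching two faces by directional gradient histogram features can be sketched as below. A tiny pure-Python HOG, a single gradient-orientation histogram over the whole patch with no cell/block structure, stands in for a full implementation, and the similarity threshold is an arbitrary assumption.

```python
# Toy HOG feature vector plus cosine-similarity matching. A real system
# would use a cell/block HOG (e.g. from an image-processing library); this
# whole-patch histogram only illustrates the idea.

import math


def hog_vector(gray, bins=9):
    """L2-normalized unsigned gradient-orientation histogram of a 2-D
    grayscale image given as a list of rows of numbers."""
    h, w = len(gray), len(gray[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gray[y][x + 1] - gray[y][x - 1]      # central differences
            gy = gray[y + 1][x] - gray[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned angle
            hist[int(ang / (180.0 / bins)) % bins] += mag
    norm = math.sqrt(sum(v * v for v in hist)) or 1.0
    return [v / norm for v in hist]


def cosine_similarity(a, b):
    return sum(x * y for x, y in zip(a, b))  # inputs are unit vectors


def same_person(vec1, vec2, threshold=0.8):
    return cosine_similarity(vec1, vec2) >= threshold
```

The L2 normalization is what gives the feature its tolerance to brightness changes: scaling all pixel values scales all gradient magnitudes equally and cancels out in the normalized histogram.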
As for the device embodiment, since it is substantially similar to the method embodiment, its description is relatively brief; for relevant details, refer to the description of the method embodiment.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts among the embodiments may be referred to one another.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or terminal that comprises the element.
The face image processing method and device provided by the present invention have been described in detail above. A specific example has been used herein to explain the principles and implementation of the invention, and the description of the embodiments is intended only to help in understanding the method and its core idea. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the scope of application according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (8)

1. A face image processing method is characterized by comprising the following steps:
detecting a current face image in a current video image through the SeetaFace face detection algorithm;
determining the current position of the current face image in the current video image and extracting the current face image from the current video image;
judging whether a human face image which is tracked by using a particle filter tracking algorithm exists according to the current video image;
if a face image which is being tracked by using a particle filter tracking algorithm exists, acquiring a first direction gradient histogram feature vector of a face feature point of the current face image, and acquiring a second direction gradient histogram feature vector of the face feature point of the face image which is being tracked;
judging whether the face image being tracked and the current face image are face images of the same person or not according to the first direction gradient histogram feature vector and the second direction gradient histogram feature vector;
if the tracked face image and the current face image are the face images of the same person, replacing the cached position of the tracked face image by the current position;
the method further comprises the following steps:
judging whether the area of the current face image is larger than the area of the cached face image which is being tracked;
and if the area of the current face image is larger than the area of the cached face image being tracked, replacing the cached face image being tracked with the current face image.
2. The method of claim 1, further comprising:
if the face image which is tracked by using the particle filter tracking algorithm does not exist, judging whether the face image which is the same as the current face image exists in all the cached face images or not;
if the human face image which is the same as the current human face image does not exist in all the cached human face images, caching the current human face image and caching the current position;
tracking the current face image starting at the current position using a particle filter tracking algorithm.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
if the tracked face image and the current face image are not the face image of the same person, judging whether the face image of the same person as the current face image exists in all the cached face images;
if the human face image which is the same as the current human face image does not exist in all the cached human face images, caching the current human face image and caching the current position;
tracking the current face image starting at the current position using a particle filter tracking algorithm.
4. The method of claim 1, further comprising:
if no human face image belonging to the same person as the current human face image appears in the preset number of video images behind the current video image, stopping tracking the human face image belonging to the same person as the current human face image by using a particle filter tracking algorithm;
acquiring geographic information of cached face images belonging to the same person as the current face image;
storing the cached face image belonging to the same person as the current face image and the cached geographic information of the face image belonging to the same person as the current face image in a database;
deleting the cached face image belonging to the same person as the current face image and deleting the cached position of the face image belonging to the same person as the current face image.
5. A face image processing apparatus, characterized in that the apparatus comprises:
the detection module is used for detecting a current face image in a current video image through the SeetaFace face detection algorithm;
a determining module, configured to determine a current position of the current face image in the current video image and extract the current face image from the current video image;
the first judgment module is used for judging whether a face image which is tracked by using a particle filter tracking algorithm exists according to the current video image;
the first acquisition module is used for acquiring a first direction gradient histogram feature vector of a face feature point of the current face image and acquiring a second direction gradient histogram feature vector of the face feature point of the tracked face image if the face image which is tracked by using a particle filter tracking algorithm exists;
the second judging module is used for judging whether the face image being tracked and the current face image are the face image of the same person or not according to the first direction gradient histogram feature vector and the second direction gradient histogram feature vector;
a first replacement module, configured to replace a position of the cached face image being tracked with the current position if the face image being tracked and the current face image are face images of the same person;
the device further comprises:
a fifth judging module, configured to judge whether an area of the current face image is larger than an area of the cached face image being tracked;
and the second replacement module is used for replacing the cached face image which is being tracked by using the current face image if the area of the current face image is larger than the area of the cached face image which is being tracked.
6. The apparatus of claim 5, further comprising:
a third judging module, configured to judge whether a face image of the same person as the current face image exists in all cached face images if there is no face image being tracked by using a particle filter tracking algorithm;
the first cache module is used for caching the current face image and caching the current position if the face image which is the same as the current face image does not exist in all the cached face images;
a first tracking module for tracking the current face image starting at the current position using a particle filter tracking algorithm.
7. The apparatus of claim 5 or 6, further comprising:
a fourth judging module, configured to judge whether a face image of the same person as the current face image exists in all cached face images if the face image being tracked and the current face image are not face images of the same person;
the second cache module is used for caching the current face image and caching the current position if the face image which is the same as the current face image does not exist in all the cached face images;
a second tracking module for tracking the current face image starting at the current position using a particle filter tracking algorithm.
8. The apparatus of claim 5, further comprising:
the stopping module is used for stopping tracking the face image belonging to the same person as the current face image by using a particle filter tracking algorithm if the face image belonging to the same person as the current face image does not appear in the preset number of video images after the current video image;
the second acquisition module is used for acquiring the geographic information of the cached face image belonging to the same person as the current face image;
the storage module is used for storing the cached face image belonging to the same person as the current face image and the cached geographic information of the face image belonging to the same person as the current face image in a database;
and the deleting module is used for deleting the cached face image belonging to the same person as the current face image and deleting the position of the cached face image belonging to the same person as the current face image.
CN201711434909.8A 2017-12-26 2017-12-26 Face image processing method and device Active CN108334811B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711434909.8A CN108334811B (en) 2017-12-26 2017-12-26 Face image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711434909.8A CN108334811B (en) 2017-12-26 2017-12-26 Face image processing method and device

Publications (2)

Publication Number Publication Date
CN108334811A CN108334811A (en) 2018-07-27
CN108334811B true CN108334811B (en) 2021-06-04

Family

ID=62923707

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711434909.8A Active CN108334811B (en) 2017-12-26 2017-12-26 Face image processing method and device

Country Status (1)

Country Link
CN (1) CN108334811B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109977745B (en) * 2018-12-25 2021-09-14 深圳云天励飞技术有限公司 Face image processing method and related device
CN111652070B (en) * 2020-05-07 2023-07-28 南京航空航天大学 Face sequence collaborative recognition method based on monitoring video

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
CN101339608B (en) * 2008-08-15 2011-10-12 北京中星微电子有限公司 Object tracking method and system based on detection
WO2011035470A1 (en) * 2009-09-24 2011-03-31 Hewlett-Packard Development Company, L.P. Particle tracking method and apparatus
CN103116756B (en) * 2013-01-23 2016-07-27 北京工商大学 A kind of persona face detection method and device
CN103985137B (en) * 2014-04-25 2017-04-05 深港产学研基地 It is applied to the moving body track method and system of man-machine interaction
CN104036523A (en) * 2014-06-18 2014-09-10 哈尔滨工程大学 Improved mean shift target tracking method based on surf features
US9665804B2 (en) * 2014-11-12 2017-05-30 Qualcomm Incorporated Systems and methods for tracking an object
CN105354902B (en) * 2015-11-10 2017-11-03 深圳市商汤科技有限公司 A kind of security management method and system based on recognition of face
CN106709932B (en) * 2015-11-12 2020-12-04 创新先进技术有限公司 Face position tracking method and device and electronic equipment
CN107066958A (en) * 2017-03-29 2017-08-18 南京邮电大学 A kind of face identification method based on HOG features and SVM multi-categorizers

Also Published As

Publication number Publication date
CN108334811A (en) 2018-07-27

Similar Documents

Publication Publication Date Title
CN109858371B (en) Face recognition method and device
CN106557726B (en) Face identity authentication system with silent type living body detection and method thereof
KR102122348B1 (en) Method and device for face in-vivo detection
US10984252B2 (en) Apparatus and method for analyzing people flows in image
EP2891990B1 (en) Method and device for monitoring video digest
US8139817B2 (en) Face image log creation
JP6007682B2 (en) Image processing apparatus, image processing method, and program
US20160092727A1 (en) Tracking humans in video images
US8805123B2 (en) System and method for video recognition based on visual image matching
US10037467B2 (en) Information processing system
KR101781358B1 (en) Personal Identification System And Method By Face Recognition In Digital Image
US20160004935A1 (en) Image processing apparatus and image processing method which learn dictionary
JP2011210238A (en) Advertisement effect measuring device and computer program
US9105101B2 (en) Image tracking device and image tracking method thereof
US20130301876A1 (en) Video analysis
WO2016018728A2 (en) Computerized prominent character recognition in videos
US20170032172A1 (en) Electronic device and method for splicing images of electronic device
Chen et al. Protecting personal identification in video
US20150278584A1 (en) Object discriminating apparatus and method
Goudelis et al. Fall detection using history triple features
Korshunov et al. Towards optimal distortion-based visual privacy filters
CN108334811B (en) Face image processing method and device
Liu et al. A novel video forgery detection algorithm for blue screen compositing based on 3-stage foreground analysis and tracking
CN109308704B (en) Background eliminating method, device, computer equipment and storage medium
CN108229281B (en) Neural network generation method, face detection device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant