CN108229359A - Face image processing method and device - Google Patents
Face image processing method and device
- Publication number: CN108229359A (application CN201711435188.2A)
- Authority
- CN
- China
- Legal status: Pending (status assumed by Google Patents; not a legal conclusion)
Classifications
- G06V40/168 — Human faces: feature extraction; face representation
- G06V40/172 — Human faces: classification, e.g. identification
- G06V40/40 — Spoof detection, e.g. liveness detection
- G06Q20/40145 — Payment protocols: biometric identity checks for transaction verification
- G06Q20/4016 — Payment protocols: transaction verification involving fraud or risk level assessment
- G06T7/246 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T2207/10016 — Image acquisition modality: video; image sequence
- G06T2207/30196 — Subject of image: human being; person
- G06T2207/30201 — Subject of image: face
Abstract
An embodiment of the present invention provides a face image processing method and device. An adjacent previous frame video image and subsequent frame video image of a video are obtained, each containing a target face image. The position in the subsequent frame of the target face image of the previous frame is predicted, and the actual position of the target face image in the subsequent frame is obtained. Whether the distance between the actual position and the predicted position exceeds a preset distance threshold is then judged; if it does, the target face image is determined to be a non-genuine face image. The invention can thus determine whether a target face image is genuine and, when it is not, refuse payment with that image, preventing criminals from making illegal payments with a photo or video containing a face image and so protecting users from economic loss.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to a face image processing method and device.
Background technology
Currently, when shopping in a supermarket, a user pays for the purchased goods at a checkout station. With the rapid development of technology, more and more users pay at the checkout station using a face image.

When a user pays with his or her face image, an image capture device at the checkout station captures the user's face image, extracts the user's facial features from it, determines the user's payment account from those features, and then charges the purchase to that account.

However, a criminal may sometimes pay for goods using a photo containing the user's face image, or using a video containing the user's face image, thereby causing economic loss to the user.
Summary of the invention
To avoid causing economic loss to users, embodiments of the present invention provide a face image processing method and device.
In a first aspect, an embodiment of the present invention provides a face image processing method, the method comprising:

obtaining an adjacent previous frame video image and subsequent frame video image of a video, the previous frame video image and the subsequent frame video image each containing a target face image;

predicting a predicted position, in the subsequent frame video image, of the target face image of the previous frame video image;

obtaining an actual position, in the subsequent frame video image, of the target face image of the subsequent frame video image;

judging whether the distance between the actual position and the predicted position exceeds a preset distance threshold;

and if the distance exceeds the preset distance threshold, determining that the target face image is a non-genuine face image.
In an optional implementation, the method further comprises:

if the distance is less than or equal to the preset distance threshold, subtracting the previous frame video image from the subsequent frame video image to obtain a difference image, the difference image containing at least a foreground image produced by the target face image;

binarizing the difference image to obtain a movement-information image;

obtaining characteristic information of the movement-information image, the characteristic information including at least the dispersion, area and position of the foreground image;

and determining, from the characteristic information, whether the target face image is a genuine face image.
In an optional implementation, if the difference image contains multiple foreground images, the characteristic information further includes the distance between each pair of foreground images.
In an optional implementation, determining from the characteristic information whether the target face image is a genuine face image comprises: inputting the characteristic information into a pre-trained face image classifier, and obtaining from the classifier a judgment result of whether the target face image is a genuine face image.
In an optional implementation, predicting the predicted position, in the subsequent frame video image, of the target face image of the previous frame video image comprises: predicting, with a face tracking algorithm, the position in the subsequent frame video image of the face image of the previous frame video image; wherein the face tracking algorithm includes at least one of: particle filter (PF), tracking-learning-detection (TLD) and multi-domain network (MDNet).
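The first-aspect flow above reduces to a single comparison between a tracker-predicted face position and a detected one. A minimal sketch of that core check, assuming 2-D pixel coordinates and Euclidean distance (the function name and the choice of metric are illustrative, not fixed by the text):

```python
import math

def is_non_genuine(predicted_pos, actual_pos, distance_threshold):
    """Flag the target face image as non-genuine when the detected
    position in the subsequent frame lies further from the tracker's
    prediction than the preset distance threshold."""
    dx = actual_pos[0] - predicted_pos[0]
    dy = actual_pos[1] - predicted_pos[1]
    return math.hypot(dx, dy) > distance_threshold

# A genuine head moves little between adjacent frames, so the
# prediction and the detection should nearly coincide:
print(is_non_genuine((100, 100), (103, 104), 10))  # False (distance 5)
print(is_non_genuine((100, 100), (160, 100), 10))  # True (distance 60)
```

A payment system would call this per pair of adjacent frames and refuse payment on the first `True`.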
In a second aspect, an embodiment of the present invention provides a face image processing device, the device comprising:

a first acquisition module, configured to obtain an adjacent previous frame video image and subsequent frame video image of a video, the previous frame video image and the subsequent frame video image each containing a target face image;

a prediction module, configured to predict a predicted position, in the subsequent frame video image, of the target face image of the previous frame video image;

a second acquisition module, configured to obtain an actual position, in the subsequent frame video image, of the target face image of the subsequent frame video image;

a judgment module, configured to judge whether the distance between the actual position and the predicted position exceeds a preset distance threshold;

and a first determining module, configured to determine, if the distance exceeds the preset distance threshold, that the target face image is a non-genuine face image.
In an optional implementation, the device further comprises:

a subtraction module, configured to subtract, if the distance is less than or equal to the preset distance threshold, the previous frame video image from the subsequent frame video image to obtain a difference image, the difference image containing at least a foreground image produced by the target face image;

a binarization module, configured to binarize the difference image to obtain a movement-information image;

a third acquisition module, configured to obtain characteristic information of the movement-information image, the characteristic information including at least the dispersion, area and position of the foreground image;

and a second determining module, configured to determine, from the characteristic information, whether the target face image is a genuine face image.
In an optional implementation, if the difference image contains multiple foreground images, the characteristic information further includes the distance between each pair of foreground images.
In an optional implementation, the second determining module is specifically configured to input the characteristic information into a pre-trained face image classifier and obtain from the classifier a judgment result of whether the target face image is a genuine face image.
In an optional implementation, the prediction module is specifically configured to predict, with a face tracking algorithm, the position in the subsequent frame video image of the face image of the previous frame video image; wherein the face tracking algorithm includes at least one of: particle filter (PF), tracking-learning-detection (TLD) and multi-domain network (MDNet).
Compared with the prior art, embodiments of the present invention have the following advantages.

In an embodiment of the present invention, an adjacent previous frame video image and subsequent frame video image of a video are obtained, each containing a target face image; the position in the subsequent frame of the target face image of the previous frame is predicted; the actual position of the target face image in the subsequent frame is obtained; and if the distance between the actual position and the predicted position exceeds a preset distance threshold, the target face image is determined to be a non-genuine face image. The method of the embodiment can therefore determine whether a target face image is genuine and, when it is not, refuse payment with it, preventing criminals from making illegal payments with a photo or video containing a face image and so protecting users from economic loss.
Description of the drawings
Fig. 1 is a flow chart of the steps of a face image processing method embodiment of the present invention;
Fig. 2 is a flow chart of the steps of another face image processing method embodiment of the present invention;
Fig. 3 is a structural diagram of a face image processing device embodiment of the present invention.
Specific embodiment
To make the above objectives, features and advantages of the present invention clearer and easier to understand, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.

Referring to Fig. 1, a flow chart of the steps of a face image processing method embodiment of the present invention is shown. The method may include the following steps.
In step S101, an adjacent previous frame video image and subsequent frame video image of a video are obtained, the previous frame video image and the subsequent frame video image each containing a target face image.
The video of the embodiment of the present invention is a video containing a face image captured by an image capture device. For example, when a user needs to pay with his or her face image, the image capture device captures the user's face image, the user's facial features are extracted from it, the user's payment account is determined from those features, and the payment is then made from that account.
However, a criminal may sometimes pay for goods with a photo containing the user's face image, or with a video containing the user's face image, thereby causing economic loss to the user.
To avoid this, before the user pays with his or her face image, the image capture device captures a video containing the user's face image, and whether the user's face image is genuine or non-genuine is judged from that video. If the face image is genuine, payment with it may proceed; if it is non-genuine, payment with it is refused, thereby protecting the user from economic loss.
In embodiments of the present invention, the video contains multiple frames of video images ordered in time, and each frame contains the user's face image, i.e. the target face image. When detecting whether the target face image in the video images is genuine or non-genuine, every two adjacent video images of the video may be formed into a video image group.
In embodiments of the present invention, when the user stands in front of the image capture device, the face image contained in the captured video images is the user's genuine face image.

When a photo containing the user's face image is placed in front of the image capture device, the captured video images contain the photo, the photo contains the user's face image, and the captured face image is then a non-genuine face image.

When a video playback device plays a video containing the user's face image in front of the image capture device, the captured video images contain the face image of the user in the played video, and the captured face image is then a non-genuine face image.
In step S102, the position in the subsequent frame video image of the target face image of the previous frame video image is predicted.

In this step, a face detection algorithm may be used to detect the target face image of the previous frame video image and determine its actual position in the previous frame video image; a face tracking algorithm is then used to predict the position in the subsequent frame video image of the target face image of the previous frame video image.

In embodiments of the present invention, the position of a face image in a video image may be, for example, the coordinates in the video image of the centre of the face image.

The face tracking algorithm in embodiments of the present invention includes at least one of: PF (particle filter), TLD (tracking-learning-detection) and MDNet (multi-domain network). Other face tracking algorithms may of course also be used; embodiments of the present invention place no limit on this.
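The text leaves the choice of tracker open (PF, TLD or MDNet). As a deliberately simple stand-in for any of them, a constant-velocity extrapolation over face centres shows what "predicting the position in the subsequent frame" means; real trackers are far more robust, and this two-point scheme is purely illustrative:

```python
def predict_next_position(prev_positions):
    """Constant-velocity stand-in for a face tracker: extrapolate the
    face centre from its last two observed centres (x, y)."""
    (x1, y1), (x2, y2) = prev_positions[-2], prev_positions[-1]
    # Assume the centre keeps moving by the same displacement per frame.
    return (2 * x2 - x1, 2 * y2 - y1)

# A face centred at (50, 40) then (54, 42) is predicted at (58, 44):
print(predict_next_position([(50, 40), (54, 42)]))  # (58, 44)
```

A particle filter would instead maintain a cloud of weighted position hypotheses and resample them per frame; the interface (positions in, predicted position out) is the same.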
In step S103, the actual position in the subsequent frame video image of the target face image of the subsequent frame video image is obtained.

In this step, a face detection algorithm may be used to detect the target face image of the subsequent frame video image and determine its actual position in the subsequent frame video image.
In step S104, whether the distance between the actual position and the predicted position exceeds a preset distance threshold is judged. The distance between the actual position and the predicted position is calculated and compared with the preset threshold.

If the distance exceeds the preset distance threshold, then in step S105 the target face image is determined to be a non-genuine face image.
In embodiments of the present invention, the time between any two adjacent frames of a video is very short, and the head of a real human body rarely moves far within such a short period; when it does move, the distance moved is very short. The direction of head movement is related to the orientation of the face, to the orientation of the body, and to the direction of previous head movement, among other factors.

The direction and speed of movement of a real human head are regular: neither changes arbitrarily. Consequently, the distance between the actual position in the previous frame video image of the target face image of the previous frame and the actual position in the subsequent frame video image of the target face image of the subsequent frame is usually short.
By contrast, when a criminal shows a photo containing the target face image to the image capture device, the photo is usually shaken, and often shaken at random: the direction of shaking has no regularity and the shaking is fast, so the motion differs greatly from the head movement of a real human body. As a result, the distance between the actual position in the previous frame video image of the target face image of the previous frame and the actual position in the subsequent frame video image of the target face image of the subsequent frame is usually long.

The preset distance threshold may therefore be trained in advance from training samples; if the distance exceeds the preset distance threshold, the target face image is determined to be a non-genuine face image. The training samples are the actual positions in each video image of genuine face images and the actual positions in each video image of non-genuine face images.
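The text says the threshold is trained from samples but gives no procedure. One plausible sketch, under the assumption that a usable threshold sits just above the largest inter-frame displacement observed for genuine faces (the `margin` factor is an invented parameter, not from the text):

```python
def fit_distance_threshold(genuine_displacements, margin=1.5):
    """Choose the preset distance threshold from training data: the
    largest displacement seen between adjacent frames for genuine
    faces, scaled by a safety margin."""
    return max(genuine_displacements) * margin

# Genuine heads moved at most 4 px/frame in the samples, so flag
# anything beyond 6 px as suspicious:
print(fit_distance_threshold([2.0, 3.5, 4.0]))  # 6.0
```

With both genuine and spoof displacement samples available, the threshold could instead be placed to minimise classification error between the two distributions.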
In embodiments of the present invention, an adjacent previous frame video image and subsequent frame video image of a video are obtained, each containing a target face image; the position in the subsequent frame of the target face image of the previous frame is predicted; the actual position of the target face image in the subsequent frame is obtained; and if the distance between the actual position and the predicted position exceeds a preset distance threshold, the target face image is determined to be a non-genuine face image. The method of the embodiment can therefore determine whether a target face image is genuine and, when it is not, refuse payment with it, preventing criminals from making illegal payments with a photo or video containing a face image and so protecting users from economic loss.
If the distance is less than or equal to the preset distance threshold, the embodiment shown in Fig. 2 is further needed to determine whether the face image is non-genuine. Specifically, referring to Fig. 2, the method further comprises:

If the distance is less than or equal to the preset distance threshold, then in step S201 the previous frame video image is subtracted from the subsequent frame video image to obtain a difference image, the difference image containing at least a foreground image produced by the target face image.
In step S202, the difference image is binarized to obtain a movement-information image.

In step S203, characteristic information of the movement-information image is obtained, the characteristic information including at least the dispersion, area and position of the foreground image.

In embodiments of the present invention, if the difference image contains multiple foreground images, the characteristic information further includes the distance between each pair of foreground images.
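Steps S201–S203 (frame subtraction, binarization, feature extraction) can be sketched with NumPy on grayscale frames. The binarization level and the definition of dispersion as the mean pixel distance from the centroid are assumptions, since the text names the features but not their formulas:

```python
import numpy as np

def motion_features(frame_prev, frame_next, binarize_at=30):
    """Subtract adjacent grayscale frames, binarize the difference into
    a movement-information image, and summarise the foreground by its
    area (pixel count), position (centroid) and dispersion (mean
    distance of foreground pixels from the centroid)."""
    diff = np.abs(frame_next.astype(np.int16) - frame_prev.astype(np.int16))
    motion = diff > binarize_at              # movement-information image
    ys, xs = np.nonzero(motion)
    if xs.size == 0:
        return {"area": 0, "position": None, "dispersion": 0.0}
    cy, cx = ys.mean(), xs.mean()
    dispersion = float(np.hypot(xs - cx, ys - cy).mean())
    return {"area": int(xs.size), "position": (cy, cx), "dispersion": dispersion}
```

For a difference image with multiple foregrounds, the connected components would first be separated (e.g. by a labelling pass) and the pairwise centroid distances appended to the feature vector, matching the optional implementation described above.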
In step S204, whether the target face image is a genuine face image is determined from the characteristic information.

In embodiments of the present invention, a classifier training method may be selected in advance, such as a decision tree, SVM (support vector machine), Adaboost (an iterative boosting algorithm) or softmax, and a classifier is then trained with training samples to obtain a face image classifier. The face image classifier outputs, from the characteristic information of the input movement-information image, a judgment result of whether the target face image is a genuine face image.

The training samples are the characteristic information of the motion between adjacent video images in multiple video images containing genuine face images.

Therefore, in this step, the characteristic information of the movement-information image may be input into the pre-trained face image classifier, and a judgment result of whether the target face image is genuine is obtained from the classifier.
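The text lists decision trees, SVM, Adaboost and softmax as candidate classifiers without fixing one. A tiny nearest-centroid classifier over the motion feature vectors stands in here for any of them; it is only meant to show the train-then-judge interface, not to be a competitive model:

```python
def train_face_image_classifier(genuine_feats, spoof_feats):
    """Nearest-centroid stand-in for the face image classifier: label a
    feature vector by whichever class centroid it lies closer to."""
    def centroid(vectors):
        return [sum(col) / len(vectors) for col in zip(*vectors)]
    genuine_c, spoof_c = centroid(genuine_feats), centroid(spoof_feats)

    def classify(feat):
        def sq_dist(c):
            return sum((a - b) ** 2 for a, b in zip(feat, c))
        return "genuine" if sq_dist(genuine_c) <= sq_dist(spoof_c) else "non-genuine"
    return classify

# Feature vectors here are (dispersion, area) pairs, purely illustrative:
clf = train_face_image_classifier([[3.0, 120], [4.0, 150]],
                                  [[15.0, 900], [18.0, 1100]])
print(clf([3.5, 130]))   # genuine
print(clf([16.0, 950]))  # non-genuine
```

An SVM or boosted ensemble would replace `train_face_image_classifier` wholesale; the surrounding flow (extract features, call `classify`, refuse payment on "non-genuine") is unchanged.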
In embodiments of the present invention, each video image of the video captured by the image capture device contains the target face image, and in step S101 every two adjacent video images of the video may be formed into a video image group. If, after performing the flow of steps S101 to S105 and the flow of steps S201 to S204 on one video image group, the target face image is still determined to be genuine, the flows of steps S101 to S105 and steps S201 to S204 are performed on the other video image groups.

If, for any one video image group, the flow of steps S101 to S105 or the flow of steps S201 to S204 judges the target face image to be non-genuine, the target face image is determined to be a non-genuine face image. If, for every video image group, the flows of steps S101 to S105 and steps S201 to S204 judge the target face image to be genuine, the target face image is determined to be a genuine face image.
It should be noted that, for brevity, the method embodiments are described as series of action combinations; those skilled in the art will appreciate, however, that embodiments of the present invention are not limited by the order of actions described, since according to embodiments of the present invention certain steps may be performed in other orders or simultaneously. Those skilled in the art will also appreciate that the embodiments described in this specification are preferred embodiments, and the actions involved are not necessarily required by embodiments of the present invention.
Referring to Fig. 3, a structural diagram of a face image processing device embodiment of the present invention is shown. The device may include the following modules:
a first acquisition module 11, configured to obtain an adjacent previous frame video image and subsequent frame video image of a video, the previous frame video image and the subsequent frame video image each containing a target face image;

a prediction module 12, configured to predict a predicted position, in the subsequent frame video image, of the target face image of the previous frame video image;

a second acquisition module 13, configured to obtain an actual position, in the subsequent frame video image, of the target face image of the subsequent frame video image;

a judgment module 14, configured to judge whether the distance between the actual position and the predicted position exceeds a preset distance threshold;

and a first determining module 15, configured to determine, if the distance exceeds the preset distance threshold, that the target face image is a non-genuine face image.
In an optional implementation, the device further comprises:

a subtraction module, configured to subtract, if the distance is less than or equal to the preset distance threshold, the previous frame video image from the subsequent frame video image to obtain a difference image, the difference image containing at least a foreground image produced by the target face image;

a binarization module, configured to binarize the difference image to obtain a movement-information image;

a third acquisition module, configured to obtain characteristic information of the movement-information image, the characteristic information including at least the dispersion, area and position of the foreground image;

and a second determining module, configured to determine, from the characteristic information, whether the target face image is a genuine face image.
In an optional implementation, if the difference image contains multiple foreground images, the characteristic information further includes the distance between each pair of foreground images.
In an optional implementation, the second determining module is specifically configured to input the characteristic information into a pre-trained face image classifier and obtain from the classifier a judgment result of whether the target face image is a genuine face image.
In an optional implementation, the prediction module 12 is specifically configured to predict, with a face tracking algorithm, the position in the subsequent frame video image of the face image of the previous frame video image; wherein the face tracking algorithm includes at least one of: particle filter (PF), tracking-learning-detection (TLD) and multi-domain network (MDNet).
In embodiments of the present invention, an adjacent previous frame video image and subsequent frame video image of a video are obtained, each containing a target face image; the position in the subsequent frame of the target face image of the previous frame is predicted; the actual position of the target face image in the subsequent frame is obtained; and if the distance between the actual position and the predicted position exceeds a preset distance threshold, the target face image is determined to be a non-genuine face image. The device of the embodiment can therefore determine whether a target face image is genuine and, when it is not, refuse payment with it, preventing criminals from making illegal payments with a photo or video containing a face image and so protecting users from economic loss.
Since the device embodiments are substantially similar to the method embodiments, their description is relatively brief; for related details, refer to the corresponding parts of the method embodiments.

The embodiments in this specification are described progressively, each emphasizing its differences from the others; for the parts that are the same or similar, the embodiments may be referred to one another.
Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, a device or a computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM and optical storage) containing computer-usable program code.
Embodiments of the present invention are described with reference to flowcharts and/or block diagrams of methods, terminal devices (systems) and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing terminal device to produce a machine, such that the instructions executed by the processor produce a device for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data processing terminal device to work in a specific way, so that the instructions stored in the computer-readable memory produce a manufactured article including an instruction device, the instruction device realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing terminal device, so that a series of operation steps are performed on the computer or other programmable terminal device to produce computer-implemented processing, and thus the instructions executed on the computer or other programmable terminal device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the embodiments of the present invention have been described, those skilled in the art, once they learn of the basic inventive concept, can make other changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications that fall within the scope of the embodiments of the present invention.
Finally, it should be noted that, herein, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another entity or operation, without necessarily requiring or implying any actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or terminal device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or terminal device. In the absence of further limitations, an element limited by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or terminal device that includes the element.
The face image processing method and device provided by the present invention have been described in detail above. Specific examples have been applied herein to explain the principle and implementation of the present invention, and the description of the above embodiments is only intended to help in understanding the method of the present invention and its core concept. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and application scope according to the concept of the present invention. In conclusion, the content of this specification should not be construed as limiting the present invention.
Claims (10)
1. A face image processing method, characterized in that the method comprises:
obtaining an adjacent previous frame video image and a subsequent frame video image in a video, the previous frame video image and the subsequent frame video image each containing a target facial image;
predicting a predicted position, in the subsequent frame video image, of the target facial image in the previous frame video image;
obtaining an actual position, in the subsequent frame video image, of the facial image in the subsequent frame video image;
judging whether a distance between the actual position and the predicted position exceeds a preset distance threshold; and
if the distance exceeds the preset distance threshold, determining that the target facial image is a non-genuine facial image.
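The position-consistency check recited in claim 1 can be sketched in a few lines of code. This is an illustrative sketch, not the patented implementation: the position representation (a face-box center), the Euclidean distance metric, and the threshold value are all assumptions, since the claim fixes none of them.

```python
import math

def is_non_genuine(predicted_pos, actual_pos, distance_threshold):
    """Claim-1 style check: if the face detected in the subsequent frame is
    farther from the tracker's prediction than the preset threshold, treat
    the target facial image as non-genuine (e.g. a replayed photo that
    jumped position between frames).  Positions are (x, y) centers; the
    distance metric and threshold are illustrative assumptions."""
    dx = actual_pos[0] - predicted_pos[0]
    dy = actual_pos[1] - predicted_pos[1]
    return math.hypot(dx, dy) > distance_threshold

# A small displacement stays within the threshold -> plausible live face.
print(is_non_genuine((100, 100), (103, 104), 10.0))  # False
# A large jump exceeds the threshold -> flagged as non-genuine.
print(is_non_genuine((100, 100), (160, 20), 10.0))   # True
```

A real face moves smoothly between adjacent frames, so its actual position should stay close to the tracker's prediction; a large jump suggests a swapped photograph or spliced video.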
2. The method according to claim 1, characterized in that the method further comprises:
if the distance is less than or equal to the preset distance threshold, subtracting the previous frame video image from the subsequent frame video image to obtain a difference image, the difference image containing at least a foreground image produced by the target facial image;
performing binarization processing on the difference image to obtain a motion information image;
obtaining feature information of the motion information image, the feature information including at least a dispersion, an area, and a position of the foreground image; and
determining, according to the feature information, whether the target facial image is a genuine facial image.
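The three-step sequence of claim 2 (frame subtraction, binarization, feature extraction) can be sketched with NumPy alone. A minimal sketch under stated assumptions: the binarization threshold value, the mean distance-to-centroid as the "dispersion", and the centroid as the "position" are illustrative choices the claim leaves open.

```python
import numpy as np

def motion_features(prev_frame, next_frame, bin_threshold=30):
    """Subtract adjacent frames, binarize the difference into a motion
    information image, and extract the foreground's feature information
    (area, position, dispersion), roughly following claim 2."""
    diff = np.abs(next_frame.astype(np.int16) - prev_frame.astype(np.int16))
    motion = (diff > bin_threshold).astype(np.uint8)   # binary motion image
    ys, xs = np.nonzero(motion)
    if ys.size == 0:
        return {"area": 0, "position": None, "dispersion": 0.0}
    centroid = (float(xs.mean()), float(ys.mean()))
    # Dispersion: mean distance of foreground pixels from their centroid.
    dispersion = float(np.hypot(xs - centroid[0], ys - centroid[1]).mean())
    return {"area": int(ys.size), "position": centroid, "dispersion": dispersion}

# Synthetic example: a bright 4x4 patch appears in the second frame.
prev = np.zeros((10, 10), dtype=np.uint8)
nxt = prev.copy()
nxt[2:6, 3:7] = 200
feats = motion_features(prev, nxt)
print(feats["area"])      # 16
print(feats["position"])  # (4.5, 3.5)
```

The intuition behind these features is that a live face produces localized, moderately dispersed motion, while a rigidly moved photograph tends to produce a single large, coherent foreground region.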
3. The method according to claim 2, characterized in that, if the difference image contains multiple foreground images, the feature information further includes the distance between each pair of foreground images.
4. The method according to claim 2, characterized in that determining, according to the feature information, whether the target facial image is a genuine facial image comprises:
inputting the feature information into a preset facial image classifier, and obtaining a judgment result, output by the classifier, of whether the target facial image is a genuine facial image.
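Claim 4 does not specify the type of the preset classifier. As a purely illustrative stand-in, the following sketch uses a tiny nearest-centroid classifier over (dispersion, area) feature vectors; in practice the role could be filled by an SVM or a neural network trained on genuine and spoofed samples.

```python
import numpy as np

class FaceImageClassifier:
    """Illustrative stand-in for the 'preset facial image classifier' of
    claim 4: a nearest-centroid classifier over feature vectors such as
    (dispersion, area).  The classifier type and feature choice are
    assumptions; the claim only requires some pre-trained classifier."""

    def fit(self, features, labels):
        features = np.asarray(features, dtype=float)
        labels = np.asarray(labels)
        # Store one mean feature vector (centroid) per class label.
        self.centroids_ = {c: features[labels == c].mean(axis=0)
                           for c in np.unique(labels)}
        return self

    def predict(self, feature):
        feature = np.asarray(feature, dtype=float)
        # Judge by the nearest class centroid.
        return min(self.centroids_,
                   key=lambda c: np.linalg.norm(feature - self.centroids_[c]))

# Toy training set: label 1 = genuine face motion, 0 = spoof (photo/video).
X = [[5.0, 200], [6.0, 220], [0.5, 20], [0.8, 30]]
y = [1, 1, 0, 0]
clf = FaceImageClassifier().fit(X, y)
print(clf.predict([5.5, 210]))  # 1 (judged genuine)
print(clf.predict([0.6, 25]))   # 0 (judged non-genuine)
```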
5. The method according to claim 1, characterized in that predicting the predicted position, in the subsequent frame video image, of the target facial image in the previous frame video image comprises:
predicting, using a face tracking algorithm, the predicted position, in the subsequent frame video image, of the facial image in the previous frame video image;
wherein the face tracking algorithm includes at least: particle filtering (PF), tracking-learning-detection (TLD), and multi-domain network (MDNet).
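Of the tracking algorithms named in claim 5, particle filtering is the easiest to sketch compactly. Below is a minimal particle-filter-style prediction of the face position in the subsequent frame; the constant-velocity motion model, noise level, and particle count are illustrative assumptions, and production trackers (TLD, MDNet) are considerably more involved.

```python
import numpy as np

def predict_next_position(prev_positions, n_particles=500, noise=2.0, seed=0):
    """Minimal particle-filter-style prediction of the face position in the
    subsequent frame from recent face centers, assuming a constant-velocity
    motion model with Gaussian process noise.  Illustrative only."""
    rng = np.random.default_rng(seed)
    pos = np.asarray(prev_positions[-1], dtype=float)
    vel = pos - np.asarray(prev_positions[-2], dtype=float)  # last observed velocity
    # Propagate each particle one step: x' = x + v + noise.
    particles = pos + vel + rng.normal(0.0, noise, size=(n_particles, 2))
    # With uniform weights, the predicted position is the particle mean.
    return particles.mean(axis=0)

track = [(100.0, 100.0), (104.0, 103.0)]  # face centers in two prior frames
pred = predict_next_position(track)
print(np.round(pred))  # approximately [108, 106]
```

A full particle filter would also reweight particles against the new frame's appearance; only the prediction step, which is what claim 1 consumes, is shown here.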
6. A face image processing device, characterized in that the device comprises:
a first acquisition module, configured to obtain an adjacent previous frame video image and a subsequent frame video image in a video, the previous frame video image and the subsequent frame video image each containing a target facial image;
a prediction module, configured to predict a predicted position, in the subsequent frame video image, of the target facial image in the previous frame video image;
a second acquisition module, configured to obtain an actual position, in the subsequent frame video image, of the facial image in the subsequent frame video image;
a judgment module, configured to judge whether a distance between the actual position and the predicted position exceeds a preset distance threshold; and
a first determining module, configured to determine that the target facial image is a non-genuine facial image if the distance exceeds the preset distance threshold.
7. The device according to claim 6, characterized in that the device further comprises:
a subtraction module, configured to subtract the previous frame video image from the subsequent frame video image to obtain a difference image if the distance is less than or equal to the preset distance threshold, the difference image containing at least a foreground image produced by the target facial image;
a binarization module, configured to perform binarization processing on the difference image to obtain a motion information image;
a third acquisition module, configured to obtain feature information of the motion information image, the feature information including at least a dispersion, an area, and a position of the foreground image; and
a second determining module, configured to determine, according to the feature information, whether the target facial image is a genuine facial image.
8. The device according to claim 7, characterized in that, if the difference image contains multiple foreground images, the feature information further includes the distance between each pair of foreground images.
9. The device according to claim 7, characterized in that the second determining module is specifically configured to:
input the feature information into a preset facial image classifier, and obtain a judgment result, output by the classifier, of whether the target facial image is a genuine facial image.
10. The device according to claim 6, characterized in that the prediction module is specifically configured to:
predict, using a face tracking algorithm, the predicted position, in the subsequent frame video image, of the facial image in the previous frame video image;
wherein the face tracking algorithm includes at least: particle filtering (PF), tracking-learning-detection (TLD), and multi-domain network (MDNet).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711435188.2A CN108229359A (en) | 2017-12-26 | 2017-12-26 | A kind of face image processing process and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108229359A true CN108229359A (en) | 2018-06-29 |
Family
ID=62648090
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711435188.2A Pending CN108229359A (en) | 2017-12-26 | 2017-12-26 | A kind of face image processing process and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108229359A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111046788A (en) * | 2019-12-10 | 2020-04-21 | 北京文安智能技术股份有限公司 | Method, device and system for detecting staying personnel |
CN112446229A (en) * | 2019-08-27 | 2021-03-05 | 北京地平线机器人技术研发有限公司 | Method and device for acquiring pixel coordinates of marker post |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103679118A (en) * | 2012-09-07 | 2014-03-26 | 汉王科技股份有限公司 | Human face in-vivo detection method and system |
US8856541B1 (en) * | 2013-01-10 | 2014-10-07 | Google Inc. | Liveness detection |
CN104269028A (en) * | 2014-10-23 | 2015-01-07 | 深圳大学 | Fatigue driving detection method and system |
CN105205455A (en) * | 2015-08-31 | 2015-12-30 | 李岩 | Liveness detection method and system for face recognition on mobile platform |
CN106372576A (en) * | 2016-08-23 | 2017-02-01 | 南京邮电大学 | Deep learning-based intelligent indoor intrusion detection method and system |
CN106897658A (en) * | 2015-12-18 | 2017-06-27 | 腾讯科技(深圳)有限公司 | The discrimination method and device of face live body |
CN107133973A (en) * | 2017-05-12 | 2017-09-05 | 暨南大学 | A kind of ship detecting method in bridge collision prevention system |
Non-Patent Citations (1)
Title |
---|
HU YIFAN et al.: "Research on Face Detection, Tracking and Recognition System Based on Video Surveillance", Computer Engineering and Applications * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108875676B (en) | Living body detection method, device and system | |
KR102189205B1 (en) | System and method for generating an activity summary of a person | |
JP5674212B2 (en) | Method, computer program and system for video stream processing | |
CN107958235B (en) | Face image detection method, device, medium and electronic equipment | |
US10580143B2 (en) | High-fidelity 3D reconstruction using facial features lookup and skeletal poses in voxel models | |
KR20200096565A (en) | Face recognition method and device, electronic device and storage medium | |
CN109086873A (en) | Training method, recognition methods, device and the processing equipment of recurrent neural network | |
CN106295515B (en) | Determine the method and device of the human face region in image | |
Jain et al. | Deep NeuralNet for violence detection using motion features from dynamic images | |
CN109145867A (en) | Estimation method of human posture, device, system, electronic equipment, storage medium | |
CN111614867B (en) | Video denoising method and device, mobile terminal and storage medium | |
JP6103080B2 (en) | Method and apparatus for detecting the type of camera motion in a video | |
CN109862323A (en) | Playback method, device and the processing equipment of multi-channel video | |
CN107451066A (en) | Interim card treating method and apparatus, storage medium, terminal | |
CN109284864A (en) | Behavior sequence obtaining method and device and user conversion rate prediction method and device | |
CN111160187B (en) | Method, device and system for detecting left-behind object | |
CN110111106A (en) | Transaction risk monitoring method and device | |
CN107609703A (en) | A kind of commodity attribute method and system applied to physical retail store | |
CN111291668A (en) | Living body detection method, living body detection device, electronic equipment and readable storage medium | |
CN108229359A (en) | A kind of face image processing process and device | |
CN109783680B (en) | Image pushing method, image acquisition device and image processing system | |
Liu et al. | ACDnet: An action detection network for real-time edge computing based on flow-guided feature approximation and memory aggregation | |
Yeh et al. | Efficient camera path planning algorithm for human motion overview | |
CN111260685B (en) | Video processing method and device and electronic equipment | |
EP3971781A1 (en) | Method and apparatus with neural network operation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20180629 |