CN106778496A - Liveness detection method and device - Google Patents

Liveness detection method and device

Info

Publication number
CN106778496A
CN106778496A (application number CN201611039845.7A)
Authority
CN
China
Prior art keywords
lip
character
verification code
random verification
characteristic vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611039845.7A
Other languages
Chinese (zh)
Inventor
周曦
邓武平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Zhongke Yuncong Technology Co Ltd
Original Assignee
Chongqing Zhongke Yuncong Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Zhongke Yuncong Technology Co Ltd
Priority to CN201611039845.7A
Publication of CN106778496A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/40 - Spoof detection, e.g. liveness detection
    • G06V 40/45 - Detection of the body part being alive
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 - Facial expression recognition
    • G06V 40/176 - Dynamic expression

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a liveness detection method and device for detecting and recognizing the face image of a subject to determine whether it is a live person. The method includes: capturing a video of the subject reading a random verification code; obtaining the feature vector of the lip image sequence of the lip region in each frame of the video; calling a pre-trained lip-reading recognition model with the feature vectors of consecutive frames to recognize the subject's lip-reading information; detecting whether the lip-reading information matches the characters of the random verification code; and, when the lip-reading information matches the characters of the random verification code, determining that the subject is a live body. Compared with traditional discrimination methods, the lip features are captured by the same camera that captures the face, so no additional hardware is required, which reduces the cost of the verification system and makes it more convenient to use; having the subject read a random verification code directly determines whether the subject is a live body, which not only improves the security and anti-spoofing capability of the recognition system but also improves the efficiency of liveness verification.

Description

Liveness detection method and device
Technical field
The present invention relates to the field of biometric recognition, and in particular to a liveness detection method and device based on image lip-reading recognition.
Background art
With the development of biometric recognition technology, face recognition has become a common method of confirming user identity. In the prior art, some face recognition methods add face liveness verification so that face detection and recognition can be performed more reliably.
However, in existing face recognition processes, an illegal registrant can "deceive" the camera or other image capture device with a forged face. For example, a photo or a video clip of the registrant's face can be placed in front of the image capture device, so that the face image acquired by the device actually comes from a photo or a video clip; alternatively, the illegal registrant can forge a three-dimensional face model and place it in front of the image capture device, so that the acquired face image is an image of the three-dimensional face model. A comparison based only on facial features and their distribution cannot detect this, which leads to weak anti-spoofing capability and low security of the identity recognition system.
Summary of the invention
In view of the above shortcomings of the prior art, an object of the present invention is to provide a liveness detection method and device that solve the prior-art problem that it cannot be determined whether the face of a detected subject belongs to a live body, which results in weak anti-spoofing capability and low security of identity recognition systems.
To achieve the above and other related objects, the present invention provides a liveness detection method for detecting and recognizing the face image of a subject to determine whether it is a live person, the liveness detection method including:
capturing a video of the subject reading a random verification code;
obtaining the feature vector of the lip image sequence of the lip region in each frame of the video;
calling a pre-trained lip-reading recognition model with the feature vectors of consecutive frames to recognize the subject's lip-reading information;
detecting whether the lip-reading information matches the characters of the random verification code; when the lip-reading information matches the characters of the random verification code, the subject is determined to be a live body.
Another object of the present invention is to provide a liveness detection device for detecting and recognizing the face image of a subject to determine whether it is a live person, including:
an acquisition module for capturing a video of the subject reading a random verification code;
a feature extraction module for obtaining the feature vector of the lip image sequence of the lip region in each frame of the video;
a lip-reading recognition module for calling a pre-trained lip-reading recognition model with the feature vectors of consecutive frames to recognize the subject's lip-reading information;
a detection module for detecting whether the lip-reading information matches the characters of the random verification code; when the lip-reading information matches the characters of the random verification code, the subject is determined to be a live body.
As described above, the liveness detection method and device of the present invention have the following beneficial effects:
A video of the subject reading a random verification code is captured, and the video is preprocessed, segmented, and aligned in turn so as to extract the feature vectors of the subject's lip image sequence; a pre-trained lip-reading recognition model recognizes the lip-reading information corresponding to the feature vectors of the lip image sequence; the lip-reading information is compared with the characters of the random verification code, and the subject is determined to be a live body only when the characters match completely. Compared with traditional discrimination methods, the lip features are captured by the same camera that captures the face, so no additional hardware is required, which reduces the cost of the verification system and makes it more convenient to use; having the subject read a random verification code directly determines whether the subject is a live body, which not only improves the security and anti-spoofing capability of the recognition system but also improves the efficiency of liveness verification.
Brief description of the drawings
Fig. 1 is a flowchart of a liveness detection method provided by the present invention;
Fig. 2 is a flowchart of step S2 in a liveness detection method provided by the present invention;
Fig. 3 is a schematic block diagram of key-point-based lip image segmentation and alignment provided by the present invention;
Fig. 4 is a block diagram of the temporal feature extraction scheme provided by the present invention;
Fig. 5 is a block diagram of the ISA network structure provided by the present invention;
Fig. 6 is a block diagram of the stacked convolutional ISA network structure provided by the present invention;
Fig. 7 is a flowchart of computing video features with a stacked convolutional ISA network provided by the present invention;
Fig. 8 is a flowchart of generating observation models based on time-series segmentation and a hidden Markov model provided by the present invention;
Fig. 9 is a hidden Markov model state transition diagram provided by the present invention;
Fig. 10 is a flowchart of step S3 in a liveness detection method provided by the present invention;
Fig. 11 is a flowchart of step S4 in a liveness detection method provided by the present invention;
Fig. 12 is a structural block diagram of a liveness detection device provided by the present invention;
Fig. 13 is a structural block diagram of the feature extraction module in a liveness detection device provided by the present invention;
Fig. 14 is a structural block diagram of the lip-reading recognition module in a liveness detection device provided by the present invention;
Fig. 15 is a structural block diagram of the detection module in a liveness detection device provided by the present invention.
Specific embodiment
The embodiments of the present invention are described below by way of specific examples, and those skilled in the art can easily understand other advantages and effects of the present invention from the contents disclosed in this specification. The present invention can also be implemented or applied through other different specific embodiments, and the details in this specification can be modified or changed in various ways from different viewpoints and for different applications without departing from the spirit of the present invention. It should be noted that, where there is no conflict, the following embodiments and the features in the embodiments can be combined with each other.
It should be noted that the drawings provided in the following embodiments only schematically illustrate the basic concept of the present invention; they show only the components related to the present invention rather than the component numbers, shapes, and sizes of an actual implementation. In actual implementation, the form, quantity, and proportion of each component can be changed arbitrarily, and the component layout may be more complex.
Referring to Fig. 1, the present invention provides a flowchart of a liveness detection method for detecting and recognizing the face image of a subject to determine whether it is a live person. The liveness detection method includes:
Step S1: capturing a video of the subject reading a random verification code;
Specifically, the random verification code is generated at random in advance from the training-set characters, and a video of the subject reading the random verification code is captured by an image capture device; the face image must be acquired through a suitable image capture device, and it is obtained in real time, which ensures that the acquired image is an image of the person currently within the acquisition region of the image capture device.
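As a concrete illustration of step S1, the sketch below generates a code and captures frames with OpenCV; the character vocabulary, code length, capture duration, and camera index are illustrative assumptions, not values specified by this application.

```python
# A minimal sketch of step S1, assuming OpenCV is available for capture.
import random
import cv2

DIGITS = "0123456789"  # assumed verification-code vocabulary (training-set characters)

def generate_verification_code(length=4):
    """Randomly draw characters from the training-set vocabulary."""
    return "".join(random.choice(DIGITS) for _ in range(length))

def capture_reading(duration_s=3.0, camera_index=0):
    """Capture frames while the subject reads the code aloud."""
    cap = cv2.VideoCapture(camera_index)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    frames = []
    for _ in range(int(duration_s * fps)):
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames

code = generate_verification_code()
print("Please read:", code)
video_frames = capture_reading()
```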
Step S2: obtaining the feature vector of the lip image sequence of the lip region in each frame of the video;
Specifically, in this application the lip change corresponding to the lip region includes any one of lip shape, lip texture, and lip color; correspondingly, the lip image sequence contains any one of a lip shape sequence, a lip texture sequence, and a lip color sequence, and the lip changes correspond one-to-one with the lip image sequence.
Step S3: calling a pre-trained lip-reading recognition model with the feature vectors of consecutive frames to recognize the subject's lip-reading information;
Specifically, before the lip-reading recognition model is used, it is trained on the lip image sequences (state transition data) corresponding to a large number of characters based on a hidden Markov model; then the feature vectors of the lip image sequence corresponding to the subject are decoded against the random verification code to recognize the lip-reading information of the subject reading the random verification code.
Step S4: detecting whether the lip-reading information matches the characters of the random verification code; when the lip-reading information matches the characters of the random verification code, the subject is determined to be a live body.
Specifically, as shown in Fig. 11, the flowchart of step S4 in a liveness detection method provided by the present invention includes:
Step S401: detecting whether the lip-reading information of the subject matches the characters of the random verification code;
Step S402: when the lip-reading information matches the characters of the random verification code, the subject is determined to be a live body;
Step S403: when the lip-reading information does not match the characters of the random verification code, the subject is determined not to be a live body.
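A minimal sketch of the decision in steps S401 to S403, assuming the recognition stage returns the recognized characters as a plain string (an illustrative assumption):

```python
def is_live(recognized_chars: str, verification_code: str) -> bool:
    """The subject is judged to be a live body only when every character matches."""
    return recognized_chars == verification_code

# Example: is_live("3751", "3751") -> True; any mismatch -> not a live body.
```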
Currently common face liveness discrimination methods either use an auxiliary light source and infrared thermal imaging to detect temperature in order to determine whether the authenticated subject is a live body, or issue a preset instruction asking the subject to turn the head in order to confirm whether the subject is a live body.
In this embodiment, the lip features are captured by the same camera that captures the face, so no additional hardware needs to be added, which reduces the cost of the verification system and makes it more convenient to use; having the subject read a random verification code directly determines whether the subject is a live body, which not only improves the security and anti-spoofing capability of the recognition system but also improves the efficiency of liveness verification.
Referring to Fig. 2, the flowchart of step S2 in a liveness detection method provided by the present invention is detailed as follows:
Step S201: preprocessing each frame of the video to obtain a video image of a preset specification;
Step S202: segmenting the video image of the preset specification to obtain the lip region, and applying an affine transformation to the lip region to obtain an aligned lip image sequence;
Step S203: computing the feature vector of the lip image sequence with a feature extraction algorithm based on a stacked convolutional independent subspace analysis (ISA) network.
In this embodiment, the lips of the face in the video are located based on face detection and key-point extraction. As shown in Fig. 3, there are 13 lip key points in the face image, and the two mouth-corner key points are corner points, which are used to compute the translation and rotation factors relative to a standard mouth. Because mouth sizes differ between people and between frames, the preprocessing must also apply a scale transformation to all images based on the standard mouth, in addition to rotation and translation, in order to exclude the influence of size on recognition. In the prior art, an affine transformation only requires three mouth key points, and normalization and alignment can be completed geometrically from the corner points mentioned above. However, this loses the relative variability between frames, because a mouth aligned on the basis of the mouth-corner key points has the same width in every aligned segment. The difficulty of lip alignment therefore lies in how to preserve the relative changes between frames while normalizing the mouths of different frames to the same scale.
To overcome the problem that lip thickness and width differ greatly between people, the eye distance is used as the reference to transform the lips of different people to the same scale. Specifically, in the schematic block diagram of lip image segmentation and alignment in Fig. 3, the two eye-corner key points are used and the scale factor is computed from the eye distance, yielding the translation, rotation, and scale factors of the affine transformation; the face image is processed by the above key-point-based lip segmentation and affine transformation to obtain an aligned lip image sequence.
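The alignment described above can be sketched as follows, assuming facial landmarks (eye corners and mouth corners) are already available from an external detector; the crop size and target eye distance are illustrative assumptions.

```python
# A minimal sketch: rotate/translate from the mouth corners, but scale by the
# eye distance so that inter-frame mouth-width variation is preserved.
import numpy as np
import cv2

def align_lip(frame, left_eye, right_eye, left_mouth, right_mouth,
              out_size=(64, 32), target_eye_dist=100.0):
    left_eye, right_eye = np.asarray(left_eye, float), np.asarray(right_eye, float)
    left_mouth, right_mouth = np.asarray(left_mouth, float), np.asarray(right_mouth, float)
    scale = target_eye_dist / max(np.linalg.norm(right_eye - left_eye), 1e-6)
    dx, dy = right_mouth - left_mouth
    angle = np.degrees(np.arctan2(dy, dx))            # rotation from mouth corners
    center = tuple((left_mouth + right_mouth) / 2.0)  # mouth center
    M = cv2.getRotationMatrix2D(center, angle, scale)
    # shift the mouth center to the middle of the output crop
    M[0, 2] += out_size[0] / 2.0 - center[0]
    M[1, 2] += out_size[1] / 2.0 - center[1]
    return cv2.warpAffine(frame, M, out_size)
```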
The goal is to obtain feature vectors that are consistent within a class and discriminative between classes, so that later recognition of the feature vectors is easier. Traditional dynamic visual features of lips include features formed from the positions of the key points and their difference vectors, and image statistical features such as HoG and SIFT. Traditional feature extraction algorithms are designed manually based on experience and analysis of the problem; since many factors affect representation ability, the robustness of such features is clearly limited. Because the captured video simultaneously contains the face and mouth images and their motion information, each feature vector is extracted from a subimage sequence formed from multiple consecutive frames, as shown in the block diagram of the temporal feature extraction scheme in Fig. 4. To ensure that the features contain as much useful information as possible, the subimage sequences overlap by a certain number of frames, i.e. the multiple subimage sequences F corresponding to each feature vector sequence overlap in frames with those of other feature vector sequences.
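The overlapping subimage sequences of Fig. 4 can be sketched as a sliding window; the window length and stride below are illustrative assumptions, not values given by the patent.

```python
# A minimal sketch of building overlapping subimage sequences from aligned lip crops.
def sliding_windows(lip_frames, window=10, stride=5):
    """Yield overlapping subimage sequences; stride < window gives the
    frame overlap described above."""
    for start in range(0, len(lip_frames) - window + 1, stride):
        yield lip_frames[start:start + window]

# Each yielded window is later fed to the stacked convolutional ISA network
# to produce one feature vector F_t.
```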
As shown in Fig. 5, the block diagram of the ISA network structure provided by the present invention: the basis of the stacked convolutional ISA network is the ISA network, a two-layer neural network whose nonlinear nodes in the first and second layers perform squaring and square-root operations respectively; the input nodes are connected to the first-layer nodes by the weight matrix W, and the first-layer nodes are connected to the second-layer nodes by the weight matrix V. Let the input vector be $x_t$; then the response of the $i$-th node of the second layer is
$$p_i(x_t; W, V) = \sqrt{\sum_{k=1}^{m} V_{ik} \Big(\sum_{j=1}^{n} W_{kj} x_{t,j}\Big)^2}$$
where $W$ and $V$ are the weight matrices, $x_t$ is the input vector, and $m$, $n$, $i$, $j$ are integers greater than 1. The parameters $W$ and $V$ of the ISA network are obtained by solving the following optimization problem with projected gradient descent:
$$\min_{W} \sum_{t=1}^{T} \sum_{i=1}^{m} p_i(x_t; W, V) \quad \text{subject to } W W^{T} = I$$
where $T$ is the number of samples and "subject to" denotes the constraint.
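The response and constraint above can be sketched in NumPy as follows; the layer sizes are illustrative, and the orthonormal initialization of W only illustrates the constraint W W^T = I rather than the projected gradient training itself.

```python
# A minimal NumPy sketch of the ISA response p_i(x_t; W, V).
import numpy as np

def isa_response(x_t, W, V):
    """p_i(x_t; W, V) = sqrt( sum_k V[i, k] * (W[k, :] @ x_t) ** 2 )."""
    first_layer = (W @ x_t) ** 2       # first layer: squaring nonlinearity
    return np.sqrt(V @ first_layer)    # second layer: square-root pooling

# Toy shapes: n-dimensional input, k first-layer units, m second-layer units.
n, k, m = 200, 100, 50
Q, _ = np.linalg.qr(np.random.randn(n, k))   # orthonormal columns (n >= k)
W = Q.T                                      # so W @ W.T equals the identity
V = np.random.rand(m, k)
x_t = np.random.randn(n)
p = isa_response(x_t, W, V)                  # responses of the m pooling units
```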
A feature extraction algorithm based on a stacked convolutional independent subspace analysis network is adopted. The stacked convolutional independent subspace analysis (ISA) network is a deep learning algorithm designed to extract video features, and its advantages include: 1) the extracted features contain both image and motion information, making it suitable for recognizing lip motion; 2) the nonlinear units are simple and fast to compute, making it suitable for extracting features from high-dimensional video data; 3) the network structure is simple and clear and easy to implement; 4) training is unsupervised, requiring no manual annotation of large amounts of data, which is very convenient.
As shown in Fig. 6, the block diagram of the stacked convolutional ISA network structure: when the input video pixels are high-dimensional, training an ISA network is very slow, and this problem can be overcome with a stacked convolutional ISA network, which is built by stacking ISA and principal component analysis (PCA) layer by layer. The way a stacked convolutional ISA network computes video features is shown in Fig. 7, the flowchart of computing video features with a stacked convolutional ISA network provided by the present invention: first, the pixels of a small video block are flattened into a vector and fed to the first-layer ISA network; then the ISA outputs of adjacent video blocks are combined over a larger region and, after PCA dimensionality reduction, fed to the second-layer ISA network, and so on; finally, the outputs of every ISA layer are concatenated into a vector that serves as the feature vector of the video block. Because the input dimensionality of each ISA layer is never too high, and the layers can be trained one at a time, the training speed of the stacked convolutional ISA network is improved.
To obtain the feature vector F_t, a video block formed from multiple consecutive lip-reading images is divided into multiple small video blocks as shown in Fig. 7; each small video block is fed to the stacked convolutional ISA network to extract a feature vector, and finally the feature vectors of the small video blocks are concatenated into the resulting feature vector F_t.
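A minimal sketch of this layer-wise pipeline, reusing the isa_response() function sketched earlier; the PCA pooling over neighbouring block outputs is simplified, and the dimensions are illustrative assumptions.

```python
# Sketch of Figs. 6-7: layer-1 ISA on small blocks, PCA reduction,
# layer-2 ISA, then concatenation of every layer's output into F_t.
import numpy as np

def pca_reduce(X, dim):
    """Project the rows of X onto the top `dim` principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    dim = min(dim, Vt.shape[0])
    return Xc @ Vt[:dim].T

def stacked_isa_feature(small_blocks, W1, V1, W2, V2, pca_dim=50):
    """small_blocks: flattened small video blocks (1-D arrays) from one window."""
    layer1 = np.array([isa_response(b, W1, V1) for b in small_blocks])
    pooled = pca_reduce(layer1, pca_dim)       # simplified pooling of adjacent outputs
    layer2 = np.array([isa_response(c, W2, V2) for c in pooled])
    # concatenate the outputs of every ISA layer into one feature vector F_t
    return np.concatenate([layer1.ravel(), layer2.ravel()])
```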
Preferably, the pre-trained lip-reading recognition model includes:
The training set contains several characters, and the random verification code is generated at random from the training set; each character in the training set is trained based on a hidden Markov model to obtain a corresponding unique prediction model sequence, the hidden Markov model containing N prediction models, N >= 1; the state transition matrix and Gaussian mixture model of the hidden Markov model are computed to obtain the pre-trained lip-reading recognition model. The time-series segment corresponding to each character is generated by the hidden Markov model; each moment in the time-series segment corresponds to a hidden state, the changes of the hidden states are represented by the state transition matrix, and each hidden state also corresponds to an observation model, which is modeled as a Gaussian mixture model.
In this embodiment, as shown in Fig. 8, the flowchart of generating observation models based on time-series segmentation and a hidden Markov model provided by the present invention: recognizing the multiple individual characters contained in a lip-reading feature sequence requires segmenting the time series and recognizing the individual character corresponding to each segment. This process is similar to speech recognition, so the hidden Markov model (HMM) commonly used in speech recognition is used to implement lip-reading recognition. Specifically, the time-series segment corresponding to each individual character is generated by one HMM, and each moment t corresponds to one hidden state S_t; for example, S_1 in the segment is the initial state and S_t is the final state. As shown in Fig. 9, the hidden Markov model state transition diagram: in an HMM the generation probability of the hidden state S_t at each moment depends on the hidden state S_{t-1} at the previous moment, and the hidden states of adjacent moments are coupled by the state transition matrix A, where the element a_ij in row i and column j of A represents the probability of transferring from state i to state j; the state transition matrix can be represented more intuitively by the state transition diagram shown in Fig. 9. Each state has its own observation model from which the feature vectors are generated, and the observation model is modeled as a Gaussian mixture model (GMM), which can represent complex multi-modal distributions.
The goal of training the lip-reading recognition model is to estimate the state transition matrix and the GMM parameters of the HMM. Online lip-reading recognition then estimates, with the model parameters fixed, the optimal state path of the feature sequence to be recognized and combines the state path into a character path; this task is completed by the well-known Viterbi decoding algorithm. HMM-based training and recognition of the lip-reading model can be completed with mature toolkits from the speech recognition field, such as Kaldi and HTK.
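The Viterbi decoding mentioned above can be sketched as follows; in practice it is performed by toolkits such as Kaldi or HTK, so the function below only illustrates the recursion, assuming log-probabilities are given.

```python
# A minimal Viterbi-decoding sketch for the HMM described above.
import numpy as np

def viterbi(log_pi, log_A, log_B):
    """log_pi: (S,) initial-state log-probs; log_A: (S, S) transition log-probs;
    log_B: (T, S) log-likelihoods of each feature vector under each state's GMM.
    Returns the optimal state path."""
    T, S = log_B.shape
    delta = log_pi + log_B[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A          # (previous state, current state)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]                             # best state sequence over time
```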
Fig. 10 is a flowchart of step S3 in a liveness detection method provided by the present invention, including:
Step S301: matching the feature vectors of the lip image sequence against the hidden Markov model, computing the optimal state path of the feature vectors of the lip image sequence along the time series, and recognizing single characters from the optimal state path;
In this embodiment, a time-continuous hidden Markov model parses the signal frame by frame; for each frame, the static features and the dynamic features (the change relative to the previous frame) are used to determine the character corresponding to the current frame and the time state it occupies within the standard signal of that character, and the per-frame analysis results are concatenated to obtain the recognized characters corresponding to the subject's signal. The time-continuous hidden Markov model used here has a two-level structure: the first level is a character-level hidden Markov time-series model, in which the standard pronunciation of each character is represented by a first-order time-series model containing four states, and the change of each state depends only on the previous state, as shown in Fig. 9, where 0 is the initial state, 1 is the final state, and the Gaussian mixture order of each state is 4; the second level is a string-level continuous hidden Markov model, in which the standard pronunciation of the random verification code is formed by arbitrarily concatenating the character-level hidden Markov models, containing N prediction models, N >= 1.
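This two-level structure amounts to chaining character-level HMMs into one string-level HMM for the verification code; a minimal sketch under illustrative assumptions (state counts, and the way adjacent models are linked) is shown below.

```python
# Sketch: build a string-level HMM by concatenating character-level HMMs.
import numpy as np

def concat_char_hmms(code, char_models):
    """char_models[c] = (log_A_c, gmms_c) for one character c.
    Returns the transition matrix and observation models of the string-level
    HMM obtained by chaining the character models in code order."""
    blocks, gmms = [], []
    for c in code:
        log_A_c, gmms_c = char_models[c]
        blocks.append(log_A_c)
        gmms.extend(gmms_c)
    S = sum(b.shape[0] for b in blocks)
    log_A = np.full((S, S), -np.inf)
    offset = 0
    for b in blocks:
        n = b.shape[0]
        log_A[offset:offset + n, offset:offset + n] = b
        if offset + n < S:
            # link the last state of one character model to the first state of
            # the next (a full implementation would renormalize this row)
            log_A[offset + n - 1, offset + n] = 0.0
        offset += n
    return log_A, gmms
```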
Step S302: combining the single characters recognized from the feature vectors in time order to generate the subject's lip-reading information.
The single characters are recognized from the corresponding feature vectors and arranged in time order, which yields the lip-reading information of the subject reading the random verification code.
Referring to Fig. 12, another object of the present invention is to provide a liveness detection device for detecting and recognizing the face image of a subject to determine whether it is a live person, including:
an acquisition module 1 for capturing a video of the subject reading a random verification code;
a feature extraction module 2 for obtaining the feature vector of the lip image sequence of the lip region in each frame of the video;
a lip-reading recognition module 3 for calling a pre-trained lip-reading recognition model with the feature vectors of consecutive frames to recognize the subject's lip-reading information;
a detection module 4 for detecting whether the lip-reading information matches the characters of the random verification code; when the lip-reading information matches the characters of the random verification code, the subject is determined to be a live body.
Referring to Fig. 13, the structural block diagram of the feature extraction module in a liveness detection device provided by the present invention includes:
a preprocessing unit 21 for preprocessing each frame of the video to obtain a video image of a preset specification;
a segmentation and alignment unit 22 for segmenting the preprocessed video image to obtain the lip region and applying an affine transformation to the lip region to obtain an aligned lip image sequence;
a feature extraction unit 23 for computing the feature vector of the lip image sequence with a feature extraction algorithm based on a stacked convolutional independent subspace analysis network.
Preferably, the pre-trained lip-reading recognition model is specifically:
The training set contains several characters, and the random verification code is generated at random from the training set; each character in the training set is trained based on a hidden Markov model to obtain a corresponding unique prediction model sequence, the hidden Markov model containing N prediction models, N >= 1; the state transition matrix and Gaussian mixture model of the hidden Markov model are computed to obtain the pre-trained lip-reading recognition model. The time-series segment corresponding to each character is generated by the hidden Markov model; each moment in the time-series segment corresponds to a hidden state, the changes of the hidden states are represented by the state transition matrix, and each hidden state also corresponds to an observation model, which is modeled as a Gaussian mixture model.
Referring to Fig. 14, the structural block diagram of the lip-reading recognition module in a liveness detection device provided by the present invention includes:
a recognition unit 31 for matching the feature vectors of the lip image sequence against the hidden Markov model, computing the optimal state path of the feature vectors of the lip image sequence, and combining the state path according to the time series of image acquisition into a character path to recognize single characters;
a combination unit 32 for combining the single characters recognized from the feature vectors in time order to generate the subject's lip-reading information.
Referring to Fig. 15, the structural block diagram of the detection module in a liveness detection device provided by the present invention includes:
a detection unit 41 for detecting whether the lip-reading information of the subject matches the characters of the random verification code;
a first confirmation unit 42 for determining that the subject is a live body when the lip-reading information matches the characters of the random verification code;
a second confirmation unit 43 for determining that the subject is not a live body when the lip-reading information does not match the characters of the random verification code.
In summary, the present invention captures a video of the subject reading a random verification code and preprocesses, segments, and aligns the video in turn so as to extract the feature vectors of the subject's lip image sequence; a pre-trained lip-reading recognition model recognizes the lip-reading information corresponding to the feature vectors of the lip image sequence; the lip-reading information is compared with the characters of the random verification code, and whether the subject is a live body is determined by whether the characters match completely. Compared with traditional discrimination methods, the lip features are captured by the same camera that captures the face, so no additional hardware is required, which reduces the cost of the verification system and makes it more convenient to use; having the subject read a random verification code directly determines whether the subject is a live body, which not only improves the security and anti-spoofing capability of the recognition system but also improves the efficiency of liveness verification. The present invention therefore effectively overcomes various shortcomings of the prior art and has high industrial value.
The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Anyone skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by persons of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present invention shall be covered by the claims of the present invention.

Claims (10)

1. A liveness detection method for detecting and recognizing the face image of a subject to determine whether it is a live person, the liveness detection method comprising:
capturing a video of the subject reading a random verification code;
obtaining the feature vector of the lip image sequence of the lip region in each frame of the video;
calling a pre-trained lip-reading recognition model with the feature vectors of consecutive frames to recognize the subject's lip-reading information;
detecting whether the lip-reading information matches the characters of the random verification code; when the lip-reading information matches the characters of the random verification code, the subject is determined to be a live body.
2. The liveness detection method according to claim 1, wherein the step of obtaining the feature vector of the lip image sequence of the lip region in each frame of the video comprises:
preprocessing each frame of the video to obtain a video image of a preset specification;
segmenting the video image of the preset specification to obtain the lip region, and applying an affine transformation to the lip region to obtain an aligned lip image sequence;
computing the feature vector of the lip image sequence with a feature extraction algorithm based on a stacked convolutional independent subspace analysis network.
3. The liveness detection method according to claim 1, wherein the pre-trained lip-reading recognition model comprises:
a training set containing several characters, the random verification code being generated at random from the training set; each character in the training set is trained based on a hidden Markov model to obtain a corresponding unique prediction model sequence, the hidden Markov model containing N prediction models, N >= 1; the state transition matrix and Gaussian mixture model of the hidden Markov model are computed to obtain the pre-trained lip-reading recognition model; wherein the time-series segment corresponding to each character is generated by the hidden Markov model, each moment in the time-series segment corresponds to a hidden state, the changes of the hidden states are represented by the state transition matrix, and each hidden state also corresponds to an observation model, which is modeled as a Gaussian mixture model.
4. The liveness detection method according to claim 1, wherein the step of calling a pre-trained lip-reading recognition model with the feature vectors to recognize the subject's lip-reading information comprises:
matching the feature vectors of the lip image sequence against the hidden Markov model, computing the optimal state path of the feature vectors of the lip image sequence along the time series, and recognizing single characters from the optimal state path;
combining the single characters recognized from the feature vectors in time order to generate the subject's lip-reading information.
5. The liveness detection method according to claim 1, wherein the step of detecting whether the lip-reading information matches the characters of the random verification code and determining that the subject is a live body when the lip-reading information matches the characters of the random verification code comprises:
detecting whether the lip-reading information of the subject matches the characters of the random verification code; when the lip-reading information matches the characters of the random verification code, determining that the subject is a live body; when the lip-reading information does not match the characters of the random verification code, determining that the subject is not a live body.
6. A liveness detection device for detecting and recognizing the face image of a subject to determine whether it is a live person, comprising:
an acquisition module for capturing a video of the subject reading a random verification code;
a feature extraction module for obtaining the feature vector of the lip image sequence of the lip region in each frame of the video;
a lip-reading recognition module for calling a pre-trained lip-reading recognition model with the feature vectors of consecutive frames to recognize the subject's lip-reading information;
a detection module for detecting whether the lip-reading information matches the characters of the random verification code; when the lip-reading information matches the characters of the random verification code, the subject is determined to be a live body.
7. The liveness detection device according to claim 6, wherein the feature extraction module comprises:
a preprocessing unit for preprocessing each frame of the video to obtain a video image of a preset specification;
a segmentation and alignment unit for segmenting the video image of the preset specification to obtain the lip region and applying an affine transformation to the lip region to obtain an aligned lip image sequence;
a feature extraction unit for computing the feature vector of the lip image sequence with a feature extraction algorithm based on a stacked convolutional independent subspace analysis network.
8. The liveness detection device according to claim 6, wherein the pre-training of the lip-reading recognition module specifically comprises:
a training set containing several characters, the random verification code being generated at random from the training set;
each character in the training set being trained based on a hidden Markov model to obtain a corresponding unique prediction model sequence, the hidden Markov model containing N prediction models, N >= 1;
the state transition matrix and Gaussian mixture model of the hidden Markov model being computed to obtain the pre-trained lip-reading recognition model;
wherein the time-series segment corresponding to each character is generated by the hidden Markov model; each moment in the time-series segment corresponds to a hidden state, the changes of the hidden states are represented by the state transition matrix, and each hidden state also corresponds to an observation model, which is modeled as a Gaussian mixture model.
9. The liveness detection device according to claim 6, wherein the lip-reading recognition module comprises:
a recognition unit for matching the feature vectors of the lip image sequence against the hidden Markov model, computing the optimal state path of the feature vectors of the lip image sequence along the time series, and recognizing single characters from the optimal state path;
a combination unit for combining the single characters recognized from the feature vectors in time order to generate the subject's lip-reading information.
10. The liveness detection device according to claim 6, wherein the detection module specifically comprises:
a detection unit for detecting whether the lip-reading information of the subject matches the characters of the random verification code;
a first confirmation unit for determining that the subject is a live body when the lip-reading information matches the characters of the random verification code;
a second confirmation unit for determining that the subject is not a live body when the lip-reading information does not match the characters of the random verification code.
CN201611039845.7A 2016-11-22 2016-11-22 Liveness detection method and device Pending CN106778496A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611039845.7A CN106778496A (en) 2016-11-22 2016-11-22 Liveness detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611039845.7A CN106778496A (en) 2016-11-22 2016-11-22 Liveness detection method and device

Publications (1)

Publication Number Publication Date
CN106778496A true CN106778496A (en) 2017-05-31

Family

ID=58973805

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611039845.7A Pending CN106778496A (en) 2016-11-22 2016-11-22 Biopsy method and device

Country Status (1)

Country Link
CN (1) CN106778496A (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101046959A (en) * 2007-04-26 2007-10-03 Shanghai Jiao Tong University Identity identification method based on lip-reading characteristics
CN104966086A (en) * 2014-11-14 2015-10-07 深圳市腾讯计算机系统有限公司 Living body identification method and apparatus
CN104834900A (en) * 2015-04-15 2015-08-12 常州飞寻视讯信息科技有限公司 Method and system for vivo detection in combination with acoustic image signal
CN104808794A (en) * 2015-04-24 2015-07-29 北京旷视科技有限公司 Method and system for inputting lip language
CN106096519A (en) * 2016-06-01 2016-11-09 腾讯科技(深圳)有限公司 Live body discrimination method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YUQINGW520: "Research on lip-reading recognition algorithms in high-security face recognition systems", HTTPS://WWW.DOC88.COM/P-2068920386740.HTML *
Ren Yuqiang et al.: "Research on lip-reading recognition algorithms in high-security face recognition systems", Application Research of Computers *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107992812A (en) * 2017-11-27 2018-05-04 北京搜狗科技发展有限公司 A kind of lip reading recognition methods and device
CN107992833A (en) * 2017-12-08 2018-05-04 北京小米移动软件有限公司 Image-recognizing method, device and storage medium
CN108416595A (en) * 2018-03-27 2018-08-17 百度在线网络技术(北京)有限公司 Information processing method and device
WO2019223102A1 (en) * 2018-05-22 2019-11-28 平安科技(深圳)有限公司 Method and apparatus for checking validity of identity, terminal device and medium
CN108470169A (en) * 2018-05-23 2018-08-31 国政通科技股份有限公司 Face identification system and method
CN109271915A (en) * 2018-09-07 2019-01-25 北京市商汤科技开发有限公司 False-proof detection method and device, electronic equipment, storage medium
CN109409204A (en) * 2018-09-07 2019-03-01 北京市商汤科技开发有限公司 False-proof detection method and device, electronic equipment, storage medium
CN109271915B (en) * 2018-09-07 2021-10-08 北京市商汤科技开发有限公司 Anti-counterfeiting detection method and device, electronic equipment and storage medium
CN109461437A (en) * 2018-11-28 2019-03-12 平安科技(深圳)有限公司 The verifying content generating method and relevant apparatus of lip reading identification
CN109461437B (en) * 2018-11-28 2023-05-09 平安科技(深圳)有限公司 Verification content generation method and related device for lip language identification
CN111339806A (en) * 2018-12-19 2020-06-26 马上消费金融股份有限公司 Training method of lip language recognition model, living body recognition method and device
CN111339806B (en) * 2018-12-19 2021-04-13 马上消费金融股份有限公司 Training method of lip language recognition model, living body recognition method and device
CN110113319A (en) * 2019-04-16 2019-08-09 深圳壹账通智能科技有限公司 Identity identifying method, device, computer equipment and storage medium
CN112287722A (en) * 2019-07-23 2021-01-29 北京中关村科金技术有限公司 In-vivo detection method and device based on deep learning and storage medium
CN112287723A (en) * 2019-07-23 2021-01-29 北京中关村科金技术有限公司 In-vivo detection method and device based on deep learning and storage medium
CN112417925A (en) * 2019-08-21 2021-02-26 北京中关村科金技术有限公司 In-vivo detection method and device based on deep learning and storage medium
CN110807356A (en) * 2019-09-15 2020-02-18 成都恒道智融信息技术有限公司 Living body detection method based on image lip language identification verification code
CN112560554A (en) * 2019-09-25 2021-03-26 北京中关村科金技术有限公司 Lip language-based living body detection method, device and storage medium
CN111401134A (en) * 2020-02-19 2020-07-10 北京三快在线科技有限公司 Living body detection method, living body detection device, electronic apparatus, and storage medium
CN112257561A (en) * 2020-10-20 2021-01-22 广州云从凯风科技有限公司 Human face living body detection method and device, machine readable medium and equipment
CN112633211A (en) * 2020-12-30 2021-04-09 海信视像科技股份有限公司 Service equipment and man-machine interaction method

Similar Documents

Publication Publication Date Title
CN106778496A (en) Liveness detection method and device
CN110348319B (en) Face anti-counterfeiting method based on face depth information and edge image fusion
CN111563417B (en) Pyramid structure convolutional neural network-based facial expression recognition method
CN108182409A (en) Biopsy method, device, equipment and storage medium
CN104504362A (en) Face detection method based on convolutional neural network
CN106778506A (en) A kind of expression recognition method for merging depth image and multi-channel feature
CN107330444A (en) A kind of image autotext mask method based on generation confrontation network
CN106599883A (en) Face recognition method capable of extracting multi-level image semantics based on CNN (convolutional neural network)
CN106295506A (en) A kind of age recognition methods based on integrated convolutional neural networks
CN107180234A (en) The credit risk forecast method extracted based on expression recognition and face characteristic
CN106570491A (en) Robot intelligent interaction method and intelligent robot
CN109377429A (en) A kind of recognition of face quality-oriented education wisdom evaluation system
CN109117755A (en) A kind of human face in-vivo detection method, system and equipment
CN106960181A (en) A kind of pedestrian's attribute recognition approach based on RGBD data
CN110796101A (en) Face recognition method and system of embedded platform
Paul et al. Extraction of facial feature points using cumulative histogram
CN112966736B (en) Vehicle re-identification method based on multi-view matching and local feature fusion
CN112200176B (en) Method and system for detecting quality of face image and computer equipment
CN114662497A (en) False news detection method based on cooperative neural network
CN110110603A (en) A kind of multi-modal labiomaney method based on facial physiologic information
CN106599834A (en) Information pushing method and system
Shi et al. Visual speaker authentication by ensemble learning over static and dynamic lip details
CN113343198B (en) Video-based random gesture authentication method and system
CN105160285A (en) Method and system for recognizing human body tumble automatically based on stereoscopic vision
RU2005100267A (en) METHOD AND SYSTEM OF AUTOMATIC VERIFICATION OF THE PRESENCE OF A LIVING FACE OF A HUMAN IN BIOMETRIC SECURITY SYSTEMS

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 5th floor, Building 106, West Section of Jinkai Avenue, Yubei District, Chongqing 401122

Applicant after: Chongqing Zhongke Yuncong Technology Co., Ltd.

Address before: 6th floor, Block B, Mercury Science and Technology Building, Huangshan Avenue, New North Zone, Chongqing 401122

Applicant before: CHONGQING ZHONGKE YUNCONG TECHNOLOGY CO., LTD.