CN105138954B - Automatic image screening, query, and identification system - Google Patents
Automatic image screening, query, and identification system
- Publication number: CN105138954B
- Application number: CN201510406384.1A
- Authority
- CN
- China
- Prior art keywords
- face
- eyes
- modeling
- module
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/5838—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using colour
Landscapes
- Engineering & Computer Science (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Library & Information Science (AREA)
- General Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Collating Specific Patterns (AREA)
Abstract
This patent discloses an automatic image screening and identification system comprising an image input device, a database system, a storage system, a face detection system, a face registration system, and a face comparison system. The image input device accepts real-time video input, played-back video input, and batch photo import. The database system stores personal portrait information, including personal file records, certificate photos, daily-life photos, group photos, and the corresponding feature values. The storage system saves the portrait information produced by 3D modeling, i.e., the feature values, together with source tags. The face detection system reads images from the image input device, performs face detection, and collects portraits that satisfy the modeling conditions. The face registration system performs face modeling on photos supplied by the face detection system and registers the modeling information with the storage system. The face identification system compares feature values of the portrait information in the storage system against the database system, retrieving matches so that a person's true identity can be queried from a captured portrait.
Description
Technical field
The present invention relates to a computer-aided biometric recognition system, and more particularly to an automatic image screening, query, and identification system.
Background technique
With rapid economic development the population has become increasingly concentrated, and, particularly given the tense anti-terrorism situation, we face public-safety challenges of every kind. At the same time, the rapid development of modern commerce, logistics, and networking makes the auditing and confirmation of a counterparty's identity equally urgent.
With continued scientific and technological progress, face recognition supported by cloud storage, cloud computing, and large-database mining is undoubtedly the best means of remote identity confirmation.
For remote face recognition, dynamic (live-video) recognition is undoubtedly the best and most reliable approach. Constrained, however, by current hardware, network capacity, and cost, dynamic online face recognition is still difficult to deploy at large scale, so a recognition mode that combines dynamic capture with static comparison is the optimal choice.
Summary of the invention
This patent mainly combines two major technologies: face-recognition retrieval and big-data storage.
The principal feature of this face recognition technology is dynamic acquisition with static identification, joining dynamic and static processing. A camera uploads real-time dynamic video over the network; the face identification system automatically collects frames containing portraits that can be modeled, converts them to photos, models the face images, and transfers the models to the back-end storage system. A big-data query then performs a concurrent 1:N static comparison against the portraits stored in the database system, retrieving matches so that a suspect's true identity can be queried from a captured portrait.
In the technical field of face recognition, a face is composed of six regions: forehead, eyebrows, eyes, nose, mouth, and cheeks. The relative positions of these six regions, together with the size, relative proportions, and features of the organs within them, constitute the identity information contained in each face. By extracting the identity information contained in each face, each person's identity can be recognized.
The main facial feature values are taken from the region bounded above by the eyebrows and below by the jaw. According to the anatomy of the human face, within this specific triangular region the size, position, and relative proportions of certain organs are unique and invariant, while the proportions of the other organs, although they change, remain relatively stable. Following this principle, the feature values can be extracted once the size, position, and relative proportions of the relevant organs within the facial feature region are found.
The facial-region feature analysis used by this patent's face recognition technology fuses computer image processing with the principles of biostatistics: computer image processing extracts portrait feature points from the video image, and biostatistical analysis of those points establishes a 3D mathematical model, i.e., a face feature template. The feature values already generated in the photo library are then compared with the feature values generated from the collected face, yielding a similarity score from which it can be determined whether the two are the same person.
Specifically, the present invention is an automatic image screening, query, and identification system comprising an image input device, a database system, a storage system, a face detection system, a face registration system, and a face comparison system.
The image input device accepts real-time video input (dynamic), played-back video input (dynamic), and batch photo import (static);
The database system stores personal portrait information, including personal file records (identity card, driver's license, household registration, social security, passport, etc.), certificate photos (identity card, driver's license, social security card, passport, and similar), daily-life photos, group photos, and the corresponding feature values;
The storage system saves the portrait information produced by 3D modeling (i.e., the feature values) together with source tags;
The face detection system reads the images supplied by the image input device, performs face detection, and collects portraits that satisfy the modeling conditions;
The face registration system performs face modeling on photos supplied by the face detection system and registers the modeling information with the storage system;
The face comparison system compares feature values of the portrait information in the storage system against the database system, retrieving matches so that a person's true identity can be queried from a captured portrait.
The automatic image screening, query, and identification system joins dynamic and static processing: the front end is a dynamic portrait acquisition system, and the back end is a 4000-channel concurrent comparison system for still photos;
The dynamic face capture engine includes a screening module, a frame-stitching image-synthesis module, a background processing module, and a track-following module;
The interference reduction engine includes a lighting-interference recovery module, an ethnicity identification module, an age recovery module, an expression recovery module, a posture recovery module, and an occlusion recovery module.
The face modeling engine takes the collected 2D portrait and, from the fixed attributes of the facial contour (size, proportion, relative position, distance), unfolds it over a 3D facial-organ template; the corresponding geometric relationships form identification parameters and data, from which the mutually associated geometric vectors (feature values) are calculated, i.e., the 3D feature values are generated.
The data registration engine attaches source tags (time, camera number, and similar information) to the modeling data and stores it through the database's standard registration interface so that it can be queried.
The face comparison system is a "three-in-one" feature-value comparison engine containing three feature-value comparison modules: (1) a 12-24 inter-eye-pixel comparison module (400 points); (2) a 24-40 inter-eye-pixel comparison module (1,500 points); (3) a 40-60 inter-eye-pixel comparison module (4,000 points). The system automatically counts the pixels between the eyes of a face and, according to that count, automatically selects the corresponding one of the three modules; the three modules assembled together constitute the "three-in-one" comparison engine.
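The automatic selection among the three modules by inter-eye pixel count can be sketched as follows; the returned labels are illustrative, not names from the patent.

```python
def select_comparison_module(inter_eye_pixels):
    """Pick one of the three comparison modules by the number of
    pixels between the eyes (ranges and point counts per the text)."""
    if 12 <= inter_eye_pixels < 24:
        return "module 1 (400 points)"
    if 24 <= inter_eye_pixels < 40:
        return "module 2 (1500 points)"
    if 40 <= inter_eye_pixels <= 60:
        return "module 3 (4000 points)"
    raise ValueError("inter-eye distance outside the modelable 12-60 px range")
```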
The screening module comprises the following steps:
Step 1: check the image from the video input device for conformity with the basic face template, i.e., the basic-face-template filtering method. The triangular region formed by the two eyes and the nose is the most basic feature of a face; qualifying images proceed to step 2.
Step 2: compare the facial angle in the video input device with a standard portrait; images whose angular deviation lies within ±25° left/right, ±15° up/down, and ±10° of rotation proceed to step 3.
Step 3: both eyes must be visible, and the inter-eye pixel count is checked. From the total pixel count of the video input device and the fraction of the camera picture occupied by the inter-eye region, the pixel count of the inter-eye region is calculated; it must be greater than 12. Faces satisfying all the above conditions are acquired.
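The three-step gate can be sketched as a single predicate. The argument names are illustrative; the angle and pixel thresholds are those stated in the steps above.

```python
def passes_screening(has_face_triangle, yaw_deg, pitch_deg, roll_deg,
                     eyes_visible, inter_eye_pixels):
    """Three-step screening gate: basic template, pose-angle limits,
    and minimum inter-eye pixel count."""
    if not has_face_triangle:            # step 1: eyes + nose triangle found
        return False
    if abs(yaw_deg) > 25 or abs(pitch_deg) > 15 or abs(roll_deg) > 10:
        return False                     # step 2: pose within ±25/±15/±10 degrees
    if not eyes_visible or inter_eye_pixels <= 12:
        return False                     # step 3: both eyes visible, >12 px apart
    return True
```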
The track-following module: the system splits the video stream into frames. Starting from the first frame containing a comparable face, and for the following 2 seconds, the system automatically cross-verifies the frames, selects the two clearest portrait frames out of the 50 frames (2 s × 25 frames/s) as comparison frames, stitches them, and uses the synthesized result as the comparison source in the comparison module. At the same time the collected portrait is tagged and tracked at the front end with an algorithm combining appearance with a motion model; once it is confirmed to be the same person, no second face acquisition is performed. This greatly saves back-end CPU, transmission bandwidth, and storage hardware resources.
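Selecting the comparison frames within the 2-second window reduces to keeping the two best frames under some sharpness score. A sketch, with the scoring function left as a parameter since the patent does not specify one (variance of a Laplacian-filtered image is a common choice):

```python
def pick_two_sharpest(frames, sharpness):
    """From the ~50 buffered frames (2 s at 25 fps) keep the two with
    the highest sharpness score; `sharpness` is any scoring function."""
    ranked = sorted(frames, key=sharpness, reverse=True)
    return ranked[:2]
```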
Detailed description of the invention
Fig. 1 is a structural schematic of part of a recognized face.
Fig. 2 and Fig. 3 are functional schematics of the background processing module.
Fig. 4 is a functional schematic of the lighting-interference recovery module.
Fig. 5 is a schematic of the templates for different ethnic groups.
Fig. 6 is a schematic of expression recovery.
Fig. 7 and Fig. 8 are schematics of posture recovery.
Fig. 9 is a schematic of occlusion recovery.
Fig. 10 is a schematic of face modeling.
Fig. 11 is a schematic of twin identification.
Fig. 12 is a schematic of the system structure.
Fig. 13 is the system flow chart.
Legend:
1 - eye bag
2 - tear trough and sagging malar fat pad
3 - nasolabial fold
4 - marionette line
5 - jaw contour line
Specific embodiment
Specific embodiments are described below with reference to the accompanying drawings.
One. Overview of the feature points
As stated in the Summary of the invention, the feature points are the core of this patent. Within the specific triangular region of the face, the size, position, and relative proportions of certain organs are unique and invariant, while those of the other organs, although they change, remain relatively stable. According to the degree to which size, position, and proportion vary, this patent classifies the features into three major categories.
A) Unique, invariant relations:
inter-eye spacing;
the positional proportion between the pupils and the tip of the nose bridge;
the curvature of the nose bridge;
the ratio of the nose-bridge arc length to the width of the nose;
the geometry of the cheekbones;
the distance from the glabella to the upper lip;
the spacing of the eye corners.
B) Features that change regularly with age; from these features the trend of change in the corresponding parts of the face can be deduced and used for recovery:
drooping of the eye corners;
eye bags;
drooping of the lip corners;
change of the nasolabial folds (the nasolabial fold is the pair of lines extending downward beside the wings of the nose; it is a typical sign of skin-tissue aging that causes the skin surface to sink, and frequent make-up, laughing, and neglect of skin care can all produce it).
C) Extremely unstable parts:
the region between the earlobe and the jaw, commonly called the jaw contour line.
In summary, each typical part contributes not a single feature point but a group of feature points forming a functional curve, and these ultimately form the set of facial feature points; Fig. 1 marks the above parts of the face structure.
Two. Detailed description of each module
The automatic image screening, query, and identification system comprises a face capture engine, an interference reduction engine, a face modeling engine, a face comparison engine, a photo feature library, and a template library.
1. Face capture engine
The face capture engine first screens the acquired images; the screening module comprises the following three steps. Step 1: check the camera image for conformity with the basic face template (the triangle formed by the two eyes and the nose), i.e., the basic-face-template filtering method; qualifying images proceed to step 2. Step 2: compare the facial angle in the video with a standard portrait; images whose angular deviation lies within ±25° left/right, ±15° up/down, and ±10° of rotation proceed to step 3. Step 3: both eyes must be visible, and the inter-eye pixel count is checked; from the total pixel count of the video input device and the fraction of the camera picture occupied by the inter-eye region, the pixel count of the inter-eye region is calculated, and it must exceed 12. Faces satisfying all the above conditions are acquired.
The above screening can use the cascade-classifier method: the image under test passes through each classifier in sequence, and only if it passes a stage is it judged a qualifying object and admitted to the next classifier. For efficiency, the strictest classifier can be placed at the head of the cascade, which reduces the number of matching operations.
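The cascade described above can be sketched as a chain of stages in which a candidate is rejected at the first failing stage, so placing the strictest (most-rejecting) stage first minimises total stage evaluations. The stage functions below are placeholders, not the patent's classifiers.

```python
def cascade_filter(candidates, classifiers):
    """Pass each candidate through the classifier stages in order;
    `all` short-circuits, so a candidate is dropped at the first
    stage it fails."""
    survivors = []
    for c in candidates:
        if all(stage(c) for stage in classifiers):
            survivors.append(c)
    return survivors
```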
The frame-stitching image-synthesis module includes frame splitting and frame stitching. The first two seconds of a video stream judged by the screening module to contain a face are decomposed into frames, and frame comparison is performed every second: concretely, the usable pixels of each acquired frame are compared, and the frames with the most usable pixels are picked as stitching frames, yielding the two clearest image frames within the two seconds.
The frame-stitching technique combines these two clearest image frames; to guard against frame loss during transmission, each frame serves as the other's backup.
In actual operation, frame splitting, frame comparison, and frame stitching can interact repeatedly with the screening module.
The background processing module separates a complex background from the face; it must therefore first determine the boundary of the face before the background can be distinguished. See Fig. 2 and Fig. 3.
Track-following module: the system splits the video stream into frames. Starting from the first frame containing a comparable face, and for the following 2 seconds, the system automatically cross-verifies the frames, selects the two clearest portrait frames out of the 50 frames (2 s × 25 frames/s) as comparison frames, stitches them, and uses the synthesized result as the comparison source in the comparison module. At the same time the collected portrait is tagged and tracked at the front end with an algorithm combining appearance with a motion model; once it is confirmed to be the same person, no second face acquisition is performed. This greatly saves back-end CPU, transmission bandwidth, and storage hardware resources.
2. Interference reduction engine
The second major module is the interference reduction engine, which corrects and restores the captured face photos. Specifically, the face restoration engine includes:
2.1 Lighting-interference recovery: mainly two kinds of lighting interference, the half-shadowed ("yin-yang") face and backlighting.
A half-shadowed face is corrected using the principle of facial symmetry.
Backlighting is corrected by comparing the brightness (light intensity) of the background and the portrait, as in Fig. 4, where the X axis represents the gray level and the Y axis the proportion.
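A minimal sketch of the symmetry-based correction for a half-shadowed face, under the assumptions that the face crop is a 2-D grid of grayscale values with an even number of columns and that the vertical midline is already aligned; a real implementation would blend rather than hard-copy the mirrored half.

```python
def fix_yin_yang(face):
    """Repair a half-shadowed face by mirroring the brighter half
    onto the darker one, exploiting facial left-right symmetry."""
    w = len(face[0])
    left_sum = sum(v for row in face for v in row[:w // 2])
    right_sum = sum(v for row in face for v in row[w // 2:])
    out = []
    for row in face:
        if left_sum >= right_sum:         # left half is brighter: mirror it right
            half = row[:w // 2]
            out.append(half + half[::-1])
        else:                             # right half is brighter: mirror it left
            half = row[w // 2:]
            out.append(half[::-1] + half)
    return out
```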
2.2 Ethnicity identification
Ethnic groups are divided into yellow, white, black, and brown, identified by the basic facial contour features and skin color of the four groups, as in Fig. 5.
2.3 Age recovery
Age recovery applies to the category-B feature points defined in the feature-point overview (section One): for each such point a group of values varying within a certain range is generated and used as additional feature values, the positive and negative correction value B giving the band A ± B%. The larger the age difference between the newly taken photo and the photo in the library, the more this B value is enlarged.
In the later comparison stage these additional feature values carry the same weight as the original feature parameters: if, for example, a point's original feature value differs from that of the compared photo but a value in this additional group matches, the agreement of the comparison is improved all the same.
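The age-tolerant band test described above can be sketched as follows; the function name and percentage handling are illustrative assumptions.

```python
def age_tolerant_match(stored, probe, b_percent):
    """A stored feature value A matches if the probe value falls
    anywhere in the band A*(1 ± B%); B grows with the age gap
    between the enrolment photo and the new capture."""
    lo = stored * (1 - b_percent / 100)
    hi = stored * (1 + b_percent / 100)
    return lo <= probe <= hi
```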
2.4 Expression recovery
Based on the principles of physiological anatomy, the displacement of each point on the skin surface under an expression is simulated about its rest position and corrected by a correction-value method with a dedicated algorithm. Expression recovery means that, within a certain range, an expression causing little deformation can be reverted to a neutral expression, as in Fig. 6.
2.5 Posture recovery
Within ±25° left/right, ±15° up/down, and ±10° of rotation, and with both eyes visible, this patent restores the photo to the frontal pose: the photo is adjusted until the two eyes are symmetric about the horizontal coordinate axis. See Fig. 7 and Fig. 8.
2.6 Occlusion recovery
Occluders such as glasses, bangs, scarves, turtlenecks, and hat brims are handled by symmetric compensation of the missing region or by mean-value compensation, as in Fig. 9. For example, if the left part of the face is occluded, it is corrected from the right half by the left-right symmetry principle; if the chin is hidden by a turtleneck, the mean chin value for the subject's ethnic group is used as the feature value of that part.
3. Face modeling engine
The third major engine takes the essential features of a collected 2D portrait that satisfies the modeling conditions and, from the fixed attributes of the facial contour such as size, position, and distance, unfolds it over a 3D facial-organ template; the corresponding geometric relationships form identification parameters and data, from which the mutually associated geometric vectors (feature values) are calculated, i.e., the 3D feature values are generated. See Fig. 10.
3D modeling resists changes in lighting, skin color, facial hair, hair style, glasses, expression, and posture, and is therefore highly reliable.
4. Face comparison engine
Common face-recognition methods include Gabor wavelets, the AdaBoost learning algorithm, and support vector machines. This patent compares facial feature points: the corresponding module of the "three-in-one" comparison engine is first selected by the inter-eye pixel count, and the comparison is then performed by a deep-learning neural-network algorithm. Deep learning is a structured-information algorithm that simulates human neural circuitry ("neural networks") on a computer; by combining low-level features over multiple layers it forms more abstract high-level features, learning features automatically without manual feature selection. The deep-learning neural-network algorithm improves both the accuracy and the speed of analysis precisely because it imitates the multi-level analysis of the human brain.
From " one, characteristic point summary " as can be seen that the ratio variation of each section, some variations are small, and some variations are big, therefore
Algorithm is different in fact in the weight for determining characteristic point.
Point is fewer, and the specific gravity that A class point accounts for is bigger.It is basic to keep because they are in the whole body of people for A category feature point
Specific proportionate relationship, and there is uniqueness, so occupying biggish weight when acquiring face essential characteristic, this kind of point exists
In total on about 4000 points, occupy 56% ratio, and weight distribution is 60% or more;For B category feature point, although with people
All one's life, can change, but it is this variation be it is foreseeable, therefore, compare when, redundancy deduction, this kind of point can be carried out
Account for about 32% always to count, weight distribution is 30% or so, and last a kind of point, such as facial contour line, with age and ring
The variation in border, it may occur that frequent variation accounts for about 12% always to count, because changing greatly, weight distribution minimum only has 10%.
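The stated weight split can be sketched as a weighted combination of per-class similarity scores; the function name and the way per-class similarities are assumed to arrive (each already normalised to [0, 1]) are illustrative.

```python
def weighted_score(sim_a, sim_b, sim_c, weights=(0.60, 0.30, 0.10)):
    """Combine per-class similarities using the weight split from the
    text: category A about 60%, category B about 30%, category C 10%."""
    return weights[0] * sim_a + weights[1] * sim_b + weights[2] * sim_c
```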
Following the above principle we obtain the essential feature-point groups of the face and a portion of the feature values; at times, however, these deviate and cannot accurately reflect the uniqueness of the face.
This patent therefore also adopts a compensatory algorithm, the face surface-integration method. Research shows that if the surface area around these roughly 4,000 points is calculated, the probability that two people's surface areas coincide is far smaller than the probability that feature values repeat. The above feature points are therefore interconnected so that every three adjacent points form a vertex-up triangle (approximately, not strictly, equilateral); when the points are taken, the positional factors of this triangular geometry have already been considered, guaranteeing that these triangles are obtained. The triangles also carry weight shares, on the same principle as the feature points. Using the Gauss theorem, the areas obtained for the three groups of triangles A, B, and C are computed and appended to the feature-value parameters as redundancy values. The face surface-integration method further improves the accuracy of face recognition and, in actual operation, can even distinguish twins, as in Fig. 11.
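One concrete way to compute such per-triangle areas, assuming the three side lengths between adjacent feature points are already known, is Heron's formula; this is a stand-in illustration, not the patent's stated computation, and the function names are invented.

```python
import math

def heron_area(a, b, c):
    """Area of a triangle from its three side lengths (Heron's formula)."""
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def surface_features(triangles):
    """One redundancy feature value per feature-point triangle,
    given as (a, b, c) side-length triples."""
    return [heron_area(a, b, c) for a, b, c in triangles]
```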
The face comparison engine compares the 3D feature values generated by modeling the portrait frames intercepted from the video stream with the 3D feature values generated from the photos in the photo library, and produces the comparison result.
Three. Specific embodiment of face recognition for big-data retrieval
As shown in Fig. 12 and Fig. 13, the database system stores personal portrait information at the scale of millions or even hundreds of millions of records (big-data storage). The storage system saves the modeled portrait information uploaded from each point together with its source tag. The face detection system reads the front-end images, performs face detection, and collects portraits that satisfy the modeling conditions. The face registration system performs face modeling and uploads the modeled portrait information over the network for registration in the storage system of the monitoring center. The face identification system compares the portrait information in the storage system with the portrait information in the database system through 4000-channel concurrent comparison retrieval, so that a person's true identity can be queried from a captured portrait.
Beneficial effects
Querying a person by a person: the collected portrait is compared with the portraits in the database, the identity information in the database is retrieved, and the target person is located accurately. A model database at the hundreds-of-millions scale, high recognition accuracy, and fast query speed are the three main beneficial effects of this patent. On the basis of the prior art, this patent combines several new models and algorithms, including face recognition joining dynamic and static processing, 4000-channel concurrent comparison retrieval, and big-data storage and query, and can quickly and accurately complete big-data queries of a person by a person.
Measured data from the Ministry of Public Security:
Single-machine static test, static database of 10,000,000 standard photos;
one-to-many static comparison (1:N): recognition rate > 98%, recognition speed < 2 seconds per person.
Claims (5)
1. An automatic image screening, query, and identification system, comprising an image input device, a database system, a storage system, a face detection system, a face registration system, and a face comparison system, wherein:
the image input device accepts real-time video input, played-back video input, and batch photo import;
the database system stores personal portrait information, including personal file records, certificate photos, daily-life photos, group photos, and the corresponding feature values;
the storage system saves the portrait information produced by 3D modeling, i.e., the feature values, together with source tags;
the face detection system reads the images supplied by the image input device, performs face detection, and collects portraits that satisfy the modeling conditions;
the face registration system performs face modeling on photos supplied by the face detection system and registers the modeling information with the storage system;
the face comparison system compares feature values of the portrait information in the storage system against the database system, retrieving matches so that a person's true identity can be queried from a captured portrait;
the face comparison system is a "three-in-one" feature-value comparison engine containing three feature-value comparison modules: (1) a 12-24 inter-eye-pixel comparison module, i.e., a 400-point module; (2) a 24-40 inter-eye-pixel comparison module, i.e., a 1,500-point module; (3) a 40-60 inter-eye-pixel comparison module, i.e., a 4,000-point module; the system automatically counts the pixels between the eyes of a face and, according to that count, automatically selects the corresponding one of the three modules; the three modules assembled together constitute the "three-in-one" comparison engine.
2. The automatic image screening, query, and identification system according to claim 1, characterized in that the face detection system includes a dynamic face capture engine and an interference reduction engine;
the dynamic face capture engine includes a screening module, a frame-stitching image-synthesis module, a background processing module, and a track-following module;
the interference reduction engine includes a lighting-interference recovery module, an ethnicity identification module, an age recovery module, an expression recovery module, a posture recovery module, and an occlusion recovery module.
3. The automatic image screening, query, and identification system according to claim 1, characterized in that the face registration system includes a face modeling engine and a data registration engine;
the face modeling engine takes the collected 2D portrait and, from the fixed attributes of the facial contour, including size, proportion, relative position, and distance, unfolds it over a 3D facial-organ template, the corresponding geometric relationships forming identification parameters and data from which the mutually associated geometric vectors are calculated, i.e., the 3D feature values are generated;
the data registration engine attaches source tags to the modeling data and stores it through the database's standard registration interface so that it can be queried.
4. The automatic image screening, query, and identification system according to claim 2, characterized in that the screening module comprises the following steps:
step 1: check the image from the video input device for conformity with the basic face template, i.e., the basic-face-template filtering method, the triangular region formed by the two eyes and the nose being the most basic feature of a face, qualifying images proceeding to step 2;
step 2: compare the facial angle in the video input device with a standard portrait, images whose angular deviation lies within ±25° left/right, ±15° up/down, and ±10° of rotation proceeding to step 3;
step 3: both eyes must be visible, and the inter-eye pixel count is checked; from the total pixel count of the video input device and the fraction of the camera picture occupied by the inter-eye region, the pixel count of the inter-eye region is calculated, and it must be greater than 12 and less than 60; faces whose inter-eye pixel count satisfies this condition are acquired.
5. The automatic image screening, query, and recognition system according to claim 2, characterized in that the track-following module frames the video stream: starting from the first frame that meets the face-acquisition standard, and for the following 2 seconds, the system automatically cross-verifies the frames against each other, selects the two clearest portrait frames out of the 50 frames as comparison frames, stitches and synthesizes them, and uses the result as the comparison source in the comparison module; at the same time, the collected portrait is identified and, based on an algorithm combined with a motion model, tracked and compared at the front end; if it is confirmed to be the same person, no second face acquisition is performed.
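Selecting the "two clearest" frames from the 2-second window might look like the following, using variance of a simple Laplacian as the clarity proxy (a common focus measure; the patent does not say how clarity is scored). The grayscale frames are plain nested lists here purely for illustration:

```python
def sharpness(gray):
    """Clarity score: variance of a 4-neighbour Laplacian over the interior.
    Flat frames score 0; frames with strong edges score high."""
    h, w = len(gray), len(gray[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x]
                   + gray[y][x - 1] + gray[y][x + 1]
                   - 4 * gray[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def pick_comparison_frames(frames):
    """From the ~50 frames captured in the 2-second window, keep the two
    clearest as the comparison frames for the comparison module."""
    ranked = sorted(frames, key=sharpness, reverse=True)
    return ranked[:2]
```

A production system would score decoded camera frames (e.g. NumPy arrays) the same way; the point of the sketch is only that "clearest" reduces to an ordering by some per-frame focus measure.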
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510406384.1A CN105138954B (en) | 2015-07-12 | 2015-07-12 | A kind of image automatic screening inquiry identifying system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105138954A CN105138954A (en) | 2015-12-09 |
CN105138954B true CN105138954B (en) | 2019-06-04 |
Family
ID=54724298
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510406384.1A Active CN105138954B (en) | 2015-07-12 | 2015-07-12 | A kind of image automatic screening inquiry identifying system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105138954B (en) |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105912997B (en) * | 2016-04-05 | 2019-05-28 | 福建兴宇信息科技有限公司 | Face recognition method and system |
CN106408658B (en) * | 2016-10-31 | 2021-06-15 | 北京旷视科技有限公司 | Face modeling method and device |
CN106778925B (en) * | 2016-11-03 | 2021-10-08 | 五邑大学 | Face recognition pose over-complete face automatic registration method and device |
CN108206929A (en) * | 2016-12-16 | 2018-06-26 | 北京华泰科捷信息技术股份有限公司 | A kind of contactless personnel information acquisition device and its acquisition method |
CN106778645B (en) * | 2016-12-24 | 2018-05-18 | 深圳云天励飞技术有限公司 | A kind of image processing method and device |
CN108171207A (en) * | 2018-01-17 | 2018-06-15 | 百度在线网络技术(北京)有限公司 | Face identification method and device based on video sequence |
CN108256479B (en) * | 2018-01-17 | 2023-08-01 | 百度在线网络技术(北京)有限公司 | Face tracking method and device |
CN108345847B (en) * | 2018-01-30 | 2021-03-30 | 一石数字技术成都有限公司 | System and method for generating label data of face image |
CN108399247A (en) * | 2018-03-01 | 2018-08-14 | 深圳羚羊极速科技有限公司 | A kind of generation method of virtual identity mark |
WO2019178711A1 (en) * | 2018-03-18 | 2019-09-26 | 广东欧珀移动通信有限公司 | Image processing method and apparatus, storage medium, and electronic device |
CN108764350A (en) * | 2018-05-30 | 2018-11-06 | 苏州科达科技股份有限公司 | Target identification method, device and electronic equipment |
CN108897777B (en) * | 2018-06-01 | 2022-06-17 | 深圳市商汤科技有限公司 | Target object tracking method and device, electronic equipment and storage medium |
CN109087157A (en) * | 2018-06-08 | 2018-12-25 | 成都第二记忆科技有限公司 | A kind of video-photographic works sale service system and method and business model |
CN108877001B (en) * | 2018-06-19 | 2021-08-24 | 北京金山安全软件有限公司 | Visitor information processing method and device and electronic equipment |
CN109255319A (en) * | 2018-09-02 | 2019-01-22 | 珠海横琴现联盛科技发展有限公司 | For the recognition of face payment information method for anti-counterfeit of still photo |
CN109214335A (en) * | 2018-09-04 | 2019-01-15 | 高新兴科技集团股份有限公司 | A kind of alarm method and equipment |
CN111241328A (en) * | 2018-11-28 | 2020-06-05 | 航天信息股份有限公司 | Identity authentication and identification service method and device, readable storage medium and electronic equipment |
CN109886360A (en) * | 2019-03-25 | 2019-06-14 | 山东浪潮云信息技术有限公司 | A kind of certificate photo Classification and Identification based on deep learning and detection method without a hat on and system |
CN110059576A (en) * | 2019-03-26 | 2019-07-26 | 北京字节跳动网络技术有限公司 | Screening technique, device and the electronic equipment of picture |
CN111127639A (en) * | 2019-12-30 | 2020-05-08 | 深圳小佳科技有限公司 | Cloud-based face 3D model construction method, storage medium and system |
CN111738742A (en) * | 2020-05-07 | 2020-10-02 | 广东电网有限责任公司 | Portrait data processing system for power customer service |
CN111680280B (en) * | 2020-05-20 | 2022-05-24 | 青岛黄海学院 | Computer portrait recognition system |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102201061A (en) * | 2011-06-24 | 2011-09-28 | 常州锐驰电子科技有限公司 | Intelligent safety monitoring system and method based on multilevel filtering face recognition |
CN104077804A (en) * | 2014-06-09 | 2014-10-01 | 广州嘉崎智能科技有限公司 | Method for constructing three-dimensional human face model based on multi-frame video image |
CN104091176A (en) * | 2014-07-18 | 2014-10-08 | 吴建忠 | Technology for applying figure and head portrait comparison to videos |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9262671B2 (en) * | 2013-03-15 | 2016-02-16 | Nito Inc. | Systems, methods, and software for detecting an object in an image |
2015-07-12: CN application CN201510406384.1A filed; granted as CN105138954B (status: Active)
Non-Patent Citations (1)
Title |
---|
3D Face Recognition Technology and Applications; Tian Qiang et al.; *Police Technology* (《警察技术》); 2014-12-31; full text |
Also Published As
Publication number | Publication date |
---|---|
CN105138954A (en) | 2015-12-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105138954B (en) | A kind of image automatic screening inquiry identifying system | |
CN105046219B (en) | A kind of face identification system | |
CN104951773B (en) | A kind of real-time face recognition monitoring system | |
CN108319953B (en) | Occlusion detection method and device, electronic equipment and the storage medium of target object | |
CN104866829B (en) | A kind of across age face verification method based on feature learning | |
US10509985B2 (en) | Method and apparatus for security inspection | |
Dibeklioğlu et al. | Combining facial dynamics with appearance for age estimation | |
CN104751108B (en) | Facial image identification device and facial image recognition method | |
Zafaruddin et al. | Face recognition: A holistic approach review | |
Jana et al. | Age estimation from face image using wrinkle features | |
CN104091176A (en) | Technology for applying figure and head portrait comparison to videos | |
KR20100134533A (en) | An iris and ocular recognition system using trace transforms | |
CN109934047A (en) | Face identification system and its face identification method based on deep learning | |
Paul et al. | Extraction of facial feature points using cumulative histogram | |
Ortega et al. | Dynamic facial presentation attack detection for automated border control systems | |
Ling et al. | Driver eye location and state estimation based on a robust model and data augmentation | |
CN111694980A (en) | Robust family child learning state visual supervision method and device | |
Sharma et al. | Indian face age database: A database for face recognition with age variation | |
Park | Face Recognition: face in video, age invariance, and facial marks | |
Chintalapati et al. | Illumination, expression and occlusion invariant pose-adaptive face recognition system for real-time applications | |
CN118071359B (en) | Financial virtual identity verification method and system | |
Ayari et al. | A review of gender separation of moving faces | |
Wang et al. | Face detection based on adaboost and face contour in e-learning | |
Xu et al. | CNN expression recognition based on feature graph | |
Xu et al. | Eye location using hierarchical classifier |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||