CN106056080A - Visualized biometric information acquisition device and acquisition method - Google Patents

Visualized biometric information acquisition device and acquisition method

Info

Publication number
CN106056080A
CN106056080A
Authority
CN
China
Prior art keywords
image
unit
average
gradation
palm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610375295.XA
Other languages
Chinese (zh)
Other versions
CN106056080B (en)
Inventor
车全宏
李治农
段松林
杨李木
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Entropy Technology Co Ltd
Original Assignee
DONGGUAN ZK ELECTRONIC TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by DONGGUAN ZK ELECTRONIC TECHNOLOGY Co Ltd filed Critical DONGGUAN ZK ELECTRONIC TECHNOLOGY Co Ltd
Priority to CN201610375295.XA priority Critical patent/CN106056080B/en
Publication of CN106056080A publication Critical patent/CN106056080A/en
Application granted granted Critical
Publication of CN106056080B publication Critical patent/CN106056080B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/13Sensors therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1347Preprocessing; Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/19Sensors therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/193Preprocessing; Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention provides a device for acquiring biometric information, comprising an acquisition unit, a display unit, a feature extraction unit, and a processing unit. The acquisition unit is used to acquire an image containing biometric information of an individual to be identified; the display unit is used to display the image acquired by the acquisition unit; the feature extraction unit is used to identify, from the image acquired by the acquisition unit, a feature area containing the biometric information of the individual to be identified; the processing unit is used to generate, on the display unit, a guide identifier representing the feature area according to the feature area identified by the feature extraction unit; and the guide identifier changes as the image acquired by the acquisition unit changes. The method provided by the invention simply and quickly provides directional guidance for a target containing biometric information to be acquired; operation is quick and clear, implementation is simple, cost is low, and the user experience is good.

Description

Visualized biometric information acquisition device and acquisition method
Technical field
The present application relates to the field of biometric recognition, and in particular to a visualized biometric information acquisition device and method.
Background art
Biometric recognition technology and related products have been widely used for user identification and verification/authentication. Products such as attendance recorders, smart locks in the security industry, access control terminals, elevator controls, passage-control products, parking lot products, and video surveillance products can all incorporate biometric identity recognition to achieve purposes such as time management, entry/exit security control, and people-flow statistics. Biometric recognition technology can also be integrated into large software systems and projects; in particular, these products can be applied in any environment requiring access control or security control, such as banks, enterprises and institutions, prisons, homes, and schools, and can even be used to build identity cards that integrate biometric features and to establish databases for searching for lost or abducted children.
Fingerprints are unique and easy to collect, and are therefore one of the most commonly used biometric features for identifying individuals. At the same time, because fingerprint collection is contact-based, the requirements on the finger during sampling are high: the dryness or wetness and the cleanliness of the finger surface all affect the collected fingerprint image, reducing recognition accuracy and giving a poor user experience. At present, one solution to this problem is to use biometric features that can be collected contactlessly to identify individuals, for example palm prints, palm vein images, facial images, facial skeleton images, and irises. Contactless acquisition effectively overcomes the drawbacks of the contact-based approach.
However, contactless acquisition also has certain problems. For example, the capture angle of the acquisition device is fixed, and the object to be captured must be within a suitable range for an effective, easily recognizable image to be collected. In practice, because of individual differences among the persons to be identified and the invisibility of the capture range, it is difficult to obtain an image that matches the image registered in the database, which greatly increases the difficulty of matching and comparison. Therefore, a new contactless acquisition method is urgently needed to solve this problem.
Summary of the invention
One of the technical problems solved by the present application is to provide a device for acquiring biometric information, comprising:
an acquisition unit for acquiring an image containing biometric information of an individual to be identified; a display unit for displaying the image obtained by the acquisition unit; a feature extraction unit for identifying, from the image obtained by the acquisition unit, a feature area containing the biometric information of the individual to be identified; and a processing unit for generating, on the display unit, a guide identifier representing the feature area according to the feature area identified by the feature extraction unit; wherein the guide identifier changes as the image obtained by the acquisition unit changes.
According to one aspect of the present invention, the acquisition unit acquires images in a contactless manner; specifically, the acquisition unit is a camera.
According to one aspect of the present invention, the biometric information is a palm print, a palm vein image, a facial image, an iris, a palm skeleton, or a lip print.
According to one aspect of the present invention, the feature extraction unit comprises a geometric information extraction unit for obtaining, by a recognition algorithm, N first feature coordinate points representing the feature area, the first feature coordinate points containing geometric information about the shape and positional relationship of the feature area. The method of obtaining the first feature coordinate points by the recognition algorithm is: collecting M palm image samples and marking true key point positions on each palm image sample; obtaining, from the true key point positions in the M palm image samples, an initial estimated position corresponding to each true key point position in each palm image sample; training each initial estimated position so that it approaches the corresponding true key point position, thereby obtaining a cascaded regressor; and locating the target key point positions in a palm image to be identified according to the cascaded regressor to obtain the first feature coordinate points; wherein N and M are integers greater than 2.
According to one aspect of the present invention, the processing unit further comprises a geometric information processing unit for converting, according to a correspondence relationship, the N first feature coordinate points on the obtained image into N corresponding second feature coordinate points on the screen, wherein the correspondence relationship is the proportional relationship between the capture range of the acquisition unit and the size of the display unit, and N is an integer greater than 2.
According to one aspect of the present invention, the processing unit further comprises a guide identifier forming unit for generating, from the second feature coordinate points, a guide identifier representing the shape and position of the feature area.
According to one aspect of the present invention, the shape of the guide identifier is any one of a polygon, a circle, or an ellipse that most closely approximates the shape of the feature area.
According to one aspect of the present invention, the biometric information acquisition device further comprises a reference identifier generating unit for extracting geometric information describing the acquisition range of the acquisition unit and generating, on the display unit, a reference identifier representing the acquisition range according to the geometric information.
According to one aspect of the present invention, the biometric information acquisition device further comprises an error calculation unit for calculating an error vector whose starting point is the geometric center point of the guide identifier and whose end point is the geometric center point of the reference identifier; and a prompt unit for displaying the length and direction of the error vector on the display unit.
According to one aspect of the present invention, the biometric information acquisition device further comprises a detection unit for detecting whether an obstruction is present within the capture range of the acquisition device and outputting a signal when an obstruction is detected.
According to one aspect of the present invention, the detection unit is a reflective infrared detection unit.
According to one aspect of the present invention, the detection unit is an image grayscale monitoring unit for measuring the mean grayscale value of the image currently obtained by a sensing unit.
According to one aspect of the present invention, the image grayscale monitoring unit detects whether an obstruction is present within the capture range of the acquisition device as follows: when an increase in the currently acquired mean grayscale value is detected, it is judged that an obstruction has appeared;
when a decrease in the currently acquired mean grayscale value is detected, it is judged that the obstruction has left. The mean grayscale value may be computed by dividing the image into grid regions and assigning different weight coefficients to the regions.
According to one aspect of the present invention, the image grayscale monitoring unit detects whether an obstruction is present within the capture range of the acquisition device as follows: obtaining the mean grayscale value of the image currently obtained by the sensing unit; computing the average of the mean grayscale values of the N images obtained by the sensing unit before the current image; judging that an obstruction has appeared when the mean grayscale value of the current image is X times that average; and judging that the obstruction has left when the mean grayscale value of the current image is Y times that average; wherein N is an integer greater than 2, X is a positive number greater than 1, and Y is a positive number greater than 0 and less than or equal to 1.
According to one aspect of the present invention, the image grayscale monitoring unit detects whether an obstruction is present within the capture range of the acquisition device as follows: obtaining the mean grayscale value of the image currently obtained by the sensing unit; judging that an obstruction has appeared when the obtained mean grayscale value exceeds a judgment threshold; and judging that the obstruction has left when the obtained mean grayscale value is less than or equal to the judgment threshold.
Correspondingly, the present invention also provides a visualized biometric information acquisition method, comprising: acquiring, by an acquisition unit, an image containing biometric information of an individual to be identified; displaying, by a display unit, the image obtained by the acquisition unit; identifying, by a feature extraction unit, a feature area containing the biometric information of the individual to be identified from the image obtained by the acquisition unit; and generating, by a processing unit, a guide identifier representing the feature area on the display unit according to the feature area identified by the feature extraction unit; wherein the guide identifier changes as the image obtained by the acquisition unit changes.
According to one aspect of the present invention, the acquisition unit acquires images in a contactless manner; specifically, the acquisition unit is a camera.
According to one aspect of the present invention, the biometric information is a palm print, a palm vein image, a facial image, an iris, a palm skeleton, or a lip print.
According to one aspect of the present invention, the method by which the feature extraction unit identifies the feature area containing the biometric information of the individual to be identified from the image obtained by the acquisition unit comprises obtaining, by a recognition algorithm, N first feature coordinate points representing the feature area, the first feature coordinate points containing geometric information about the shape and positional relationship of the feature area. The method of obtaining the first feature coordinate points by the recognition algorithm is: collecting M palm image samples and marking true key point positions on each palm image sample; obtaining, from the true key point positions in the M palm image samples, an initial estimated position corresponding to each true key point position in each palm image sample; training each initial estimated position so that it approaches the corresponding true key point position, thereby obtaining a cascaded regressor; and locating the target key point positions in a palm image to be identified according to the cascaded regressor to obtain the first feature coordinate points; wherein N and M are integers greater than 2.
According to one aspect of the present invention, the method by which the processing unit generates the guide identifier representing the feature area on the display unit according to the feature area identified by the feature extraction unit is: converting, according to a correspondence relationship, the N first feature coordinate points on the obtained image into N corresponding second feature coordinate points on the screen, wherein the correspondence relationship is the proportional relationship between the capture range of the acquisition unit and the size of the display unit, and N is an integer greater than 2.
According to one aspect of the present invention, the method by which the processing unit generates the guide identifier representing the feature area on the display unit further comprises: generating, from the second feature coordinate points, a guide identifier representing the shape and position of the feature area.
According to one aspect of the present invention, the shape of the guide identifier is any one of a polygon, a circle, or an ellipse that most closely approximates the shape of the feature area.
According to one aspect of the present invention, the biometric information acquisition method further comprises: extracting geometric information describing the acquisition range of the acquisition unit, and generating, on the display unit, a reference identifier representing the acquisition range according to the geometric information.
According to one aspect of the present invention, the biometric information acquisition method further comprises: calculating an error vector whose starting point is the geometric center point of the guide identifier and whose end point is the geometric center point of the reference identifier; and displaying the length and direction of the error vector on the display unit.
According to one aspect of the present invention, the biometric information acquisition device further comprises a detection unit for detecting whether an obstruction is present within the capture range of the acquisition device and outputting a signal when an obstruction is detected.
According to one aspect of the present invention, the detection unit is a reflective infrared detection unit.
According to one aspect of the present invention, the detection unit is an image grayscale monitoring unit for measuring the mean grayscale value of the image currently obtained by a sensing unit.
According to one aspect of the present invention, the image grayscale monitoring unit detects whether an obstruction is present within the capture range of the acquisition device as follows: when an increase in the currently acquired mean grayscale value is detected, it is judged that an obstruction has appeared; when a decrease in the currently acquired mean grayscale value is detected, it is judged that the obstruction has left.
According to one aspect of the present invention, the image grayscale monitoring unit detects whether an obstruction is present within the capture range of the acquisition device as follows: obtaining the mean grayscale value of the image currently obtained by the sensing unit, wherein the sensing unit is used to obtain images within the detection range and, according to one aspect of the present invention, is a camera; computing the average of the mean grayscale values of the N images obtained by the sensing unit before the current image; judging that an obstruction has appeared when the mean grayscale value of the current image is X times that average; and judging that the obstruction has left when the mean grayscale value of the current image is Y times that average; wherein N is an integer greater than 2, X is a positive number greater than 1, and Y is a positive number greater than 0 and less than or equal to 1.
According to one aspect of the present invention, the image grayscale monitoring unit detects whether an obstruction is present within the capture range of the acquisition device as follows: obtaining the mean grayscale value of the image currently obtained by the sensing unit; when an increase in the currently acquired mean grayscale value is detected, judging that an obstruction has appeared; when a decrease in the currently acquired mean grayscale value is detected, judging that the obstruction has left; wherein the mean grayscale value may be computed by dividing the image into grid regions and assigning different weight coefficients to the regions.
The acquisition device provided by the present invention can accurately extract, from the collected image, the feature area containing the biometric information to be collected, calculate geometric information describing the position and shape of the feature area from the positional and shape relationships of the feature area, generate a guide identifier from the geometric information, and display it on the display unit.
The present invention can display the guide identifier and the reference identifier on the display unit and intuitively feed back their relative positions to the user, who naturally and responsively corrects and adjusts his or her position according to the feedback, so that the target containing the biometric information to be collected enters the set image acquisition area. The guide identifier changes as the image obtained by the acquisition unit changes, reflecting the real-time position of the biometric feature to be collected. According to the geometric imaging principle, when the target containing the biometric information to be collected gradually moves away from the acquisition device, the guide identifier shrinks, and when the target moves beyond the detection range of the acquisition device, the guide identifier disappears; when the target gradually approaches the acquisition device, the guide identifier grows, and when it exceeds the image acquisition area frame, the guide identifier disappears, so that the user realizes that the target to be collected is outside the capture range of the acquisition unit and adjusts accordingly. The guide identifier can also follow the up-and-down and horizontal movement of the palm and show it on the display unit. With this kind of display guidance, the target containing the biometric information to be collected can be positioned quickly, so that an optimal image can be collected. The method provided by the present invention simply and quickly provides directional guidance for the target containing the biometric information to be collected; operation is quick and clear, implementation is simple, cost is low, and the user experience is good.
Although the following detailed description refers to exemplary embodiments and the accompanying drawings, those of ordinary skill in the art will appreciate that the present application is not limited to these embodiments. Rather, the scope of the present application is broad and is intended to be defined only by the appended claims.
Brief description of the drawings
Other features, objects, and advantages of the present application will become more apparent from the following detailed description of non-limiting embodiments, read with reference to the following drawings:
Fig. 1 is a structural schematic diagram of the device for acquiring biometric information in one embodiment of the present invention;
Fig. 2(a) is an application schematic diagram of the device for acquiring biometric information in one embodiment of the present invention;
Fig. 2(b) is the image obtained in the application case of Fig. 2(a);
Fig. 3(a) is an application schematic diagram of the device for acquiring biometric information in another embodiment of the present invention;
Fig. 3(b) is the image obtained in the application case of Fig. 3(a);
Fig. 4(a) is an application schematic diagram of the device for acquiring biometric information in another embodiment of the present invention;
Fig. 4(b) is the image obtained in the application case of Fig. 4(a);
Fig. 5 is a schematic diagram of a method of generating the reference identifier in another embodiment of the present invention;
Fig. 6 is a schematic diagram of the error vector calculated by the error calculation unit in another embodiment of the present invention;
In the drawings, the same or similar reference numerals denote the same or similar components.
Detailed description of the invention
To make the objectives, technical solutions, and advantages of the present invention clearer, embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present invention, and are not to be construed as limiting the claims.
In the following detailed description, numerous specific details are set forth by way of example in order to provide a thorough understanding of the relevant embodiments. However, it will be apparent to those skilled in the art that the present invention may be practiced without these details or with additional features. In other instances, well-known methods, processes, systems, components, and/or circuits are described at a relatively high level without detail, so as to avoid unnecessarily obscuring the present invention.
The present invention describes an acquisition device based on multi-modal biometric information, and a person identification apparatus and method based on this device. The multi-modal biometric information of an individual may include, but is not limited to, information related to the individual's facial image and information related to palm features. Applications of this identification include secure entry, for example to buildings, equipment, and software applications installed on computing devices. For example, when an individual attempts to enter a building through a door controlled by a security system (for example, one mounted on or near the door), the security system may prompt the individual to place a finger on a user identification device. The user identification device may determine whether the individual has a known and/or authorized identity permitted to enter the building.
The device provided by the present invention for acquiring biometric information is described in detail below. The device includes an acquisition unit 110, a display unit 120, a feature extraction unit 130, and a processing unit 140. Each unit of the device is described in detail below with reference to the accompanying drawings and specific embodiments.
The acquisition unit 110 is used to acquire an image containing biometric information of an individual to be identified. The biometric information may be information collected contactlessly, such as a palm print, a palm vein image, a facial image, an iris, a palm skeleton, or a lip print, or it may be information collected by contact, such as a fingerprint or finger veins. In this embodiment, the acquisition unit 110 acquires images contactlessly; specifically, the acquisition unit 110 is a camera. Of course, the present invention is equally applicable to contact acquisition. In practice, however, because the acquisition range of contact acquisition is fixed and offsets are small, only a small proportion of acquisition failures are caused by the acquisition target not being within the capture range; applying the present invention to contact acquisition therefore does not improve acquisition efficiency significantly. Nevertheless, the present invention can equally be applied to contact acquisition, as those skilled in the art will understand.
The display unit 120 is used to display the image obtained by the acquisition unit. The content displayed by the display unit 120 is the content acquired by the acquisition unit 110 in real time. At the same time, the display unit is not limited to displaying only the content obtained by the acquisition unit 110. For example, part of the display area may be used to show the time, the name of the enterprise, a logo, and other information. The display unit 120 is also not required to display the content obtained by the acquisition unit 110 at all times; it can be configured so that the display unit 120 starts and shows the content obtained by the acquisition unit 110 in real time only when the acquisition unit 110 captures an image containing biometric information to be identified, and otherwise remains in a standby state to save energy. How to configure the display content of the display unit 120 is common knowledge in the art and is not described further here.
The feature extraction unit 130 is used to identify, from the image obtained by the acquisition unit, the feature area containing the biometric information of the individual to be identified. Specifically, the feature extraction unit 130 first analyzes and verifies the image obtained by the acquisition unit 110 to confirm whether it contains the biometric information to be collected. Taking the palm as an example, if the collected image is confirmed to contain a palm pattern, the feature area where the palm is located is determined according to a corresponding hand recognition algorithm, and geometric information representing the shape and position of the feature area is extracted. Specifically, the feature extraction unit includes a geometric information extraction unit for obtaining, by a recognition algorithm, N first feature coordinate points representing the feature area, the first feature coordinate points containing geometric information about the shape and positional relationship of the feature area. The method of obtaining the first feature coordinate points by the recognition algorithm is: collecting M palm image samples and marking true key point positions on each palm image sample; obtaining, from the true key point positions in the M palm image samples, an initial estimated position corresponding to each true key point position in each palm image sample; training each initial estimated position so that it approaches the corresponding true key point position, thereby obtaining a cascaded regressor; and locating the target key point positions in a palm image to be identified according to the cascaded regressor to obtain the first feature coordinate points; wherein N and M are integers greater than 2.
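The patent does not fix the feature type or the regressor used inside the cascaded regressor, so the following is only a minimal sketch of the training and localization flow described above, assuming grayscale patches around each estimated key point as stage features and linear least-squares stages; all function names are hypothetical.
```python
import numpy as np

def _patch_features(img, pts, patch):
    """Concatenate the grayscale patch around each estimated key point."""
    h, w = img.shape
    feats = []
    for x, y in pts:
        x0 = int(np.clip(x - patch // 2, 0, w - patch))
        y0 = int(np.clip(y - patch // 2, 0, h - patch))
        feats.append(img[y0:y0 + patch, x0:x0 + patch].astype(float).ravel())
    return np.concatenate(feats)

def train_cascade(samples, true_keypoints, stages=5, patch=8):
    """samples: list of M grayscale palm images; true_keypoints: (M, N, 2) marked positions."""
    mean_shape = true_keypoints.mean(axis=0)                 # initial estimated positions
    estimates = np.tile(mean_shape, (len(samples), 1, 1))
    cascade = []
    for _ in range(stages):
        feats = np.stack([_patch_features(img, est, patch)
                          for img, est in zip(samples, estimates)])
        targets = (true_keypoints - estimates).reshape(len(samples), -1)
        W, *_ = np.linalg.lstsq(feats, targets, rcond=None)  # one regression stage
        cascade.append(W)
        estimates = estimates + (feats @ W).reshape(estimates.shape)
    return mean_shape, cascade

def locate_keypoints(image, mean_shape, cascade, patch=8):
    """Apply the trained cascade to a palm image to obtain the first feature coordinate points."""
    est = mean_shape.copy()
    for W in cascade:
        f = _patch_features(image, est, patch)
        est = est + (f @ W).reshape(est.shape)
    return est
```
Each stage pushes the initial estimates toward the marked true key points, which is the sense in which the cascade "approaches the corresponding true key point position" in the text.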
The processing unit 140 is used to generate, on the display unit 120, the guide identifier representing the feature area according to the feature area identified by the feature extraction unit 130. The processing unit 140 further includes a geometric information processing unit for converting, according to a correspondence relationship, the N first feature coordinate points on the obtained image into N corresponding second feature coordinate points on the display unit 120. The second feature coordinate points are the projections of the first feature coordinate points onto the display unit 120 and contain all the geometric information about the feature area contained in the first feature coordinate points; the correspondence relationship is the proportional relationship between the capture range of the acquisition unit and the size of the display unit, and N is an integer greater than 2. The processing unit also includes a guide identifier forming unit for generating, from the second feature coordinate points, a guide identifier representing the shape and position of the feature area. The guide identifier changes as the image obtained by the acquisition unit 110 changes. The shape of the guide identifier may be any one of a polygon, a circle, or an ellipse that most closely approximates the shape of the feature area, or it may be another non-closed geometric shape such as an arc, a polyline, or an irregular curve.
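A minimal sketch of the coordinate conversion and guide-identifier drawing described above, assuming the correspondence relationship is a simple per-axis scale between the capture resolution and the display size; the OpenCV polygon call and the function names are illustrative choices, not the patent's own implementation.
```python
import numpy as np
import cv2

def to_screen_coords(first_points, capture_size, display_size):
    """first_points: (N, 2) pixel coordinates in the captured image (x, y)."""
    sx = display_size[0] / capture_size[0]        # proportional relationship between the
    sy = display_size[1] / capture_size[1]        # capture range and the display size
    return first_points * np.array([sx, sy])      # second feature coordinate points

def draw_guide_identifier(frame, second_points):
    """Draw the guide identifier as the closed polygon through the mapped points."""
    pts = second_points.astype(np.int32).reshape(-1, 1, 2)
    cv2.polylines(frame, [pts], isClosed=True, color=(0, 255, 0), thickness=2)
    return frame
```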
Preferably, the biometric information acquisition device further includes a reference identifier generating unit for extracting geometric information describing the acquisition range of the acquisition unit and generating, on the display unit, a reference identifier representing the acquisition range according to the geometric information. The reference identifier is located in the region of the display unit that shows the content collected by the acquisition unit 110 (hereinafter referred to as the effective area) and is used to mark the effective acquisition range. Its purpose is to serve as a constant reference object corresponding to the guide identifier, so that changes in the position of the guide identifier can be shown clearly and the user can adjust the position of the sampling target according to the change in the relative position of the guide identifier and the reference identifier. Generally, the reference identifier has the same shape as the guide identifier, and its geometric center point is located at the center of the effective area.
Fig. 5 shows, in one embodiment of the present invention, how the center coordinates and radius of the reference identifier are determined when the reference identifier is a circle. As shown in the figure, the display unit 120 is a rectangle of length L and height H. The display unit 120 contains three regions: region 1 shows "Welcome", and region 2 shows the time and date. These two regions display content unrelated to what the acquisition unit 110 collects, so the effective area of the display unit 120 is the region excluding region 1 and region 2; its effective length is L-L1 and its effective height is H-H1. From the length and height of the effective area and the coordinates of its four vertices, the position of the center of the reference circle can easily be obtained: it lies at the geometric center point of the effective area. In practice, in order to serve well as a reference, the radius of the circle is generally smaller than half of the smaller of the length and the height of the effective area. In Fig. 5, the height of the effective area is the smaller dimension, and the radius R of the circle is smaller than 0.5(H-H1), with a deviation M between them. The deviation value M is a correction coefficient related to the acquisition range of the acquisition unit 110 and is set manually according to the actual deviation.
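A minimal sketch of the Fig. 5 computation, under the assumption that regions 1 and 2 occupy strips of total width L1 and height H1 along the left and top edges of the display (the figure itself fixes the exact layout); M is the correction coefficient mentioned above.
```python
def reference_circle(L, H, L1, H1, M=0.0):
    """L, H: display size; L1, H1: space taken by regions 1 and 2; M: correction coefficient."""
    eff_w, eff_h = L - L1, H - H1                      # effective length and height
    cx, cy = L1 + eff_w / 2.0, H1 + eff_h / 2.0        # geometric centre of the effective area
    R = 0.5 * min(eff_w, eff_h) - M                    # radius stays inside the effective area
    return (cx, cy), R
```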
Fig. 5 illustrates only one method of generating the reference identifier. In the present application, the shape of the reference identifier is not limited to a circle; it may also be a polygon, a circle, an ellipse, or another non-closed geometric shape such as an arc, a polyline, or an irregular curve. The generation method may change accordingly with the reference shape; the relevant methods are familiar to those skilled in the art and are not described further here. It should be noted that the embodiment shown in Fig. 5 is merely an illustration and explanation of the present invention and is not to be construed as limiting the claims.
Preferably, the biometric information acquisition device further includes an error calculation unit for calculating an error vector whose starting point is the geometric center point of the guide identifier and whose end point is the geometric center point of the reference identifier, and a prompt unit for displaying the length and direction of the error vector on the display unit. The purpose of the error calculation unit is to calculate an error vector representing the distance and direction between the geometric center of the guide identifier and the geometric center point of the reference identifier, and to show this error vector visually on the display unit, so that the user better understands how to position the target to be collected. As shown in Fig. 6, the arrow in the figure clearly indicates the positional relationship between the target to be collected (the small circle corresponding to the guide identifier) and the center of the acquisition range of the acquisition unit 110 (the large circle corresponding to the reference identifier). It can be seen intuitively that moving the target to be collected toward the upper left will allow the acquisition unit 110 to collect a more complete and clearer image.
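A minimal sketch of the error-vector calculation and prompt described above; the OpenCV arrowed-line call is only an illustrative assumption for how the prompt unit might render the vector, and the function names are hypothetical.
```python
import numpy as np
import cv2

def error_vector(guide_points, reference_center):
    """Vector from the guide identifier's geometric centre to the reference identifier's centre."""
    guide_center = np.mean(guide_points, axis=0)
    vec = np.asarray(reference_center, dtype=float) - guide_center
    length = float(np.linalg.norm(vec))
    angle = float(np.degrees(np.arctan2(vec[1], vec[0])))   # direction of the error vector
    return vec, length, angle

def draw_prompt(frame, guide_points, reference_center):
    """Show the error vector on the display as an arrow from the guide centre to the reference centre."""
    start = tuple(int(v) for v in np.mean(guide_points, axis=0))
    end = tuple(int(v) for v in reference_center)
    cv2.arrowedLine(frame, start, end, color=(0, 0, 255), thickness=2)
    return frame
```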
Preferably, the biometric information acquisition device further includes a detection unit 150.
The detection unit 150 is used to detect whether an obstruction is present within the capture range of the acquisition device. Specifically, the detection unit 150 includes a color camera and a confirmation unit. The color camera monitors images appearing within its capture range and sends the monitored images to the confirmation unit. The confirmation unit analyzes and processes the images monitored by the color camera, and when the monitored image changes, it confirms that an obstruction has appeared within the capture range and outputs a result containing the confirmation.
In actual use, the acquisition unit 110, the feature extraction unit 130, and the processing unit 140 are in a closed state. When the monitored image does not change, all of the above components remain closed. The color camera, however, is in a working state at all times, monitoring the images within its detection range in real time and sending the acquired images to the confirmation unit. When the confirmation unit detects that the image has changed, it outputs a confirmation, causing the acquisition unit 110, the feature extraction unit 130, and the processing unit 140 to start in sequence according to a preset program. The confirmation unit may be a controller, a single-chip microcomputer, or another device with logic control functions; how to implement the confirmation unit is a commonly known technical means and is not described further here.
In another embodiment of the present invention, the detection unit 150 is a reflective infrared detection unit. The reflective infrared detection unit continuously emits infrared light at a certain time interval; when it receives the emitted infrared light reflected back by an obstruction, it judges that an obstruction is present, sends a confirmation signal, and starts the acquisition unit 110, the feature extraction unit 130, and the processing unit 140. Compared with a color camera, a reflective infrared detection unit used as the detection unit 150 has the advantages of low cost, high integration, and small size, although its recognition accuracy is also relatively low. In practice, the detection unit 150 can be chosen according to actual needs. The detection unit 150 may also be another device not limited to the two devices above.
In another embodiment of the present invention, another implementation of the detection unit 150 is provided. Here, the detection unit 150 is an image grayscale monitoring unit for measuring the mean grayscale value of the image currently obtained by the camera; the camera may be a color camera or an infrared camera. The mean grayscale value may be computed by dividing the image into grid regions and assigning different weight coefficients to the regions. Specifically, the image grayscale monitoring unit detects whether an obstruction is present within the capture range of the acquisition device as follows: when an increase in the currently acquired mean grayscale value is detected, it is judged that an obstruction has appeared; when a decrease in the currently acquired mean grayscale value is detected, it is judged that the obstruction has left. This is because, in practice, when there is no obstruction in front of the camera, the mean grayscale value of the captured image is relatively low, and when an obstruction suddenly appears in front of the camera, the mean grayscale value of the captured image is relatively high. From the change of the mean grayscale value from low to high or from high to low, it can be determined whether an obstruction has appeared in front of the camera and whether it has left.
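A minimal sketch of the weighted grid-region grayscale mean mentioned above; the grid size and the weight values are assumptions for illustration, for example weighting the center of the capture range more heavily than the edges.
```python
import numpy as np

def weighted_gray_mean(gray_image, weights):
    """weights: (rows, cols) weight coefficient per grid region (an illustrative choice)."""
    rows, cols = weights.shape
    h, w = gray_image.shape
    cell_means = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            cell = gray_image[r * h // rows:(r + 1) * h // rows,
                              c * w // cols:(c + 1) * w // cols]
            cell_means[r, c] = cell.mean()
    return float(np.sum(cell_means * weights) / np.sum(weights))
```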
In practice, in order to prevent ambient light from interfering with the obtained mean grayscale value, the average of the mean grayscale values of several consecutively obtained images is usually used as the reference. Specifically, the image grayscale monitoring unit detects whether an obstruction is present within the capture range of the acquisition device as follows: obtain the mean grayscale value of the image currently obtained by the camera; compute the average of the mean grayscale values of the N images obtained by the camera before the current image; when the mean grayscale value of the image currently obtained by the camera is X times that average, judge that an obstruction has appeared; when the mean grayscale value of the image currently obtained by the camera is Y times that average, judge that the obstruction has left; wherein N is an integer greater than 2, X is a positive number greater than 1, and Y is a positive number greater than 0 and less than or equal to 1.
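A minimal sketch of the running-average comparison described above; the default values of N, X, and Y are examples taken from the ranges given later in the text, and the class name is hypothetical.
```python
from collections import deque

class GrayscaleOcclusionDetector:
    """Judge obstruction appearance/departure from the current frame's mean grayscale
    value relative to the average over the previous N frames."""
    def __init__(self, n=10, x=2.0, y=0.5):
        self.history = deque(maxlen=n)   # mean grayscale values of the previous N frames
        self.x, self.y = x, y
        self.occluded = False

    def update(self, current_mean):
        if len(self.history) == self.history.maxlen:
            baseline = sum(self.history) / len(self.history)
            if not self.occluded and current_mean >= self.x * baseline:
                self.occluded = True       # mean rose to X times the average: obstruction appeared
            elif self.occluded and current_mean <= self.y * baseline:
                self.occluded = False      # mean fell to Y times the average: obstruction left
        self.history.append(current_mean)
        return self.occluded
```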
In actual applications, the camera samples continuously. When there is no obstruction in front of the camera, the mean grayscale value of the captured image is relatively low; when an obstruction suddenly appears in front of the camera, the light emitted by the light-emitting unit is reflected back by the obstruction, so the mean grayscale value of the image acquired by the camera is higher than when no obstruction is present. The image grayscale monitoring unit judges the appearance and departure of an obstruction precisely on the basis of this change in the mean grayscale value of the images obtained by the camera.
In addition, in actual applications, because of the influence of ambient light, the mean grayscale value acquired by the camera changes over time; even when no obstruction is present, the mean grayscale value is not a fixed value but fluctuates within a certain range. Therefore, in order to avoid misjudgments caused by fluctuations of the mean grayscale value and to improve the accuracy of the judgment, the average of the mean grayscale values obtained over several samplings is used as the reference value for judging changes in the mean grayscale value. For example, suppose the mean grayscale value of the image currently obtained by the camera is A, and the 10 mean grayscale values obtained before the current image are A1 to A10, with average Ā = (A1 + A2 + ... + A10) / 10. When the value of A is X times Ā, the current mean grayscale value has risen relative to the previously obtained values, and an obstruction has entered the sampling range; X is a positive number greater than 1, usually between 1.5 and 5, for example 2, 3, or 4.5. Of course, the value of X can be set according to the required judgment precision, which is not described further here. In other embodiments, the number N of images obtained before the current image may be any other natural number; the specific number can be calculated from the sampling interval of the camera and the averaging time required for one sampling. For example, N may be any natural number greater than 2, such as 3, 5, 8, 10, 11, or 18. The above data are exemplary illustrations of the present invention and are not to be construed as limitations of the present invention.
Similarly, when judging whether an obstruction has left the camera, the same method can be used: obtain the mean grayscale value of the image currently obtained by the camera; compute the average of the mean grayscale values of the N images obtained before the current image; when the mean grayscale value currently obtained by the camera is Y times that average, judge that the obstruction has left; wherein N is an integer greater than 2 and Y is a positive number greater than 0 and less than or equal to 1. For example, suppose the mean grayscale value of the image currently obtained by the camera is B, and the 6 mean grayscale values obtained before the current image are B1 to B6, with average B̄ = (B1 + B2 + ... + B6) / 6. When the value of B is Y times B̄, the current mean grayscale value has fallen relative to the previously obtained values, and an obstruction has left the sampling range; Y is a positive number less than or equal to 1, usually between 0.2 and 0.8, for example 0.3, 0.5, or 0.7.
In some embodiments, in order to avoid misjudgments caused by fluctuations of the mean grayscale value produced by changes in ambient light during sampling, the preset value of X may be relatively high, for example 3 or even 5, meaning that an obstruction is judged to have appeared only when the currently obtained mean grayscale value is five times the average mean grayscale value of the previous N images. In application, however, the influence of ambient light may cause the mean grayscale value acquired after an obstruction appears to rise by only a relatively small ratio, for example 1.5 or 2 times the average mean grayscale value, in which case the appearance of the obstruction cannot be judged effectively: although the obstruction has appeared and the obtained mean grayscale value has risen, the threshold for judging that an obstruction has appeared is not reached. Therefore, it can also be provided in practice that, when the currently obtained mean grayscale value C is higher than the average mean grayscale value of the previous N images but lower than the judgment threshold, the obstruction is also considered to have appeared if the mean grayscale values of the next M consecutively acquired images are all higher than the currently obtained mean grayscale value C, or if the mean grayscale values of the next M consecutively acquired images are all higher than the average mean grayscale value C1 of the M-1 most recently acquired images, or if, in the next M consecutively acquired images, the grayscale value of each image is higher than that of the previous image. The number M can be calculated from the sampling interval of the camera and the averaging time required for one sampling, and may be any natural number greater than 2, such as 3, 5, 8, 9, or 10, which is not described further here.
Similarly, in some embodiments, in order to avoid misjudgments caused by fluctuations of the mean grayscale value produced by changes in ambient light during sampling, the preset value of Y may be relatively low, for example 0.3 or even lower, such as 0.25, meaning that an obstruction is judged to have left only when the currently obtained mean grayscale value is one quarter of the average mean grayscale value of the previous N images. In application, however, the influence of ambient light may cause the mean grayscale value acquired after the obstruction leaves to remain relatively high, for example one half of the average mean grayscale value, in which case the departure of the obstruction cannot be judged effectively: although the obstruction has left and the obtained mean grayscale value has fallen, the threshold for judging that the obstruction has left is not reached. Therefore, it can also be provided in practice that, when the currently obtained mean grayscale value D is lower than the average mean grayscale value of the previous N images but higher than the judgment threshold, the obstruction is also considered to have left if the mean grayscale values of the next L consecutively acquired images are all lower than the currently obtained mean grayscale value D, or if the mean grayscale values of the next L consecutively acquired images are all lower than the average of the mean grayscale values of the previous N images. The number L can be calculated from the sampling interval of the camera and the averaging time required for one sampling, and may be any natural number greater than 2, such as 3, 5, 8, 9, or 10, which is not described further here.
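A minimal sketch of the persistence checks described in the two preceding paragraphs: when the current mean grayscale value rises (or falls) relative to the running average without reaching the X-times (or Y-times) threshold, the change is confirmed only if the next M (or L) consecutive frames keep moving in the same direction. The helper names and default multipliers are illustrative, and only the "all higher/lower than the current value" variant is shown.
```python
def confirm_appearance(current_mean, next_means, baseline, x=3.0):
    """next_means: mean grayscale values of the next M consecutively acquired frames."""
    if current_mean >= x * baseline:
        return True                               # X-times threshold reached directly
    if current_mean > baseline:                   # rose, but below the X-times threshold
        return all(m > current_mean for m in next_means)
    return False

def confirm_departure(current_mean, next_means, baseline, y=0.3):
    """next_means: mean grayscale values of the next L consecutively acquired frames."""
    if current_mean <= y * baseline:
        return True                               # Y-times threshold reached directly
    if current_mean < baseline:                   # fell, but above the Y-times threshold
        return all(m < current_mean for m in next_means)
    return False
```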
In another embodiment of the present invention, the method by which the image grayscale monitoring unit detects whether an obstruction is present within the capture range of the acquisition device may also be: obtain the mean grayscale value of the image currently obtained by the sensing unit; when the obtained mean grayscale value exceeds a judgment threshold, judge that an obstruction has appeared; when the obtained mean grayscale value is less than or equal to the judgment threshold, judge that the obstruction has left. Compared with the previously described methods, this method is simple, convenient, and easy to implement, and can be adopted when the required precision of judging whether an obstruction has appeared is relatively low.
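A minimal sketch of this fixed-threshold variant; the numeric threshold is an assumption and would in practice be set from experimental statistics, as the text notes for the palm entry/exit case.
```python
def obstruction_present(current_mean, judgment_threshold=120.0):
    """True when the current mean grayscale value exceeds the judgment threshold."""
    return current_mean > judgment_threshold
```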
The purpose of the detection unit 150 is to reduce system power consumption by ensuring that the acquisition unit 110, the feature extraction unit 130, and the processing unit 140 are started only when an obstruction appears. Besides the embodiments described above, any other method that can realize this function of the detection unit 150 may be used to implement the present invention; the related embodiments are common knowledge in the art and are not described further here. The above embodiments are only used to explain and illustrate the present invention and are not to be construed as limiting the claims. Correspondingly, the present invention also provides a visualized biometric information acquisition method, comprising:
Step 1: acquiring, by the acquisition unit 110, an image containing biometric information of an individual to be identified;
Step 2: displaying, by the display unit 120, the image obtained by the acquisition unit 110;
Step 3: identifying, by the feature extraction unit 130, the feature area containing the biometric information of the individual to be identified from the image obtained by the acquisition unit 110;
Step 4: generating, by the processing unit 140, the guide identifier representing the feature area on the display unit 120 according to the feature area identified by the feature extraction unit 130; wherein the guide identifier changes as the image obtained by the acquisition unit 110 changes.
Preferably, before step 1, the method further includes step A: detecting, by the detection unit 150, whether an obstruction is present within the capture range of the acquisition device, and outputting a signal when an obstruction is detected. A sketch of the overall flow of these steps is given below.
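A minimal sketch tying steps A and 1-4 together; the camera, display, and the helper callables passed in are hypothetical stand-ins for the detection unit 150, acquisition unit 110, display unit 120, feature extraction unit 130, and processing unit 140, with interfaces assumed purely for illustration.
```python
def acquisition_loop(camera, display, detect_obstruction, extract_feature_area,
                     to_screen, draw_guide):
    """Hypothetical orchestration of steps A and 1-4; all arguments are assumed interfaces."""
    while True:
        frame = camera.read()
        if not detect_obstruction(frame):                 # step A: detection unit 150
            continue
        display.show(frame)                               # steps 1-2: acquire and display the image
        first_points = extract_feature_area(frame)        # step 3: N first feature coordinate points
        if first_points is None:
            continue
        second_points = to_screen(first_points)           # step 4: map to screen coordinates and
        display.show(draw_guide(frame, second_points))    # draw the guide identifier
```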
Each of the above steps is described in detail below.
First, in step A, the detection unit 150 detects whether an obstruction is present within the capture range of the acquisition device. The purpose of the detection unit 150 is to reduce system power consumption by ensuring that the acquisition unit 110, the feature extraction unit 130, and the processing unit 140 are started only when an obstruction appears. In one embodiment of the present invention, the detection unit 150 includes a color camera and a confirmation unit. The color camera monitors images and sends them to the confirmation unit. When the confirmation unit judges that the obtained image has changed, it outputs a result containing the confirmation.
In another embodiment of the present invention, the detection unit 150 is a reflective infrared detection unit. The reflective infrared detection unit continuously emits infrared light at a certain time interval; when it receives the emitted infrared light reflected back by an obstruction, it judges that an obstruction is present, sends a confirmation signal, and starts the acquisition unit 110, the feature extraction unit 130, and the processing unit 140. Compared with a color camera, a reflective infrared detection unit used as the detection unit 150 has the advantages of low cost, high integration, and small size, although its recognition accuracy is also relatively low. In practice, the detection unit 150 can be chosen according to actual needs. The detection unit 150 may also be another device not limited to the two devices above.
In another embodiment of the present invention, another implementation of the detection unit 150 is provided. Here, the detection unit 150 is an image grayscale monitoring unit for measuring the mean grayscale value of the image currently obtained by the camera; the camera may be a color camera or an infrared camera. Specifically, the image grayscale monitoring unit detects whether an obstruction is present within the capture range of the acquisition device as follows: when an increase in the currently acquired mean grayscale value is detected, it is judged that an obstruction has appeared; when a decrease in the currently acquired mean grayscale value is detected, it is judged that the obstruction has left. This is because, in practice, the infrared camera captures (samples) continuously: when there is no obstruction in front of the camera, the mean grayscale value of the captured image is relatively low, and when an obstruction suddenly appears in front of the camera, the mean grayscale value of the captured image is relatively high; from the change of the mean grayscale value from low to high, it can be determined that a palm has been placed in front of the camera (that is, a palm has entered). Once it has been determined that a palm has entered, if the palm then leaves, the mean grayscale value captured by the camera becomes relatively low again, and from the change of the mean grayscale value from high to low it can be determined that the palm has left. In practice, because the mean grayscale value of the images captured by the infrared camera changes over time and fluctuates up and down, multiple capture samples need to be processed. For example, several images can be acquired consecutively, the average of their mean grayscale values computed, and compared with the mean grayscale value of the next image obtained; when the mean grayscale value of the next image is a certain multiple of that average, it is judged that a palm has entered. If the next mean grayscale value is higher than the average but does not reach the specified multiple, yet remains persistently higher than the average, the palm is also considered to have entered. At the same time, whether the infrared detection signal has changed can also be detected as an auxiliary judgment: when the infrared detection signal changes and its rise within a unit time exceeds a set threshold, it can also be judged that a palm has entered. Because of the fluctuation of the mean grayscale value, judging that the palm has left likewise relies on the average of the mean grayscale values: while the palm is in front of the camera, the average of the mean grayscale values increases accordingly, and when the palm leaves, the mean grayscale value of the next image decreases; when the mean grayscale value falls below a certain multiple of the average, it is judged that the palm has left. If the next mean grayscale value is lower than the average but does not reach the specified multiple, yet remains persistently lower than the average, the palm is also considered to have left. Whether the infrared detection signal has changed can likewise be detected at the same time as an auxiliary judgment: when the infrared detection signal changes and its fall within a unit time reaches a set threshold, it can also be judged that the palm has left. In addition, in practice, according to statistics from experimental data, when the mean grayscale value reaches a predetermined threshold, the palm can be considered to have entered, and when the mean grayscale value falls below the predetermined threshold, the palm can be considered to have left.
In practice, in order to prevent ambient light from interfering with the acquired gray-level average, the mean image gray level of several consecutively acquired pictures is usually used as the reference. Specifically, the method by which the image-gray-level monitoring unit detects whether an occluding object is present within the capture range of the acquisition device is: obtain the gray-level average of the image currently captured by the camera; compute the mean image gray level of the N images captured before the current image; when the gray-level average of the current image is X times the mean image gray level, judge that an occluding object has appeared; when the gray-level average of the current image is Y times the mean image gray level, judge that the occluding object has left. Here N is an integer greater than 2, X is a positive number greater than 1, and Y is a positive number greater than 0 and less than or equal to 1.
In practical applications, the camera samples continuously. When no occluder is in front of the camera, the gray-level average of the captured image is comparatively low; when an occluder suddenly appears in front of the camera, the light emitted by the light-emitting unit 110 is reflected back by the occluder, so the gray-level average of the image captured by the camera is higher than when no occluder is present. The image-gray-level monitoring unit judges the appearance and departure of an occluding object precisely on the basis of this change in the gray-level average of the images captured by the camera.
Furthermore, in practical applications, owing to the influence of ambient light, the gray-level average acquired by the camera varies over time; even when no occluder is present it is not a fixed value, but fluctuates within a certain range. Therefore, to avoid misjudgments caused by this fluctuation and to improve accuracy, the mean of the gray-level averages obtained over several samples is used as the reference value for judging changes in the gray-level average. For example, suppose the gray-level average of the image currently captured by the camera is A, and the 10 gray-level averages obtained before the current image are A1 to A10, whose mean image gray level is Aavg = (A1 + A2 + ... + A10) / 10. When the value of A is X times Aavg, the current gray-level average has risen relative to the previously acquired gray-level averages, and an occluding object has entered the sampling range, where X is a positive number greater than 1, usually between 1.5 and 5, for example 2, 3 or 4.5. Naturally, the value of X can be set according to the judgment precision required in practice, which is not repeated here. In other embodiments, the number N of images acquired before the current image may also be any other natural number; the specific value can be computed from the sampling interval of the camera and the averaging time required per sample. For example, N may be any natural number greater than 2, such as 3, 5, 8, 10, 11 or 18. The figures above illustrate the present invention by way of example and must not be construed as limiting it.
Likewise, the same method can be used to judge whether the occluding object has left the camera: obtain the gray-level average of the image currently captured by the camera; compute the mean image gray level of the N images captured before the current image; when the gray-level average of the current image is Y times the mean image gray level, judge that the occluding object has left, where N is an integer greater than 2 and Y is a positive number greater than 0 and less than or equal to 1. For example, suppose the gray-level average of the image currently captured by the camera is B, and the 6 gray-level averages obtained before the current image are B1 to B6, whose mean image gray level is Bavg = (B1 + B2 + ... + B6) / 6. When the value of B is Y times Bavg, the current gray-level average has fallen relative to the previously acquired gray-level averages, and the occluding object has left the sampling range, where Y is a positive number less than or equal to 1, usually between 0.2 and 0.8, for example 0.3, 0.5 or 0.7.
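The comparison against the running mean described above can be summarized in a short sketch. The following C fragment is only an illustrative sketch and not code from the patent; the buffer length N = 10 and the factors X = 2.0 and Y = 0.5 used with it are assumed example values taken from the ranges given above.

#include <stdbool.h>

#define N 10          /* number of previous frames kept as reference (assumed example) */

typedef struct {
    double history[N];    /* gray-level averages of the last N frames */
    int    count;         /* how many history entries are valid       */
    int    next;          /* ring-buffer write position               */
} GrayMonitor;

/* Mean of the stored gray-level averages (the reference value). */
static double reference_mean(const GrayMonitor *m)
{
    double sum = 0.0;
    for (int i = 0; i < m->count; i++)
        sum += m->history[i];
    return (m->count > 0) ? sum / m->count : 0.0;
}

/* Push the current frame's gray-level average into the ring buffer. */
static void push_frame(GrayMonitor *m, double gray_avg)
{
    m->history[m->next] = gray_avg;
    m->next = (m->next + 1) % N;
    if (m->count < N)
        m->count++;
}

/* Occluder appears when the current average reaches X times the reference mean. */
static bool occluder_appeared(const GrayMonitor *m, double current, double x_factor)
{
    return m->count == N && current >= x_factor * reference_mean(m);
}

/* Occluder leaves when the current average drops to Y times the reference mean. */
static bool occluder_left(const GrayMonitor *m, double current, double y_factor)
{
    return m->count == N && current <= y_factor * reference_mean(m);
}

A caller would push each new gray-level average with push_frame() and then test the following frame with occluder_appeared(..., 2.0) or occluder_left(..., 0.5), matching the example factors X = 2 and Y = 0.5 above.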
In certain embodiments, to avoid misjudgments caused by fluctuations of the gray-level average due to changes in ambient light during sampling, the preset value of X is relatively high, for example 3 or even as high as 5, which means that the appearance of an occluder is judged only when the currently obtained gray-level average is five times the mean image gray level of the previous N images. In application, however, owing to the influence of ambient light, the rise in the gray-level average after the occluder appears may be comparatively small, for example only 1.5 or 2 times the mean image gray level, and it will then be impossible to judge effectively that an occluder has appeared: although the occluder is present and the acquired gray-level average has risen, it has not reached the threshold for judging occluder appearance. Therefore it can also be provided in practice that, when the current gray-level average C is higher than the mean image gray level of the previous N images but lower than the judgment threshold, if the gray-level averages of the M pictures acquired next in succession are all higher than the current gray-level average C, or the gray-level averages of the next M consecutively acquired pictures are all higher than the mean image gray level C2 of the M-1 pictures currently obtained, or the gray value of each of the next M pictures is higher than that of the preceding picture, then the occluder can likewise be considered to have appeared. The number M can be computed from the sampling interval of the camera and the averaging time required per sample, and may be any natural number greater than 2, such as 3, 5, 8, 9 or 10, which is not repeated here.
Similarly, in certain embodiments, to avoid misjudgments caused by fluctuations of the gray-level average due to changes in ambient light during sampling, the preset value of Y is relatively low, for example 0.3 or even as low as 0.25, which means that the departure of the occluder is judged only when the currently obtained gray-level average is one quarter of the mean image gray level of the previous N images. In application, however, owing to the influence of ambient light, the gray-level average acquired after the occluder has left may still be rather high, for example one half of the mean image gray level, and it will then be impossible to judge effectively that the occluder has already left: although the occluder has gone and the acquired gray-level average has dropped, it has not reached the threshold for judging occluder departure. Therefore it can also be provided in practice that, when the current gray-level average D is lower than the mean image gray level of the previous N images but higher than the judgment threshold, if the gray-level averages of the L pictures acquired next in succession are all lower than the current gray-level average D, or the gray-level averages of the next L consecutively acquired pictures are all lower than the mean of the gray-level averages of the preceding L-1 pictures, or the gray-level average of each of the next L pictures is lower than that of the preceding picture, then the occluder can likewise be considered to have left. The number L can be computed from the sampling interval of the camera and the averaging time required per sample, and may be any natural number greater than 2, such as 3, 5, 8, 9 or 10, which is not repeated here.
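A minimal sketch of the secondary criterion described in the two preceding paragraphs follows; it is illustrative only, and the strictly monotone trend test is one possible reading of the rules above (it corresponds to the variant in which every picture is brighter, or darker, than the one before it).

#include <stdbool.h>

/*
 * Secondary test: the current gray-level average 'c' lies between the
 * reference mean and the judgment threshold, and each of the next 'm'
 * gray-level averages (recent[0..m-1], oldest first) continues the trend.
 * With rising = true this confirms occluder appearance; with rising = false
 * it covers the mirrored departure test with L pictures.
 */
static bool confirmed_by_trend(double c, const double *recent, int m, bool rising)
{
    double previous = c;
    for (int i = 0; i < m; i++) {
        if (rising ? (recent[i] <= previous) : (recent[i] >= previous))
            return false;       /* the monotone trend is broken */
        previous = recent[i];
    }
    return true;                /* m frames in a row continued the trend */
}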
In yet another embodiment of the present invention, the method by which the image-gray-level monitoring unit detects whether an occluding object is present within the capture range of the acquisition device may also be: obtain the gray-level average currently acquired by the sensing unit; when the acquired gray-level average is greater than a judgment threshold, judge that an occluding object has appeared; when the acquired gray-level average is less than or equal to the judgment threshold, judge that the occluding object has left. Compared with the methods described above, this method is simple, convenient and easy to implement, and it can be adopted when the required precision of judging whether an occluding object has appeared is relatively low.
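The fixed-threshold variant reduces to a single comparison; the sketch below assumes an illustrative threshold value on a 0-255 gray scale.

#include <stdbool.h>

/* Fixed-threshold variant: one judgment threshold for both entry and exit. */
/* The value 90.0 is only an assumed example, not a value from the patent.  */
static bool occluder_present(double current_gray_avg)
{
    const double judgment_threshold = 90.0;
    return current_gray_avg > judgment_threshold;
}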
The purpose of the probe unit 150 is to reduce system power consumption by ensuring that the collecting unit 110, the feature extraction unit 130 and the processing unit 140 are started only when an occluding object appears. Besides the embodiments described above, any other method capable of realizing this function may be used to implement the present invention; the related embodiments are common knowledge in this field and are not repeated here. The above embodiments are only used to explain and illustrate the present invention and shall not be construed as limiting the claims.
The biometric information to be acquired may be the facial features of a person, the palm-print features of a palm, palm blood-vessel features, iris features and so on. In the following description, the present invention is illustrated by taking the palm-print image of a palm as the biometric information to be collected.
In step 1, an image containing the biometric information of the individual to be identified is collected by the collecting unit 110. The biometric information may be information collected in a non-contact manner, such as a palm print, a palm blood-vessel image, a face image, an iris, the palm skeleton or lip prints, or information collected by contact, such as a fingerprint or finger blood vessels. In the present embodiment, the collecting unit 110 collects the image in a non-contact manner; specifically, the collecting unit 110 is a camera. Preferably, the collecting unit may include a light-emitting unit for emitting the infrared light that illuminates the detected object. Preferably, the camera is an infrared camera for obtaining an infrared map, containing the biological features to be identified, of what the probe unit detects.
Preferably, a detection step may also be included before step 1: the capture range of the collecting unit 110 is monitored by the probe unit, and when an occluding object is detected to have appeared within the detection range, a result containing a confirmation is output. Specifically, the probe unit may include a color camera that monitors the image appearing within the shooting range of the camera and, when an image change is detected, outputs a result containing a confirmation. The purpose of the probe unit is to filter out interference signals and to reduce system power consumption by ensuring that the collecting unit 110 is started only when an occluding object appears, thereby lowering the power consumption of the system.
Next, in step 2, the image obtained by the collecting unit 110 is shown by the display unit 120. Displaying the image obtained by the collecting unit 110 on the display unit 120 in real time is an ordinary technical means in this field and is not repeated here.
Next, in step 3, the method by which the feature extraction unit 130 identifies, from the picture obtained by the collecting unit 110, the characteristic area containing the biometric information of the individual to be identified includes: obtaining, by a recognition algorithm, N first feature coordinate points representing the characteristic area, the first feature coordinate points containing geometric information on the shape and positional relationship of the characteristic area. The method of obtaining the first feature coordinate points by the recognition algorithm is: collect M palm image samples and mark the true key-point positions on each palm image sample; according to the true key-point positions in the M palm image samples, obtain the initial estimated position corresponding to each true key-point position in each palm image sample; train each initial estimated position so that it approaches the corresponding true key-point position, obtaining a cascade regressor; according to the cascade regressor, locate the target key-point positions in the palm image to be identified and obtain the first feature coordinate points. Here N and M are integers greater than 2.
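To make the localization step concrete, the fragment below sketches how a trained cascade regressor of this kind is typically applied: the key-point estimates are refined stage by stage by adding the increment that each weak regressor predicts from the image. All types, the WeakRegressor callback and the key-point count of 9 are assumptions for illustration; the patent does not prescribe this exact structure.

#define NUM_KEYPOINTS 9      /* assumed count, matching one enumeration in the next paragraph */

typedef struct { double x, y; } Point;

/* One stage of the cascade: given the image and the current key-point      */
/* estimates, a weak regressor predicts an increment for every key point.   */
/* Its internals (pixel-difference features, learned coefficients) sit      */
/* behind this hypothetical callback.                                       */
typedef void (*WeakRegressor)(const unsigned char *image, int width, int height,
                              const Point current[NUM_KEYPOINTS],
                              Point increment[NUM_KEYPOINTS]);

/* Apply a trained cascade: start from the initial shape and add the        */
/* increment predicted by each stage, so the estimates approach the true    */
/* key-point positions.                                                     */
static void locate_keypoints(const unsigned char *image, int width, int height,
                             const Point initial[NUM_KEYPOINTS],
                             const WeakRegressor *stages, int num_stages,
                             Point result[NUM_KEYPOINTS])
{
    for (int k = 0; k < NUM_KEYPOINTS; k++)
        result[k] = initial[k];

    for (int s = 0; s < num_stages; s++) {
        Point increment[NUM_KEYPOINTS];
        stages[s](image, width, height, result, increment);
        for (int k = 0; k < NUM_KEYPOINTS; k++) {
            result[k].x += increment[k].x;
            result[k].y += increment[k].y;
        }
    }
}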
Specifically, the true key-point positions are, respectively, the concave point of the gap between the thumb and the index finger, the concave point of the gap between the index finger and the middle finger, the concave point of the gap between the middle finger and the ring finger, the concave point of the gap between the ring finger and the little finger, the concave point where the little finger meets the palm, the boundary point connected with the concave point of the gap between the index finger and the middle finger, and the boundary point connected with the concave point of the gap between the ring finger and the little finger. In other embodiments of the present invention, the true key-point positions are, respectively, the concave point of the gap between the thumb and the index finger, the concave point of the gap between the index finger and the middle finger, the concave point of the gap between the middle finger and the ring finger, the concave point of the gap between the ring finger and the little finger, the concave point where the little finger meets the palm, the boundary point connected with the concave point of the gap between the index finger and the middle finger, the boundary point connected with the concave point of the gap between the ring finger and the little finger, and the left and right end points of the main transverse palm line.
Specifically, in one embodiment of the present invention, the concrete steps of training each initial estimated position so that it approaches the corresponding true key-point position, and thereby obtaining a cascade regressor, include: taking in turn, as the target point, the initial estimated position corresponding to each true key-point position in each palm image sample, and obtaining a feature training point according to the target point; combining the initial estimated positions corresponding to all true key-point positions in one palm image sample to obtain P feature training points and recording the position information of these P feature training points; choosing two feature training points from the P feature training points according to a combination scheme and calculating the first difference of the pixel gray values of the two chosen feature training points, each first difference being one training feature, so that F training features are obtained; calculating the second difference between each true key-point position and the corresponding initial estimated position in each palm image sample and, taking the second differences as the training target, training the F training features of each palm image sample to obtain a weak regressor; according to the weak regressor, regressing the F training features to obtain the increment of each initial estimated position; adding the increment to the corresponding initial estimated position to obtain a new initial estimated position; judging whether the number of lower-level training iterations has reached A1: if not, returning to the step of choosing two feature training points from the P feature training points according to the combination scheme; if so, judging whether the number of upper-level training iterations has reached A2: if not, returning to the step of taking in turn, as the target point, the initial estimated position corresponding to each true key-point position in each palm image sample and obtaining a feature training point according to the target point; if so, finishing the training and obtaining the cascade regressor. Here P, F, A1 and A2 are all positive integers not less than 2.
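The distinctive ingredient of the training step above is the pixel-difference feature: each training feature is the difference between the gray values sampled at two of the P feature training points. A minimal sketch follows; the simple pair enumeration used as the combination scheme is an assumption, since the patent leaves the scheme open.

/* Gray value of the pixel nearest to (x, y); clamps to the image border. */
static unsigned char gray_at(const unsigned char *img, int width, int height,
                             double x, double y)
{
    int ix = (int)(x + 0.5), iy = (int)(y + 0.5);
    if (ix < 0) ix = 0; else if (ix >= width)  ix = width  - 1;
    if (iy < 0) iy = 0; else if (iy >= height) iy = height - 1;
    return img[iy * width + ix];
}

typedef struct { double x, y; } Point;

/*
 * Build pixel-difference training features from P feature training points:
 * each feature is the first difference of the gray values at two chosen
 * points.  The function returns the number F of features produced.
 */
static int build_difference_features(const unsigned char *img, int width, int height,
                                     const Point *points, int p,
                                     double *features, int max_features)
{
    int f = 0;
    for (int i = 0; i < p && f < max_features; i++)
        for (int j = i + 1; j < p && f < max_features; j++)
            features[f++] = (double)gray_at(img, width, height, points[i].x, points[i].y)
                          - (double)gray_at(img, width, height, points[j].x, points[j].y);
    return f;
}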
Specifically, in one embodiment of the present invention, the method of taking in turn, as the target point, the initial estimated position corresponding to each true key-point position in each palm image sample and obtaining a feature training point according to the target point includes: selecting in turn, from each palm image sample, the initial estimated position corresponding to one true key-point position as the target point; taking the target point as the center of a circle of radius r, and extracting one point within the circle to obtain one feature training point.
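Picking a feature training point inside a circle of radius r around the target point can be sketched as follows; the uniform random sampling is an assumption, since the patent only requires that a point inside the circle be extracted.

#include <math.h>
#include <stdlib.h>

typedef struct { double x, y; } Point;

/* Return a point drawn uniformly from the disc of radius r centered on the   */
/* target point.  rand() is used for brevity; any random source would do.     */
static Point sample_feature_training_point(Point target, double r)
{
    const double pi    = 3.14159265358979323846;
    double       u     = (double)rand() / RAND_MAX;              /* in [0, 1]            */
    double       angle = 2.0 * pi * ((double)rand() / RAND_MAX); /* direction            */
    double       rho   = r * sqrt(u);                            /* sqrt gives a uniform disc */

    Point p = { target.x + rho * cos(angle),
                target.y + rho * sin(angle) };
    return p;
}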
Next, in step 4, the method by which the processing unit 140 generates on the display unit 120, according to the characteristic area identified by the feature extraction unit 130, a guiding mark representing the characteristic area is: convert the N first feature coordinate points on the obtained image into the N corresponding second feature coordinate points on the screen according to a correspondence relationship, where the correspondence relationship is the proportional relationship between the capture range of the collecting unit and the size of the display unit, and N is an integer greater than 2.
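The coordinate conversion in step 4 is a simple proportional mapping; a minimal sketch is given below, with the image and screen dimensions passed in as parameters rather than taken from any particular device.

typedef struct { double x, y; } Point;

/*
 * Map a first feature coordinate point, given in image pixels, to the
 * corresponding second feature coordinate point on the screen.  The only
 * information needed is the proportional relationship between the capture
 * range of the collecting unit and the size of the display unit.
 */
static Point image_to_screen(Point image_pt,
                             double image_width,  double image_height,
                             double screen_width, double screen_height)
{
    Point screen_pt = {
        image_pt.x * (screen_width  / image_width),
        image_pt.y * (screen_height / image_height)
    };
    return screen_pt;
}

Applying this mapping to each of the N first feature coordinate points yields the N second feature coordinate points used below to generate the circular guiding mark.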
Specifically, the method by which the processing unit 140 generates on the display unit 120 a guiding mark representing the characteristic area identified by the feature extraction unit 130 includes: generating, according to the second feature coordinate points, a guiding mark representing the shape and position of the characteristic area. The shape of the guiding mark is any one of a polygon, a circle or an ellipse closest to the shape of the characteristic area. A circular guiding mark is taken as an example below.
In one embodiment of the present invention, four second feature coordinate points A (x1, y1), B (x2, y2), C (x3, y3) and D (x4, y4) are generated in the previous step; within the effective area of the display unit 120, these four coordinate points represent the shape and position of the characteristic area containing the biometric information to be collected. Next, a guiding mark is generated on the display unit 120 according to the generation algorithm for a circular guiding mark. The radius is calculated as follows: compute the lengths of the two X-direction edges and of the two Y-direction edges of the quadrilateral formed by the four second feature coordinate points, take the smaller length in each direction, and take the smaller of these two values as the radius of the guiding mark. The corresponding algorithm is as follows:
int w1 = 0, w2 = 0, h1 = 0, h2 = 0, radius = 0;
w1 = x2 - x1;                  /* length of the first X-direction edge (A to B)  */
w2 = x3 - x4;                  /* length of the second X-direction edge (D to C) */
h1 = y4 - y1;                  /* length of the first Y-direction edge (A to D)  */
h2 = y3 - y2;                  /* length of the second Y-direction edge (B to C) */
w1 = (w1 < w2) ? w1 : w2;      /* smaller X-direction length */
h1 = (h1 < h2) ? h1 : h2;      /* smaller Y-direction length */
radius = (w1 < h1) ? w1 : h1;  /* radius of the circular guiding mark */
The center of the circle is calculated as follows: the coordinates of the geometric center of the quadrilateral formed by the four second feature coordinate points are taken as the center of the guiding mark. The corresponding algorithm is as follows:
x = x1 + w1 / 2;   /* X coordinate of the center of the guiding mark */
y = y1 + h1 / 2;   /* Y coordinate of the center of the guiding mark */
Preferably, the biometric information acquisition method further includes: extracting geometric information describing the acquisition range of the collecting unit, and generating on the display unit, according to this geometric information, a reference mark representing the acquisition range. Preferably, the biometric information acquisition method further includes: calculating an error vector whose starting point is the geometric center of the guiding mark and whose end point is the geometric center of the reference mark, and displaying the length and direction of the error vector on the display unit. The function of the reference mark and its generation algorithm are described in the corresponding passages above and are not repeated here.
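A short sketch of the error-vector computation described above follows; expressing the direction as an angle in degrees from the positive X axis is an assumption made purely for display purposes.

#include <math.h>

typedef struct { double x, y; } Point;

typedef struct {
    double length;          /* distance between the two centers              */
    double angle_degrees;   /* direction, measured from the positive X axis  */
} ErrorVector;

/* Error vector from the center of the guiding mark to the center of the reference mark. */
static ErrorVector compute_error_vector(Point guide_center, Point reference_center)
{
    double dx = reference_center.x - guide_center.x;
    double dy = reference_center.y - guide_center.y;

    ErrorVector ev;
    ev.length        = sqrt(dx * dx + dy * dy);
    ev.angle_degrees = atan2(dy, dx) * 180.0 / 3.14159265358979323846;
    return ev;
}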
Fig. 2 shows one embodiment of the present invention. As shown in Fig. 2(a), the biometric information to be collected is the palm of the individual to be identified. It can be seen in the figure that the collection distance of the acquisition device is (L-ΔL, L+ΔL) and that the acquisition range of the acquisition device (the region marked by the corresponding reference mark on the display unit 120) is S. In the present embodiment, the range occupied by the palm (the region marked by the corresponding guiding mark on the display unit 120) is S1, which lies within S, and the distance between the palm and the acquisition device also lies within the collection distance. Therefore, as shown in Fig. 2(b), the collected image area is of moderate size and the position deviation is small, so the guiding mark lies exactly inside the reference mark.
Fig. 3 shows another embodiment of the present invention. As shown in Fig. 3(a), the individual to be identified in the figure is shorter than the individual to be identified in Fig. 2, so the height to which this individual naturally raises the arm is lower than that of the individual in Fig. 2, and the range S2 corresponding to the palm now lies partly outside the region corresponding to the acquisition range S. At the same time, because this individual stands relatively close to the acquisition device, the collected image area is relatively large, as shown in Fig. 3(b), and the position deviation is more apparent: the guiding mark now only partially overlaps the reference mark, and its area is markedly larger than that of the guiding mark in Fig. 2(b).
Fig. 4 shows another embodiment of the present invention. As shown in Fig. 4(a), the individual to be identified in the figure stands even closer than the individual to be identified in Fig. 3. Therefore, as shown in Fig. 4(b), the collected image area is larger still, and the area of the corresponding guiding mark increases substantially compared with the guiding mark in Fig. 3(b).
The acquisition device provided by the present invention can accurately extract, from the collected image, the characteristic area containing the biometric information to be collected, calculate geometric information on the position and shape of the characteristic area from the positional and shape relationships of that area, generate a guiding mark from this geometric information, and display it on the display unit. Through the relative position of the guiding mark and the reference mark on the display unit, the target containing the biometric information to be collected is guided into the set image acquisition area. The guiding mark changes with the picture obtained by the collecting unit and reflects the real-time position of the biological features to be collected. According to the principle of geometric imaging, as the target containing the biometric information to be collected moves away from the collecting device, the guiding mark gradually shrinks, and when the target moves beyond the detection range of the acquisition device the guiding mark disappears; correspondingly, as the target containing the biometric information to be collected moves closer to the collecting device, the guiding mark gradually grows, and when the guiding mark exceeds the frame of the image acquisition area a warning is shown on the display unit. The guiding mark also follows the up-and-down movement of the palm in the horizontal plane and shows it on the display unit. With this kind of display guidance, the target containing the biometric information to be collected can be positioned quickly, so that an optimal image is collected. The method provided by the present invention is simple and fast, provides directional guidance for the target containing the biometric information to be collected, is quick and easy to operate and understand, is simple and inexpensive to implement, and gives a good user experience.
It is obvious to a person skilled in the art that the present application is not limited to the details of the exemplary embodiments above and that it can be realized in other specific forms without departing from its spirit or essential features. The embodiments should therefore be regarded in every respect as exemplary and non-restrictive; the scope of the present application is defined by the appended claims rather than by the above description, and all changes falling within the meaning and scope of equivalency of the claims are intended to be embraced in the application. No reference sign in a claim shall be construed as limiting the claim concerned. Furthermore, the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. A plurality of units or devices stated in a system claim may also be realized by a single unit or device through software or hardware. Words such as "first" and "second" are used to denote names and do not indicate any particular order.

Claims (10)

1. A visualized biometric information acquisition device, comprising:
a collecting unit for collecting an image containing biometric information of an individual to be identified;
a display unit for displaying the image obtained by the collecting unit;
a feature extraction unit for identifying, from the image obtained by the collecting unit, a characteristic area containing the biometric information of the individual to be identified;
a processing unit for generating on the display unit, according to the characteristic area identified by the feature extraction unit, a guiding mark representing the characteristic area; wherein,
the guiding mark changes as the picture obtained by the collecting unit changes.
2. The device according to claim 1, characterised in that the collecting unit collects the image in a non-contact manner.
3. The device according to claim 2, characterised in that the collecting unit is a camera.
4. The device according to claim 1, characterised in that the biometric information is a palm print, a palm blood-vessel image, a face image, an iris, a palm skeleton or a lip print.
5. The device according to claim 1, characterised in that the feature extraction unit comprises:
a geometric information acquisition unit for obtaining, by a recognition algorithm, N first feature coordinate points representing the characteristic area, the first feature coordinate points containing geometric information on the shape and positional relationship of the characteristic area; wherein the method of obtaining the first feature coordinate points by the recognition algorithm is:
collecting M palm image samples and marking true key-point positions on each palm image sample;
according to the true key-point positions in the M palm image samples, obtaining an initial estimated position corresponding to each true key-point position in each palm image sample;
training each initial estimated position so that it approaches the corresponding true key-point position, to obtain a cascade regressor;
according to the cascade regressor, locating target key-point positions in the palm image to be identified, to obtain the first feature coordinate points;
wherein N and M are integers greater than 2.
6. The device according to claim 5, characterised in that the processing unit further comprises:
a geometric information processing unit for converting the N first feature coordinate points on the obtained image into N corresponding second feature coordinate points on the screen according to a correspondence relationship; wherein,
the correspondence relationship is the proportional relationship between the capture range of the collecting unit and the size of the display unit, and N is an integer greater than 2.
7. The device according to claim 6, characterised in that the processing unit further comprises:
a guiding mark forming unit for generating, according to the second feature coordinate points, a guiding mark representing the shape and position of the characteristic area.
8. The device according to claim 1 or 7, characterised in that the shape of the guiding mark is any one of a polygon, a circle or an ellipse closest to the shape of the characteristic area.
9. The device according to claim 1, characterised in that the biometric information acquisition device further comprises:
a reference mark generating unit for extracting geometric information describing the acquisition range of the collecting unit and generating on the display unit, according to the geometric information, a reference mark representing the acquisition range.
10. The device according to claim 9, characterised in that the biometric information acquisition device further comprises:
an error calculation unit for calculating an error vector whose starting point is the geometric center of the guiding mark and whose end point is the geometric center of the reference mark; and,
a prompt unit for displaying the length and direction of the error vector on the display unit.
CN201610375295.XA 2016-05-30 2016-05-30 A kind of visual biometric information acquisition device and method Active CN106056080B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610375295.XA CN106056080B (en) 2016-05-30 2016-05-30 A kind of visual biometric information acquisition device and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610375295.XA CN106056080B (en) 2016-05-30 2016-05-30 A kind of visual biometric information acquisition device and method

Publications (2)

Publication Number Publication Date
CN106056080A true CN106056080A (en) 2016-10-26
CN106056080B CN106056080B (en) 2019-11-22

Family

ID=57171602

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610375295.XA Active CN106056080B (en) 2016-05-30 2016-05-30 A kind of visual biometric information acquisition device and method

Country Status (1)

Country Link
CN (1) CN106056080B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107330429A (en) * 2017-05-17 2017-11-07 北京捷通华声科技股份有限公司 A kind of localization method and device of certificate entry
CN109063601A (en) * 2018-07-13 2018-12-21 北京科莱普云技术有限公司 Cheilogramma detection method, device, computer equipment and storage medium
CN112115792A (en) * 2020-08-14 2020-12-22 深圳大学 Access & exit information statistical system, method and computer equipment

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101986351A (en) * 2010-11-01 2011-03-16 深圳市中控生物识别技术有限公司 Networking type access control device and method based on biometric identification technology
CN102176244A (en) * 2011-02-17 2011-09-07 东方网力科技股份有限公司 Method and device for determining shielding condition of camera head
CN102542242A (en) * 2010-12-27 2012-07-04 北京北科慧识科技股份有限公司 Method and device for positioning biological characteristic region of image acquired in a non-contact way
CN103021138A (en) * 2012-12-28 2013-04-03 广州市浩云安防科技股份有限公司 Method for sensing shielding of video camera and device for implementing method
CN103824050A (en) * 2014-02-17 2014-05-28 北京旷视科技有限公司 Cascade regression-based face key point positioning method
CN104240235A (en) * 2014-08-26 2014-12-24 北京君正集成电路股份有限公司 Method and system for detecting whether camera is covered or not
US20150036893A1 (en) * 2013-07-30 2015-02-05 Fujitsu Limited Authentication device and method
CN104573615A (en) * 2013-10-24 2015-04-29 华为技术有限公司 Palm print acquisition method and device
US20150116086A1 (en) * 2013-10-30 2015-04-30 Samsung Electronics Co., Ltd. Electronic device and method of providing security using complex biometric information
CN104866751A (en) * 2015-04-11 2015-08-26 倪蔚民 Safe biometric feature recognition and image acquisition system for mobile intelligent device
CN104980759A (en) * 2014-04-09 2015-10-14 联发科技股份有限公司 Method for Detecting Occlusion Areas
CN105354531A (en) * 2015-09-22 2016-02-24 成都通甲优博科技有限责任公司 Marking method for facial key points
CN105404861A (en) * 2015-11-13 2016-03-16 中国科学院重庆绿色智能技术研究院 Training and detecting methods and systems for key human facial feature point detection model
CN105488874A (en) * 2015-11-20 2016-04-13 北京天诚盛业科技有限公司 Biological recognition method and apparatus based on multi-thread control
CN105611188A (en) * 2015-12-23 2016-05-25 北京奇虎科技有限公司 Method and device for detecting shielding of camera based on automatic exposure

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101986351A (en) * 2010-11-01 2011-03-16 深圳市中控生物识别技术有限公司 Networking type access control device and method based on biometric identification technology
CN102542242A (en) * 2010-12-27 2012-07-04 北京北科慧识科技股份有限公司 Method and device for positioning biological characteristic region of image acquired in a non-contact way
CN102176244A (en) * 2011-02-17 2011-09-07 东方网力科技股份有限公司 Method and device for determining shielding condition of camera head
CN103021138A (en) * 2012-12-28 2013-04-03 广州市浩云安防科技股份有限公司 Method for sensing shielding of video camera and device for implementing method
US20150036893A1 (en) * 2013-07-30 2015-02-05 Fujitsu Limited Authentication device and method
CN104573615A (en) * 2013-10-24 2015-04-29 华为技术有限公司 Palm print acquisition method and device
US20150116086A1 (en) * 2013-10-30 2015-04-30 Samsung Electronics Co., Ltd. Electronic device and method of providing security using complex biometric information
CN103824050A (en) * 2014-02-17 2014-05-28 北京旷视科技有限公司 Cascade regression-based face key point positioning method
CN104980759A (en) * 2014-04-09 2015-10-14 联发科技股份有限公司 Method for Detecting Occlusion Areas
CN104240235A (en) * 2014-08-26 2014-12-24 北京君正集成电路股份有限公司 Method and system for detecting whether camera is covered or not
CN104866751A (en) * 2015-04-11 2015-08-26 倪蔚民 Safe biometric feature recognition and image acquisition system for mobile intelligent device
CN105354531A (en) * 2015-09-22 2016-02-24 成都通甲优博科技有限责任公司 Marking method for facial key points
CN105404861A (en) * 2015-11-13 2016-03-16 中国科学院重庆绿色智能技术研究院 Training and detecting methods and systems for key human facial feature point detection model
CN105488874A (en) * 2015-11-20 2016-04-13 北京天诚盛业科技有限公司 Biological recognition method and apparatus based on multi-thread control
CN105611188A (en) * 2015-12-23 2016-05-25 北京奇虎科技有限公司 Method and device for detecting shielding of camera based on automatic exposure

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XUDONG CAO et al.: "Face Alignment by Explicit Shape Regression", 《INTERNATIONAL JOURNAL OF COMPUTER VISION》 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107330429A (en) * 2017-05-17 2017-11-07 北京捷通华声科技股份有限公司 A kind of localization method and device of certificate entry
CN109063601A (en) * 2018-07-13 2018-12-21 北京科莱普云技术有限公司 Cheilogramma detection method, device, computer equipment and storage medium
CN109063601B (en) * 2018-07-13 2020-12-22 广州莱德璞检测技术有限公司 Lip print detection method and device, computer equipment and storage medium
CN112115792A (en) * 2020-08-14 2020-12-22 深圳大学 Access & exit information statistical system, method and computer equipment
CN112115792B (en) * 2020-08-14 2024-02-06 深圳大学 Entrance and exit information statistics system, method and computer equipment

Also Published As

Publication number Publication date
CN106056080B (en) 2019-11-22

Similar Documents

Publication Publication Date Title
CN106780906B (en) A kind of testimony of a witness unification recognition methods and system based on depth convolutional neural networks
EP3308325B1 (en) Liveness detection method and device, and identity authentication method and device
AU2016214084B2 (en) Systems and methods for performing fingerprint based user authentication using imagery captured using mobile devices
US9361507B1 (en) Systems and methods for performing fingerprint based user authentication using imagery captured using mobile devices
US7881524B2 (en) Information processing apparatus and information processing method
TWI754806B (en) System and method for locating iris using deep learning
CN109101871A (en) A kind of living body detection device based on depth and Near Infrared Information, detection method and its application
CN106874894A (en) A kind of human body target detection method based on the full convolutional neural networks in region
CN109461003A (en) Plurality of human faces scene brush face payment risk preventing control method and equipment based on multi-angle of view
CN106407875B (en) Target&#39;s feature-extraction method and device
US8126215B2 (en) Registration and collation of a rolled finger blood vessel image
CN108124486A (en) Face living body detection method based on cloud, electronic device and program product
CN105844128A (en) Method and device for identity identification
CN103839040A (en) Gesture identification method and device based on depth images
CN102262727A (en) Method for monitoring face image quality at client acquisition terminal in real time
JP2002222424A (en) Fingerprint matching system
CN107741231A (en) A kind of multiple mobile object fast ranging method based on machine vision
CN107958217A (en) A kind of fingerprint classification identifying system and method based on deep learning
CN109766785A (en) A kind of biopsy method and device of face
CN111427031A (en) Identity and gesture recognition method based on radar signals
CN103902978A (en) Face detection and identification method
CN108629170A (en) Personal identification method and corresponding device, mobile terminal
CN109325408A (en) A kind of gesture judging method and storage medium
CN106056080A (en) Visualized biometric information acquisition device and acquisition method
CN106611158A (en) Method and equipment for obtaining human body 3D characteristic information

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 26 No. 523710 Guangdong province Dongguan city Tangxia town Pingshan Industrial Avenue 188

Applicant after: Central Intelligence Polytron Technologies Inc

Address before: 26 No. 523710 Guangdong province Dongguan city Tangxia town Pingshan Industrial Avenue 188

Applicant before: Dongguan ZK Electronic Technology Co., Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 523710, 26, 188 Industrial Road, Pingshan Town, Guangdong, Dongguan, Tangxia

Patentee after: Entropy Technology Co., Ltd

Address before: 523710, 26, 188 Industrial Road, Pingshan Town, Guangdong, Dongguan, Tangxia

Patentee before: Zhongkong Smart Technology Co.,Ltd.

CP01 Change in the name or title of a patent holder
CP02 Change in the address of a patent holder

Address after: No.32, Pingshan Industrial Road, Tangxia Town, Dongguan City, Guangdong Province, 523710

Patentee after: Entropy Technology Co., Ltd

Address before: 523710 26 Pingshan 188 Industrial Avenue, Tangxia Town, Dongguan, Guangdong

Patentee before: Entropy Technology Co., Ltd

CP02 Change in the address of a patent holder