CN106778578A - Water purifier method for identifying ID - Google Patents
- Publication number: CN106778578A (application CN201611112393.0A)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
Abstract
The invention discloses a water purifier user identification method. The system comprises a controller, a memory, an infrared temperature sensor, a first camera, and a second camera for infrared thermal imaging, all mounted on the water purifier; the controller is electrically connected to the infrared temperature sensor, the first camera, the second camera, the memory, and a server. When a user approaches the water purifier, the controller receives the human-presence signal detected by the infrared temperature sensor, and the first and second cameras capture images of the user. The memory holds a database containing a feature-point set and a key-point set for every user; the controller identifies and matches key points and feature points against this database and finally identifies the user. The invention offers a high recognition rate, wide applicability, and low cost, and improves the convenience of management.
Description
Technical field
The present invention relates to the field of intelligent identification technology, and in particular to a water purifier user identification method with a high recognition rate and wide applicability.
Background technology
Intelligent identification technology uses the processing power of a computer to analyze digitized video images in real time. With computer imaging techniques and intelligent algorithms, security incidents or potential threats can be detected automatically within a camera's field of view: the background of the video is separated from the foreground, moving objects are segmented out, and the moving objects are then monitored, tracked, and analyzed in real time according to a configured logic, with warning alarms issued in the fastest and most suitable manner.
Current intelligent identification methods mainly include speech recognition, fingerprint recognition, face recognition, iris recognition, and infrared thermal image recognition.
Existing intelligent identification technologies are widely used across industries, but each has drawbacks:
Face recognition is widely applied, but its recognition rate in everyday use is not high.
Fingerprint recognition is mature, but it is not suitable for everyone.
Iris recognition is the "favorite" among biometric techniques and ranks first in security, but its cost is too high for wide adoption.
Voiceprint recognition is inexpensive and easy to acquire, but its usage requirements are strict and its application scenarios are limited.
Summary of the invention
The object of the invention is to overcome the narrow scope of application, low recognition rate, and high cost of prior-art intelligent identification methods by providing a water purifier user identification method with a high recognition rate and wide applicability.
To achieve this object, the present invention adopts the following technical scheme:
A water purifier user identification method, involving a controller, a memory, an infrared temperature sensor, a first camera, and a second camera for infrared thermal imaging, all mounted on the water purifier; the controller is electrically connected to the infrared temperature sensor, the first camera, the second camera, the memory, and a server. The method comprises the following steps:
(1-1) When a user approaches the water purifier, the controller receives the human-presence signal detected by the infrared temperature sensor.
(1-2) The controller starts the first and second cameras, which capture images of the user.
(1-3) The memory holds a database containing the feature-point set and key-point set of every user. The controller extracts each feature point of the user from the image captured by the first camera, compares it with the feature-point sets of all users in the database, and selects the correctly matched feature points.
The controller likewise extracts each key point of the user from the image captured by the second camera, compares it with the key-point sets of all users in the database, and selects the correctly matched key points.
(1-4) The recognition rate γ1 is calculated as γ1 = n/(N×K), where n is the accumulated total of correctly matched feature points and key points, N is the set total of feature points and key points, and K is the number of features of each feature point and key point.
The controller finds the user name with the highest recognition rate in the database and passes it to the server; the server stores the current time, the recognition rate γ1, and the user name.
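Read as an algorithm, step (1-4) amounts to scoring every user and taking the one with the highest rate. A minimal Python sketch, assuming γ1 = n/(N×K) as reconstructed from the symbol definitions above; the function names, dictionary layout, and example counts (30 feature points plus 86 key points described below give N = 116) are illustrative assumptions, not from the patent:

```python
# Sketch of step (1-4): score every user in the database, pick the best.
# n = correctly matched feature points + key points for that user,
# N = set total of points, K = number of features per point (3 here).

def recognition_rate(n: int, N: int, K: int = 3) -> float:
    """gamma_1 = n / (N * K), reconstructed from the symbol definitions."""
    return n / (N * K)

def identify_user(match_counts: dict, N: int, K: int = 3):
    """Return (user_name, gamma_1) for the user with the highest rate."""
    rates = {user: recognition_rate(n, N, K) for user, n in match_counts.items()}
    best = max(rates, key=rates.get)
    return best, rates[best]

# Example: 30 feature points + 86 key points are set per user (N = 116).
counts = {"alice": 90, "bob": 40}
user, gamma1 = identify_user(counts, N=116)
```

The server would then store the current time, gamma1, and the returned user name.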
The present invention implements user identification on an intelligent water purifier: when a user approaches the water purifier, the human-presence signal detected by the infrared temperature sensor is received; the first and second cameras capture images of the user; the memory holds a database containing the feature-point set and key-point set of every registered user; and the controller identifies and matches key points and feature points to finally identify the user.
The invention combines the identification system with a water purifier. A month-long market survey of business users such as office employees and white-collar workers found that, in an average working day, the residence time at the doorway is about 2 minutes, 6 times a day; at the card machine about 2 minutes, 2 times; by the water purifier about 16 minutes, 8 times; and at the workstation about 6 hours, 10 times. The survey data show that, apart from time spent at the workstation, the number and duration of stays by the water purifier account for a comparatively high proportion of the day. Combining the intelligent identification system with the water purifier therefore effectively increases the recognition rate and reduces manual maintenance cost.
The invention improves the convenience of management, reduces manual maintenance cost, and improves user experience. It achieves a high recognition rate, reaching 99 percent recognition accuracy within its range of application. No deliberate identification operation, such as pressing a button, is required of the user; identification happens automatically whenever the user fetches water in the morning, making recognition more convenient. Compared with iris recognition, which also has a high recognition rate, the water purifier identification system is low in cost and wide in application, and is better suited to business settings such as company offices.
Preferably, the region covered by the key points is the user's face, bounded above by the hairline, below by the lowest point of the chin, and left and right by the ear edge points. It comprises 7 regions: the forehead region, left-eye region, right-eye region, nasal region, left-cheek region, right-cheek region, and nose-chin region. The key points in the left-eye and right-eye regions are chosen symmetrically, as are those in the left-cheek and right-cheek regions.
Preferably, the feature points are located in the facial triangle region, and there are 30 feature points.
Preferably, the controller extracts each feature point of the user from the image captured by the first camera, compares it with the feature-point sets of all users in the database, and selects the correctly matched feature points through the following steps:
(4-1) For the image I(x, y) captured by the first camera, compute for each pixel (i, j) the neighborhood convolutions
G(i) = |[f(i-1, j-1) + f(i-1, j) + f(i-1, j+1)] - [f(i+1, j-1) + f(i+1, j) + f(i+1, j+1)]| and
G(j) = |[f(i-1, j+1) + f(i, j+1) + f(i+1, j+1)] - [f(i-1, j-1) + f(i, j-1) + f(i+1, j-1)]|,
set P(i, j) = max[G(i), G(j)], and select P(i, j) as the image edge points.
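Step (4-1) is in effect a 3×3 Prewitt-style gradient test. A minimal sketch, assuming plain nested lists for the gray image and an interior pixel; the function name and example image are illustrative:

```python
# Sketch of step (4-1): neighborhood convolutions G(i), G(j),
# and the edge response P(i, j) = max(G(i), G(j)).
def edge_response(f, i, j):
    """f is a 2-D list of gray values; (i, j) must be an interior pixel."""
    g_i = abs((f[i-1][j-1] + f[i-1][j] + f[i-1][j+1])
              - (f[i+1][j-1] + f[i+1][j] + f[i+1][j+1]))
    g_j = abs((f[i-1][j+1] + f[i][j+1] + f[i+1][j+1])
              - (f[i-1][j-1] + f[i][j-1] + f[i+1][j-1]))
    return max(g_i, g_j)

# A vertical step edge: columns 0-1 dark, columns 2-3 bright.
img = [[0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 9, 9]]
p = edge_response(img, 1, 1)   # strong response next to the step
```

In the patent, pixels whose P(i, j) is selected in this way are treated as image edge points.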
(4-2) For the image I(x, y) captured by the first camera, build the scale-space image L(x, y, σ) = g(x, y, σ) × I(x, y), where g(x, y, σ) = (1/(2πσ²))·e^(-(x²+y²)/(2σ²)) is the scale-variable Gaussian function, (x, y) are the spatial coordinates, and σ is the image smoothness (scale).
(4-3) Compute the difference-of-Gaussian scale space
D(x, y, σ) = (g(x, y, kσ) - g(x, y, σ)) × I(x, y) = L(x, y, kσ) - L(x, y, σ),
where k is the constant multiplicative factor between adjacent scales.
For the image I(x, y), build in turn s sub-octave image layers whose length and width are successively halved; the first sub-octave layer is the original image.
(4-4) Compare the D(x, y, σ) value of each pixel with the D(x, y, σ) values of its neighboring pixels; if the pixel's D(x, y, σ) is a maximum or a minimum within its neighborhood in its own layer and in the layers above and below, take that pixel as a feature point.
(4-5) Obtain the DoG map formed by the selected feature points and apply low-pass filtering to it; remove every point of the DoG map other than the edge points to obtain a two-dimensional point map.
(4-6) Compute the modulus m(x, y) and angle θ(x, y) of each feature point:
m(x, y) = √[(L(x+1, y) - L(x-1, y))² + (L(x, y+1) - L(x, y-1))²] and
θ(x, y) = arctan((L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y))),
where L is taken at the scale of the feature point. Set the scale of each feature point to the index of the sub-octave layer in which it lies. The modulus, angle, and scale of each feature point are taken as its feature 1, feature 2, and feature 3.
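The modulus and angle of step (4-6) are central differences of the smoothed image. A small sketch; the row-major `L[y][x]` indexing and the use of the quadrant-aware `atan2` are my choices, where the patent writes a plain arctan:

```python
# Sketch of step (4-6): modulus m(x, y) and angle theta(x, y) of a point
# from central differences of the smoothed image L at the point's scale.
import math

def modulus_and_angle(L, x, y):
    """L is a 2-D list (one scale layer), indexed L[y][x]; (x, y) interior."""
    dx = L[y][x + 1] - L[y][x - 1]
    dy = L[y + 1][x] - L[y - 1][x]
    m = math.sqrt(dx * dx + dy * dy)
    theta = math.atan2(dy, dx)   # arctan(dy / dx), defined even when dx == 0
    return m, theta

# A plane ramp L(x, y) = 3x + 4y has central differences dx = 6, dy = 8.
L = [[3 * x + 4 * y for x in range(5)] for y in range(5)]
m, th = modulus_and_angle(L, 2, 2)
```

Together with the sub-octave layer index (the scale), these give the point's feature 1, feature 2, and feature 3.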
(4-7) Compare the 3 features of each feature point A1 with the 3 features of every feature point of every feature-point set in the database, and find in the feature-point sets the feature point B1 closest to A1 and the second-closest feature point C1.
Let a11 be the difference in feature 1 between A1 and B1, and b11 the difference in feature 1 between A1 and C1.
Let a12 be the difference in feature 2 between A1 and B1, and b12 the difference in feature 2 between A1 and C1.
Let a13 be the difference in feature 3 between A1 and B1, and b13 the difference in feature 3 between A1 and C1.
When a11/b11 ≤ ratio and a12/b12 ≤ ratio and a13/b13 ≤ ratio, where ratio is the set ratio threshold, select feature point B1 as a correct match.
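Step (4-7) is a nearest/second-nearest ratio test applied per feature. A sketch under assumed data shapes, where a point is a 3-tuple of (modulus, angle, scale) and the candidate naming is illustrative:

```python
# Sketch of step (4-7): accept the closest database point B only when, for
# every feature, its difference to A is at most `ratio` times the
# difference between A and the second-closest point C.

def match_point(a, candidates, ratio=0.45):
    """a: (f1, f2, f3) of the query point; candidates: [(name, (f1, f2, f3)), ...].
    Returns the closest candidate's name if the per-feature ratio test
    passes, else None. Needs at least two candidates."""
    def dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    ranked = sorted(candidates, key=lambda c: dist(a, c[1]))
    (name_b, b), (_, c) = ranked[0], ranked[1]
    if all(abs(ai - bi) <= ratio * abs(ai - ci)
           for ai, bi, ci in zip(a, b, c)):
        return name_b
    return None

# Unambiguous match: B1 is far closer than C1 on every feature.
best = match_point((1.0, 0.5, 2.0),
                   [("b1", (1.02, 0.52, 2.0)), ("c1", (2.0, 1.5, 3.0))])
```

When the closest and second-closest points are nearly equidistant the test rejects the match, which is what makes the ratio threshold (0.4 to 0.5 in the preferred embodiment) filter out ambiguous correspondences.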
Preferably, the controller extracts each key point of the user from the image captured by the second camera, compares it with the key-point sets of all users in the database, and selects the correctly matched key points through the following steps:
(5-1) Let f(i, j) be the gray value at point (i, j) of the image captured by the second camera. Take an N′ × N′ window in the image centered on point (i, j) and let A′ be the set of pixels in the window. Filter the image with g(i, j) = Med{f(s, t) | (s, t) ∈ A′} to obtain the denoised image g(i, j).
(5-2) That is, slide the N′ × N′ window over the image, arrange the gray values of all pixels in the window in ascending order, and take the middle-ranked gray value as the gray value of the window's center pixel.
(5-3) Perform edge detection on the image f(x, y) to obtain the edge points h(x, y).
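Steps (5-1) and (5-2) together describe an N′ × N′ median filter. A minimal sketch; border handling is not specified in the patent, so borders are left unchanged here:

```python
# Sketch of steps (5-1)/(5-2): N' x N' median filtering of the gray image.
def median_filter(f, n=3):
    """f: 2-D list of gray values; n: odd window size N'.
    Each interior pixel is replaced by the middle-ranked gray value of
    its n x n window; border pixels are copied unchanged."""
    r = n // 2
    h, w = len(f), len(f[0])
    g = [row[:] for row in f]
    for i in range(r, h - r):
        for j in range(r, w - r):
            window = sorted(f[i + di][j + dj]
                            for di in range(-r, r + 1)
                            for dj in range(-r, r + 1))
            g[i][j] = window[len(window) // 2]   # middle-ranked gray value
    return g

# A single noisy pixel in a flat region is removed:
noisy = [[5, 5, 5],
         [5, 99, 5],
         [5, 5, 5]]
clean = median_filter(noisy)
```

Median filtering is a natural choice before edge detection on the thermal image, since it suppresses impulse noise without blurring edges.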
(5-4) For the image f(x, y) captured by the second camera, build the scale-space image L′(x, y, σ) = g(x, y, σ) × f(x, y), where g(x, y, σ) = (1/(2πσ²))·e^(-(x²+y²)/(2σ²)) is the scale-variable Gaussian function, (x, y) are the spatial coordinates, and σ is the image smoothness (scale).
(5-5) Compute the difference-of-Gaussian scale space
D′(x, y, σ) = (g(x, y, kσ) - g(x, y, σ)) × f(x, y) = L′(x, y, kσ) - L′(x, y, σ).
For the image f(x, y), build in turn s sub-octave image layers whose length and width are successively halved; the first sub-octave layer is the original image.
(5-6) Compare the D′(x, y, σ) value of each pixel with the D′(x, y, σ) values of its neighboring pixels; if the pixel's D′(x, y, σ) is a maximum or a minimum within its neighborhood in its own layer and in the layers above and below, take that pixel as a key point.
(5-7) Obtain the DoG map formed by the selected key points and apply low-pass filtering to it; remove every point of the DoG map other than the edge points to obtain a two-dimensional point map.
(5-8) Compute the modulus m(x, y) and angle θ(x, y) of each key point:
m(x, y) = √[(L′(x+1, y) - L′(x-1, y))² + (L′(x, y+1) - L′(x, y-1))²] and
θ(x, y) = arctan((L′(x, y+1) - L′(x, y-1)) / (L′(x+1, y) - L′(x-1, y))),
where L′ is taken at the scale of the key point. The modulus, angle, and scale of each key point are taken as its feature 1, feature 2, and feature 3.
(5-9) Compare the 3 features of each key point A2 with the 3 features of every key point of every key-point set in the database, and find in the key-point sets the key point B2 closest to A2 and the second-closest key point C2.
Let a21 be the difference in feature 1 between A2 and B2, and b21 the difference in feature 1 between A2 and C2.
Let a22 be the difference in feature 2 between A2 and B2, and b22 the difference in feature 2 between A2 and C2.
Let a23 be the difference in feature 3 between A2 and B2, and b23 the difference in feature 3 between A2 and C2.
When a21/b21 ≤ ratio and a22/b22 ≤ ratio and a23/b23 ≤ ratio, where ratio is the set ratio threshold, select key point B2 as a correct match.
Preferably, the method further comprises the following step: a standard recognition rate γ is set in the server; if, over a period of time, the recognition rate of a user falls continuously and γ1 < γ for 30 consecutive identifications, the corresponding feature points and key points in the database are replaced with the feature points and key points that failed every comparison in the most recent identification.
Preferably, the ratio threshold is 0.4 to 0.5.
Therefore, the present invention has the following beneficial effects: a high recognition rate, wide applicability, and low cost, and it improves the convenience of management.
Brief description of the drawings
Fig. 1 is a flow chart of the invention;
Fig. 2 is a key-point set diagram of the invention;
Fig. 3 is a feature-point set diagram of the invention.
Specific embodiment
The present invention will be further described with reference to the accompanying drawings and detailed description.
The embodiment shown in Fig. 1 is a water purifier user identification method involving a controller, a memory, an infrared temperature sensor, a first camera, and a second camera for infrared thermal imaging, all mounted on the water purifier; the controller is electrically connected to the infrared temperature sensor, the first camera, the second camera, the memory, and a server. The method comprises the following steps:
Step 100, human detection
When a user approaches the water purifier, the controller receives the human-presence signal detected by the infrared temperature sensor.
Step 200, image acquisition
The controller starts the first and second cameras, which capture images of the user.
Step 300, feature point and key point identification and matching
The controller extracts each feature point of the user from the image captured by the first camera, compares it with the feature-point sets of all users in the database, and selects the correctly matched feature points.
The specific steps are as follows:
Step 310: the controller extracts each feature point of the user from the image captured by the first camera.
Step 311: for the image I(x, y) captured by the first camera, compute for each pixel (i, j) the neighborhood convolutions
G(i) = |[f(i-1, j-1) + f(i-1, j) + f(i-1, j+1)] - [f(i+1, j-1) + f(i+1, j) + f(i+1, j+1)]| and
G(j) = |[f(i-1, j+1) + f(i, j+1) + f(i+1, j+1)] - [f(i-1, j-1) + f(i, j-1) + f(i+1, j-1)]|,
set P(i, j) = max[G(i), G(j)], and select P(i, j) as the image edge points.
Step 312: for the image I(x, y) captured by the first camera, build the scale-space image L(x, y, σ) = g(x, y, σ) × I(x, y), where g(x, y, σ) = (1/(2πσ²))·e^(-(x²+y²)/(2σ²)) is the scale-variable Gaussian function, (x, y) are the spatial coordinates, and σ is the image smoothness (scale).
Step 313: compute the difference-of-Gaussian scale space
D(x, y, σ) = (g(x, y, kσ) - g(x, y, σ)) × I(x, y) = L(x, y, kσ) - L(x, y, σ),
where k is the constant multiplicative factor between adjacent scales.
For the image I(x, y), build in turn s sub-octave image layers whose length and width are successively halved; the first sub-octave layer is the original image.
Step 314: compare the D(x, y, σ) value of each pixel with the D(x, y, σ) values of its neighboring pixels; if the pixel's D(x, y, σ) is a maximum or a minimum within its neighborhood in its own layer and in the layers above and below, take that pixel as a feature point.
Step 315: obtain the DoG map formed by the selected feature points and apply low-pass filtering to it; remove every point of the DoG map other than the edge points to obtain a two-dimensional point map.
Step 316: compute the modulus m(x, y) and angle θ(x, y) of each feature point:
m(x, y) = √[(L(x+1, y) - L(x-1, y))² + (L(x, y+1) - L(x, y-1))²] and
θ(x, y) = arctan((L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y))),
where L is taken at the scale of the feature point. Set the scale of each feature point to the index of the sub-octave layer in which it lies. The modulus, angle, and scale of each feature point are taken as its feature 1, feature 2, and feature 3.
Step 317: compare the 3 features of each feature point A1 with the 3 features of every feature point of every feature-point set in the database, and find in the feature-point sets the feature point B1 closest to A1 and the second-closest feature point C1.
Let a11 be the difference in feature 1 between A1 and B1, and b11 the difference in feature 1 between A1 and C1.
Let a12 be the difference in feature 2 between A1 and B1, and b12 the difference in feature 2 between A1 and C1.
Let a13 be the difference in feature 3 between A1 and B1, and b13 the difference in feature 3 between A1 and C1.
When a11/b11 ≤ ratio and a12/b12 ≤ ratio and a13/b13 ≤ ratio, where ratio is the set ratio threshold, select feature point B1 as a correct match.
Step 320, key point identification and matching
The controller extracts each key point of the user from the image captured by the second camera, compares it with the key-point sets of all users in the database, and selects the correctly matched key points.
The specific steps are as follows:
Step 321: let f(i, j) be the gray value at point (i, j) of the image captured by the second camera; take an N′ × N′ window in the image centered on point (i, j), let A′ be the set of pixels in the window, and filter the image with g(i, j) = Med{f(s, t) | (s, t) ∈ A′} to obtain the denoised image g(i, j).
Step 322: slide the N′ × N′ window over the image, arrange the gray values of all pixels in the window in ascending order, and take the middle-ranked gray value as the gray value of the window's center pixel.
Step 323: perform edge detection on the image f(x, y) to obtain the edge points h(x, y).
Step 324: for the image f(x, y) captured by the second camera, build the scale-space image L′(x, y, σ) = g(x, y, σ) × f(x, y), where g(x, y, σ) = (1/(2πσ²))·e^(-(x²+y²)/(2σ²)) is the scale-variable Gaussian function, (x, y) are the spatial coordinates, and σ is the image smoothness (scale).
Step 325: compute the difference-of-Gaussian scale space
D′(x, y, σ) = (g(x, y, kσ) - g(x, y, σ)) × f(x, y) = L′(x, y, kσ) - L′(x, y, σ).
For the image f(x, y), build in turn s sub-octave image layers whose length and width are successively halved; the first sub-octave layer is the original image.
Step 326: compare the D′(x, y, σ) value of each pixel with the D′(x, y, σ) values of its neighboring pixels; if the pixel's D′(x, y, σ) is a maximum or a minimum within its neighborhood in its own layer and in the layers above and below, take that pixel as a key point.
Step 327: obtain the DoG map formed by the selected key points and apply low-pass filtering to it; remove every point of the DoG map other than the edge points to obtain a two-dimensional point map.
Step 328: compute the modulus m(x, y) and angle θ(x, y) of each key point:
m(x, y) = √[(L′(x+1, y) - L′(x-1, y))² + (L′(x, y+1) - L′(x, y-1))²] and
θ(x, y) = arctan((L′(x, y+1) - L′(x, y-1)) / (L′(x+1, y) - L′(x-1, y))),
where L′ is taken at the scale of the key point. The modulus, angle, and scale of each key point are taken as its feature 1, feature 2, and feature 3.
Step 329: compare the 3 features of each key point A2 with the 3 features of every key point of every key-point set in the database, and find in the key-point sets the key point B2 closest to A2 and the second-closest key point C2.
Let a21 be the difference in feature 1 between A2 and B2, and b21 the difference in feature 1 between A2 and C2.
Let a22 be the difference in feature 2 between A2 and B2, and b22 the difference in feature 2 between A2 and C2.
Let a23 be the difference in feature 3 between A2 and B2, and b23 the difference in feature 3 between A2 and C2.
When a21/b21 ≤ ratio and a22/b22 ≤ ratio and a23/b23 ≤ ratio, where ratio is the set ratio threshold, select key point B2 as a correct match.
Step 400, user identification
The recognition rate γ1 is calculated as γ1 = n/(N×K), where n is the accumulated total of correctly matched feature points and key points, N is the set total of feature points and key points, and K is 3.
The controller finds the user name with the highest recognition rate in the database and passes it to the server; the server stores the current time, the recognition rate γ1, and the user name.
As shown in Fig. 2, the region covered by the face key points is the user's face, bounded above by the hairline, below by the lowest point of the chin, and left and right by the ear edge points; these four directions define the maximum face frame, within which data sampling is carried out. Outside the feature-point region, 86 key points are taken: the forehead part between the hairline and the feature region takes 22 key points (5 points on the upper hairline, 5 points on the upper eyebrow margin, 5 points each on the left and right hairlines, and 2 points in the middle of the forehead referenced to the cross); the left cheek part takes 26 key points (16 points on the ear boundary, 8 points on the side-face limit, and 2 points in the middle of the cheek below the upper canthus); the right cheek part takes 26 key points (16 points on the ear boundary, 8 points on the side-face limit, and 2 points in the middle of the cheek below the upper canthus); and the chin part takes 12 key points (6 points on the chin border, 2 points on the lip border, and the chengjiang point with 3 surrounding points).
As shown in Fig. 3, the face feature points are located in the facial triangle formed by the two eyebrow midpoints and the chengjiang point, and in the region from the two shoulder borders to the face border: 16 feature points for the eyes, 4 for the mouth, 4 for the nose, 4 for the forehead, and 2 for the face-shoulder region, 30 points in total.
A standard recognition rate γ is set in the server; if, within 6 days, the recognition rate of a user falls continuously and γ1 < γ for 30 consecutive identifications, the corresponding feature points and key points in the database are replaced with the feature points and key points that failed every comparison in the most recent identification. The ratio threshold is 0.4.
It should be understood that this embodiment is only illustrative of the invention and is not intended to limit its scope. Furthermore, it is to be understood that, after reading the teachings of the present invention, those skilled in the art can make various changes or modifications to the invention, and such equivalent forms likewise fall within the scope defined by the claims appended to this application.
Claims (7)
1. A water purifier user identification method, characterized by involving a controller, a memory, an infrared temperature sensor, a first camera, and a second camera for infrared thermal imaging, all mounted on the water purifier; the controller is electrically connected to the infrared temperature sensor, the first camera, the second camera, the memory, and a server; the method comprises the following steps:
(1-1) when a user approaches the water purifier, the controller receives the human-presence signal detected by the infrared temperature sensor;
(1-2) the controller starts the first and second cameras, which capture images of the user;
(1-3) the memory holds a database containing the feature-point set and key-point set of every user; the controller extracts each feature point of the user from the image captured by the first camera, compares it with the feature-point sets of all users in the database, and selects the correctly matched feature points;
the controller extracts each key point of the user from the image captured by the second camera, compares it with the key-point sets of all users in the database, and selects the correctly matched key points;
(1-4) the recognition rate γ1 is calculated as γ1 = n/(N×K), where n is the accumulated total of correctly matched feature points and key points, N is the set total of feature points and key points, and K is the number of features of each feature point and key point;
the controller finds the user name with the highest recognition rate in the database and passes it to the server; the server stores the current time, the recognition rate γ1, and the user name.
2. The water purifier user identification method according to claim 1, characterized in that the region covered by the key points is the user's face, bounded above by the hairline, below by the lowest point of the chin, and left and right by the ear edge points; it comprises 7 regions: the forehead region, left-eye region, right-eye region, nasal region, left-cheek region, right-cheek region, and nose-chin region; the key points in the left-eye and right-eye regions are chosen symmetrically, as are those in the left-cheek and right-cheek regions.
3. The water purifier user identification method according to claim 1, characterized in that each feature point is located in the facial triangle region, and there are 30 feature points.
4. water purifier method for identifying ID according to claim 1, it is characterized in that, the controller is from the first shooting
Each characteristic point of user is obtained in the image that machine shoots, by the spy of all users in each characteristic point of user and database
Levy point set to compare, the characteristic point of selected correct matching comprises the following steps:
(4-1) for image I (x, y) that the first video camera shoots, using formula G (i)=| [f (i-1, j-1)+f (i-1, j)+f
(i-1, j+1)]-[f (i+1, j-1)+f (i+1, j)+f (i+1, j+1)] | and
G (j)=| [f (i-1, j+1)+f (i, j+1)+f (i+1, j+1)]-[f (i-1, j-1)+f (i, j-1)+f (i+1, j-1)] |
Calculate in image I (x, y) each pixel (I, neighborhood convolution G (i) j), G (j),
Setting P (i, j)=max [G (i), G (j)], it is image border point to select P (i, j);
(4-2) for image I (x, y) that the first video camera shoots, using formula L (x, y, σ)=g (x, y, σ) × I (x, y) structure
Scale space images L (x, y, σ) is built, g (x, y, σ) is yardstick Gauss variable function,
(x, y) is space coordinates, and σ is Image Smoothness;
(4-3) utilizes formula
D (x, y, σ)=(g (x, y, k σ)-g (x, y, σ)) × I (x, y)=L (x, y, k σ)-L (x, y, σ) calculates difference of Gaussian chi
Degree space D (x, y, σ);K is the constant of adjacent metric space multiple;
For each pixel in image I (x, y), the sub- octave image that s layers length and width halve respectively is set up successively, wherein, the
One straton octave image is artwork;
Be compared for the D (x, y, σ) of D (x, y, σ) pixel adjacent thereto of each pixel by (4-4), if the pixel
When the D (x, y, σ) of point is maximum or minimum value in this layer and bilevel every field, takes the pixel and be characterized
Point;
(4-5) obtains the dog figures being made up of each selected characteristic point, and LPF is carried out to dog figures;Side in removal dog figures
Each point outside edge point, obtains two-dimentional point diagram;
(4-6) utilizes formula
With θ (x, y)=arctan ((L
(x, y+1)- L (x, y-1))/(L (x+1, y)-L (x-1, y))) calculates modulus value m (x, y) and angle, θ (x, y) of each characteristic point,
Set the number of plies of sub- octave image of the yardstick of each characteristic point as where it;Set modulus value, angle and the chi of each characteristic point
Degree is characterized feature 1 a little, feature 2 and feature 3;L (x+1, y) characteristic point (x+1, yardstick y);
(4-7) Compare the 3 features of each feature point A1 with the 3 features of every feature point of all feature-point sets in the database, and find in the feature-point set the feature point B1 closest to A1 and the second-closest feature point C1;
Let a11 be the difference between feature 1 of A1 and B1, and b11 the difference between feature 1 of A1 and C1;
Let a12 be the difference between feature 2 of A1 and B1, and b12 the difference between feature 2 of A1 and C1;
Let a13 be the difference between feature 3 of A1 and B1, and b13 the difference between feature 3 of A1 and C1;
When a11/b11 < Ratio and a12/b12 < Ratio and a13/b13 < Ratio, where Ratio is the set ratio threshold,
select feature point B1 as a correct match point.
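The nearest / second-nearest ratio test of step (4-7) can be sketched as follows. The per-feature comparison (a11/b11, a12/b12, a13/b13 each below Ratio) follows the claim; ranking candidates by summed difference and the default ratio of 0.45 (within the 0.4 to 0.5 range of claim 7) are assumptions:

```python
import numpy as np

def match_feature(a, candidates, ratio=0.45):
    """Ratio test: find the closest (B1) and second-closest (C1)
    stored feature vectors to `a` = (modulus, angle, scale); accept B1
    only when each per-feature difference to B1 is below `ratio` times
    the corresponding difference to C1. Returns the index of the
    matched candidate, or None when the match is ambiguous."""
    diffs = np.abs(np.asarray(candidates, float) - np.asarray(a, float))
    order = np.argsort(diffs.sum(axis=1))  # rank by summed difference
    b, c = order[0], order[1]              # closest and second closest
    if np.all(diffs[b] < ratio * diffs[c]):
        return int(b)
    return None
```

Rejecting matches whose closest and second-closest distances are similar suppresses false matches against near-duplicate database entries.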
5. The water purifier user identification method according to claim 2, characterized in that the controller obtains each key point of the user from the image captured by the second camera and compares each key point of the user with the key-point sets of all users in the database, the selection of correctly matched key points comprising the following steps:
(5-1) Let f(i, j) be the grey value at point (i, j) in the image captured by the second camera; take an N′ × N′ window centred on point (i, j) in the image and let A′ be the set of pixels in the window; filter using the formula g(i, j) = Med{f(s, t) | (s, t) ∈ A′} to obtain the denoised image g(i, j);
(5-2) Slide the N′ × N′ window over the image, arrange the grey values of all pixels in the window in ascending order, and take the middle value of the ordered sequence as the grey value of the window-centre pixel;
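Steps (5-1) and (5-2) describe a sliding-window median filter; a minimal sketch, assuming a 3 × 3 window (N′ = 3) and leaving border pixels unchanged:

```python
import numpy as np

def median_filter(f, n=3):
    """Slide an n x n window over the image, sort the grey values in
    the window ascending, and write the middle value to the window
    centre. Border pixels are left unchanged in this sketch."""
    f = np.asarray(f, dtype=float)
    g = f.copy()
    r = n // 2
    for i in range(r, f.shape[0] - r):
        for j in range(r, f.shape[1] - r):
            window = np.sort(f[i - r:i + r + 1, j - r:j + r + 1].ravel())
            g[i, j] = window[window.size // 2]
    return g
```

A median filter of this kind removes isolated impulse noise while preserving edges better than a mean filter.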
(5-3) Perform edge detection on image f(x, y), obtaining the edge points h(x, y);
(5-4) For the image f(x, y) captured by the second camera, construct the scale-space image L′(x, y, σ) using the formula L′(x, y, σ) = g(x, y, σ) × f(x, y), where g(x, y, σ) is the scale-variable Gaussian function, (x, y) are the spatial coordinates, and σ is the image smoothness (scale) parameter;
(5-5) Compute the difference-of-Gaussian scale space D′(x, y, σ) using the formula
D′(x, y, σ) = (g(x, y, kσ) - g(x, y, σ)) × f(x, y) = L′(x, y, kσ) - L′(x, y, σ);
For each pixel in image f(x, y), successively build s octave layers whose length and width are halved from layer to layer, the first octave layer being the original image;
(5-6) Compare the D′(x, y, σ) of each pixel with the D′(x, y, σ) of its adjacent pixels; when a pixel's D′(x, y, σ) is the maximum or minimum within its neighbourhood on its own layer and on the layers directly above and below, take that pixel as a key point;
(5-7) Obtain the DoG map composed of the selected key points and apply low-pass filtering to it; remove every point of the DoG map that is not an edge point, obtaining a two-dimensional point map;
(5-8) Compute the modulus m(x, y) and angle θ(x, y) of each key point using the formulas
m(x, y) = sqrt((L′(x+1, y) - L′(x-1, y))^2 + (L′(x, y+1) - L′(x, y-1))^2) and
θ(x, y) = arctan((L′(x, y+1) - L′(x, y-1)) / (L′(x+1, y) - L′(x-1, y))),
where L′(x+1, y) is the scale-space value of the key point at (x+1, y);
take the modulus, angle and scale of each key point as its feature 1, feature 2 and feature 3;
(5-9) Compare the 3 features of each key point A2 with the 3 features of every key point of all key-point sets in the database, and find in the key-point set the key point B2 closest to A2 and the second-closest key point C2;
Let a21 be the difference between feature 1 of A2 and B2, and b21 the difference between feature 1 of A2 and C2;
Let a22 be the difference between feature 2 of A2 and B2, and b22 the difference between feature 2 of A2 and C2;
Let a23 be the difference between feature 3 of A2 and B2, and b23 the difference between feature 3 of A2 and C2;
When a21/b21 < Ratio and a22/b22 < Ratio and a23/b23 < Ratio, where Ratio is the set ratio threshold,
select key point B2 as a correct match point.
6. The water purifier user identification method according to claim 1, 2, 3 or 4, characterized by further comprising the following step: a standard recognition rate γ is set in the server; if, over a period of time, the recognition rate of a certain user declines continuously and γ1 < γ for 30 consecutive recognitions, the corresponding feature points and key points in the database are replaced with the feature points and key points that failed all comparisons in the most recent recognition.
7. The water purifier user identification method according to claim 4 or 5, characterized in that the Ratio is 0.4 to 0.5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611112393.0A CN106778578A (en) | 2016-12-06 | 2016-12-06 | Water purifier method for identifying ID |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106778578A true CN106778578A (en) | 2017-05-31 |
Family
ID=58879238
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611112393.0A Pending CN106778578A (en) | 2016-12-06 | 2016-12-06 | Water purifier method for identifying ID |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106778578A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117079377A (en) * | 2023-06-07 | 2023-11-17 | 南通新旋利机械科技有限公司 | Method and system for improving induction recognition rate of automatic door |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101504761A (en) * | 2009-01-21 | 2009-08-12 | 北京中星微电子有限公司 | Image splicing method and apparatus |
US8229178B2 (en) * | 2008-08-19 | 2012-07-24 | The Hong Kong Polytechnic University | Method and apparatus for personal identification using palmprint and palm vein |
CN102663361A (en) * | 2012-04-01 | 2012-09-12 | 北京工业大学 | Face image reversible geometric normalization method facing overall characteristics analysis |
CN102779274A (en) * | 2012-07-19 | 2012-11-14 | 冠捷显示科技(厦门)有限公司 | Intelligent television face recognition method based on binocular camera |
CN103235942A (en) * | 2013-05-14 | 2013-08-07 | 苏州福丰科技有限公司 | Facial recognition method applied to entrance guard |
CN105894287A (en) * | 2016-04-01 | 2016-08-24 | 王涛 | Face payment platform based on iris-assisted identity authentication |
CN105956518A (en) * | 2016-04-21 | 2016-09-21 | 腾讯科技(深圳)有限公司 | Face identification method, device and system |
Non-Patent Citations (1)
Title |
---|
Wang Jinnian et al.: "Beijing-1 Small Satellite Data Processing Technology and Applications", 31 October 2010, Wuhan: Wuhan University Press *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Steiner et al. | Reliable face anti-spoofing using multispectral swir imaging | |
Jee et al. | Liveness detection for embedded face recognition system | |
US6920236B2 (en) | Dual band biometric identification system | |
Zhang et al. | Face liveness detection by learning multispectral reflectance distributions | |
US8761458B2 (en) | System for iris detection, tracking and recognition at a distance | |
CN109359634B (en) | Face living body detection method based on binocular camera | |
US20060110014A1 (en) | Expression invariant face recognition | |
Kashem et al. | Face recognition system based on principal component analysis (PCA) with back propagation neural networks (BPNN) | |
US20080212849A1 (en) | Method and Apparatus For Facial Image Acquisition and Recognition | |
US20060222212A1 (en) | One-dimensional iris signature generation system and method | |
JP2004118627A (en) | Figure identification device and method | |
JP2001331799A (en) | Image processor and image processing method | |
JP2000259814A (en) | Image processor and method therefor | |
CN104680128B (en) | Biological feature recognition method and system based on four-dimensional analysis | |
US9449217B1 (en) | Image authentication | |
Prokoski et al. | Infrared identification of faces and body parts | |
TW200905577A (en) | Iris recognition system | |
WO2013075497A1 (en) | Information acquisition device and method, and identification system and method | |
JP2004030564A (en) | Personal identification method, personal identification apparatus, and photographing device | |
Arandjelovic et al. | On person authentication by fusing visual and thermal face biometrics | |
CN205644823U (en) | Social security self -service terminal device | |
CN109543521A (en) | The In vivo detection and face identification method that main side view combines | |
Syambas et al. | Image processing and face detection analysis on face verification based on the age stages | |
US10621419B2 (en) | Method and system for increasing biometric acceptance rates and reducing false accept rates and false rates | |
Mary et al. | Human identification using periocular biometrics |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20170531 |