CN106778577A - Water purifier user's personal identification method - Google Patents
- Publication number
- CN106778577A (application CN201611112299.5A)
- Authority
- CN
- China
- Prior art keywords
- point
- image
- user
- feature
- characteristic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a water purifier user identification method. The system comprises a controller, a memory, an infrared temperature sensor, a first camera, and a second camera for infrared thermal imaging, all mounted on the water purifier; the controller is electrically connected to the infrared temperature sensor, the first camera, the second camera, the memory, and a server. When a user approaches the water purifier, the controller obtains the human-body signal detected by the infrared temperature sensor; the first camera and the second camera capture images of the user; the controller then identifies and matches key points and feature points, and finally identifies the user. The invention features a high recognition rate, wide applicability, low cost, and improved management convenience.
Description
Technical field
The present invention relates to the field of intelligent identification technology, and in particular to a water purifier user identification method with a high recognition rate, wide applicability, and low cost.
Background technology
Intelligent identification technology analyses digitized video images in real time using the processing power of a computer. Using computer imaging techniques and intelligent algorithms, it can automatically detect security incidents or potential threat events within a camera's field of view, automatically separate the background of the video from the foreground, segment out moving objects, and then, according to a configured analysis logic, monitor and track the moving objects in real time, analyse their behaviour, and issue early-warning alarms in the fastest and most suitable manner.
Current intelligent identification methods mainly include speech recognition, fingerprint recognition, face recognition, iris recognition, and infrared thermal-imaging recognition.
Existing intelligent identification technologies are widely used across industries, but have the following disadvantages:
Face recognition technology is widely deployed, but its recognition rate in everyday use is not high;
Fingerprint identification technology is mature, but is not applicable to everyone;
Iris recognition, the "favourite" of biometrics, ranks first in security, but its very high cost prevents wide adoption;
Voiceprint recognition is low in cost and convenient to acquire, but its usage requirements are strict and its application scenarios are limited.
Summary of the invention
The object of the invention is to overcome the deficiencies of prior-art intelligent identification methods, namely their narrow scope of application, low recognition rate, and high cost, by providing a water purifier user identification method with a high recognition rate, wide applicability, and low cost.
To achieve the above objects, the present invention adopts the following technical scheme:
A water purifier user identification method, comprising a controller, a memory, an infrared temperature sensor, a first camera, and a second camera for infrared thermal imaging, all mounted on the water purifier; the controller is electrically connected to the infrared temperature sensor, the first camera, the second camera, the memory, and a server; the method comprises the following steps:
(1-1) When a user approaches the water purifier, the controller obtains the human-body signal detected by the infrared temperature sensor;
(1-2) The controller starts the first camera and the second camera, and the two cameras capture images of the user;
(1-3) The memory stores a database containing the feature point sets and key point sets of all users. The controller obtains each feature point of the user from the image captured by the first camera, compares each feature point with the feature point sets of all users in the database, and selects the correctly matched feature points;
The controller obtains each key point of the user from the image captured by the second camera, compares each key point with the key point sets of all users in the database, and selects the correctly matched key points;
(1-4) Calculate the feature point recognition rate γ1 = n1 / (N1 × K1), where n1 is the accumulated number of correctly matched features, N1 is the total number of feature points in the feature point set, and K1 is the number of features per feature point;
Calculate the key point recognition rate γ2 = n2 / (N2 × K2), where n2 is the accumulated number of correctly matched features, N2 is the total number of key points in the key point set, and K2 is the number of features per key point;
(1-5) The controller calculates the comprehensive recognition rate γ = k1γ1 + k2γ2;
When γ ≥ W, the controller finds the name of the user corresponding to γ in the database and passes the user's name to the server; the server stores the current time, the comprehensive recognition rate γ, and the user's name. Here k1 and k2 are set weight coefficients, and W is the set standard recognition rate.
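The recognition-rate fusion of steps (1-4) and (1-5) can be sketched as follows. This is a minimal illustration, not the patented implementation: the function names are mine, and the default constants (W = 90%, k1 = 0.6, k2 = 0.4, three features per point) are the values given later in the embodiment.

```python
def feature_rate(n_matched: int, n_points: int, k_features: int = 3) -> float:
    """gamma = n / (N * K): matched feature count over total feature count."""
    return n_matched / (n_points * k_features)

def comprehensive_rate(g1: float, g2: float, k1: float = 0.6, k2: float = 0.4) -> float:
    """Weighted fusion gamma = k1 * gamma1 + k2 * gamma2."""
    return k1 * g1 + k2 * g2

def identify(gamma: float, w: float = 0.90) -> bool:
    """The user is recognized when gamma >= W."""
    return gamma >= w

# Example with the embodiment's point counts: 30 feature points, 86 key points.
g1 = feature_rate(n_matched=85, n_points=30)   # 85 of 90 features matched
g2 = feature_rate(n_matched=245, n_points=86)  # 245 of 258 features matched
g = comprehensive_rate(g1, g2)
```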
The present invention realizes user identification on an intelligent water purifier: when a user approaches the water purifier, the human-body signal detected by the infrared temperature sensor is obtained; the first and second cameras capture images of the user; the memory stores a database containing the feature point sets and key point sets of all registered users; the controller identifies and matches the key points and feature points, and finally identifies the user.
The present invention combines the identification system with a water purifier. A month-long market survey found that, within the average daily working time of business people such as employees and white-collar workers, a person dwells at the doorway for 2 minutes about 6 times, in front of the time-clock for 2 minutes about 2 times, beside the water purifier for 16 minutes about 8 times, and at the workstation for 6 hours about 10 times. The survey data show that, apart from the daily time spent at the workstation, the number and duration of stops in front of the water purifier account for a high proportion. By combining the intelligent identification system with the water purifier, the present invention effectively increases the recognition rate and reduces manual maintenance costs;
The present invention improves the convenience of management work, reduces manual maintenance costs, and improves the user experience. It has a high recognition rate, reaching 99 percent recognition accuracy within its scope of application. It requires no deliberate identification operation such as pressing a button: identification happens automatically whenever the user fetches water in the morning, making recognition more convenient. Compared with iris recognition, which also has a high recognition rate, the water purifier intelligent identification system has the advantages of low cost and a wide range of applications, and is more convenient for business settings such as companies.
Preferably, the region containing the key points is the user's face, bounded above by the hairline, below by the lowest point of the chin, and on the left and right by the ear edge points. It comprises 7 regions: the forehead region, left eye region, right eye region, nose region, left cheek region, right cheek region, and nose-chin region. The key points in the left and right eye regions are chosen symmetrically, as are those in the left and right cheek regions.
Preferably, each feature point is located in the facial triangle region, and there are 30 feature points.
Preferably, obtaining each feature point of the user from the image captured by the first camera, comparing each feature point with the feature point sets of all users in the database, and selecting the correctly matched feature points comprises the following steps:
(4-1) For the image I(x, y) captured by the first camera, use the formulas G(i) = |[f(i-1, j-1) + f(i-1, j) + f(i-1, j+1)] - [f(i+1, j-1) + f(i+1, j) + f(i+1, j+1)]| and G(j) = |[f(i-1, j+1) + f(i, j+1) + f(i+1, j+1)] - [f(i-1, j-1) + f(i, j-1) + f(i+1, j-1)]| to calculate the neighbourhood convolutions G(i) and G(j) of each pixel (i, j) in I(x, y);
Set P(i, j) = max[G(i), G(j)], and select P(i, j) as the image edge point;
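The neighbourhood convolution of step (4-1) is a Prewitt-style gradient: the sum of the row above minus the row below, and the column to the right minus the column to the left. A minimal sketch, assuming a greyscale image in a NumPy array (the function name is illustrative):

```python
import numpy as np

def edge_response(img: np.ndarray, i: int, j: int) -> float:
    """P(i,j) = max(G(i), G(j)) using the 3x3 neighbourhood sums of step (4-1)."""
    f = img.astype(float)
    # G(i): |row above - row below|
    gi = abs((f[i-1, j-1] + f[i-1, j] + f[i-1, j+1])
             - (f[i+1, j-1] + f[i+1, j] + f[i+1, j+1]))
    # G(j): |column right - column left|
    gj = abs((f[i-1, j+1] + f[i, j+1] + f[i+1, j+1])
             - (f[i-1, j-1] + f[i, j-1] + f[i+1, j-1]))
    return max(gi, gj)
```

A vertical intensity step produces a large G(j) and hence a large edge response at pixels adjacent to the step.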
(4-2) For the image I(x, y) captured by the first camera, build the scale-space image L(x, y, σ) = g(x, y, σ) × I(x, y), where g(x, y, σ) is the scale-variable Gaussian function g(x, y, σ) = (1 / (2πσ²)) exp(-(x² + y²) / (2σ²)), (x, y) are the spatial coordinates, and σ is the image smoothness (scale) parameter;
(4-3) Calculate the difference-of-Gaussian scale space D(x, y, σ) = (g(x, y, kσ) - g(x, y, σ)) × I(x, y) = L(x, y, kσ) - L(x, y, σ), where k is the constant multiple between adjacent scales;
For each pixel in the image I(x, y), build s sub-octave images in turn, each with half the length and width of the previous one, the first sub-octave image being the original image;
(4-4) Compare the D(x, y, σ) of each pixel with the D(x, y, σ) of its adjacent pixels; if the pixel's D(x, y, σ) is the maximum or minimum within its neighbourhood in its own layer and in the layers above and below, take that pixel as a feature point;
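The extremum test of step (4-4) amounts to comparing a pixel's DoG value against its 26 neighbours in a 3×3×3 cube spanning its own layer and the layers above and below. A hypothetical helper, assuming the DoG stack is a 3-D NumPy array indexed (layer, row, column):

```python
import numpy as np

def is_extremum(dog: np.ndarray, s: int, i: int, j: int) -> bool:
    """True if D at (layer s, row i, col j) is the maximum or minimum of its
    3x3x3 neighbourhood across layers s-1, s, s+1, as in step (4-4)."""
    cube = dog[s-1:s+2, i-1:i+2, j-1:j+2]
    v = dog[s, i, j]
    return v == cube.max() or v == cube.min()
```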
(4-5) Obtain the DoG image composed of the selected feature points and apply low-pass filtering to it; remove every point in the DoG image other than the edge points to obtain a two-dimensional point map;
(4-6) Calculate the modulus m(x, y) and angle θ(x, y) of each feature point using m(x, y) = √((L(x+1, y) - L(x-1, y))² + (L(x, y+1) - L(x, y-1))²) and θ(x, y) = arctan((L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y))), where L(x+1, y) is the scale-space value at (x+1, y) at the feature point's scale. Set the scale of each feature point to the layer number of the sub-octave image where it lies. The modulus, angle, and scale of each feature point are its feature 1, feature 2, and feature 3;
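Step (4-6) computes the standard gradient modulus and orientation from scale-space values. A small sketch (using the quadrant-aware atan2 rather than a plain arctan, a common implementation choice):

```python
import math

def magnitude_orientation(L, x, y):
    """m(x,y) and theta(x,y) from step (4-6); L is a 2-D array of scale-space
    values at the feature point's scale, indexed L[x][y]."""
    dx = L[x+1][y] - L[x-1][y]   # central difference along x
    dy = L[x][y+1] - L[x][y-1]   # central difference along y
    m = math.hypot(dx, dy)       # sqrt(dx**2 + dy**2)
    theta = math.atan2(dy, dx)   # arctan(dy / dx), quadrant-aware
    return m, theta
```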
(4-7) Compare the 3 features of each feature point A1 with the 3 features of every feature point of every feature point set in the database, and find in the feature point sets the feature point B1 closest to A1 and the second-closest feature point C1;
Let the difference between feature 1 of A1 and B1 be a11, and between feature 1 of A1 and C1 be b11;
Let the difference between feature 2 of A1 and B1 be a12, and between feature 2 of A1 and C1 be b12;
Let the difference between feature 3 of A1 and B1 be a13, and between feature 3 of A1 and C1 be b13;
When a11/b11 ≤ ratio and a12/b12 ≤ ratio and a13/b13 ≤ ratio, where ratio is the set ratio threshold, select feature point B1 as a correct match point.
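The acceptance condition of step (4-7) resembles Lowe's ratio test applied per feature: the closest candidate B1 is accepted only when it is markedly closer than the second-closest C1 on every feature. A hedged sketch with illustrative names, using the 0.4 threshold given later in the text:

```python
def ratio_test(a_feats, b_feats, c_feats, ratio=0.4):
    """Accept B1 as the correct match for A1 only if, for each of the three
    features, |A1 - B1| / |A1 - C1| <= ratio, where B1 is the closest
    candidate and C1 the second closest (step (4-7))."""
    for a, b, c in zip(a_feats, b_feats, c_feats):
        num = abs(a - b)   # a1k: distance to the closest candidate
        den = abs(a - c)   # b1k: distance to the second-closest candidate
        if den == 0 or num / den > ratio:
            return False
    return True
```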
Preferably, obtaining each key point of the user from the image captured by the second camera, comparing each key point with the key point sets of all users in the database, and selecting the correctly matched key points comprises the following steps:
(5-1) Let f(i, j) be the grey value of point (i, j) in the image captured by the second camera. Take an N′ × N′ window in the image centred on (i, j), let A′ be the set of pixels in the window, and filter the image using g(i, j) = Med{f(s, t) : (s, t) ∈ A′} to obtain the denoised image g(i, j);
(5-2) Slide the N′ × N′ window over the image, arrange the grey values of all pixels in the window in ascending order, and take the middle grey value as the grey value of the window's centre pixel;
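The median filtering of steps (5-1) and (5-2) can be sketched as follows. This is a naive illustration that leaves border pixels untouched, since the text does not specify border handling:

```python
import numpy as np

def median_filter(f: np.ndarray, n: int = 3) -> np.ndarray:
    """Slide an n x n window over the image and replace each interior centre
    pixel with the median of the grey values in the window (steps (5-1)/(5-2))."""
    g = f.copy()
    r = n // 2
    for i in range(r, f.shape[0] - r):
        for j in range(r, f.shape[1] - r):
            window = f[i-r:i+r+1, j-r:j+r+1]
            g[i, j] = np.median(window)  # middle value of the sorted window
    return g
```

Sorting the window and taking the middle value, as step (5-2) describes, is exactly the median; isolated noise spikes are removed while edges are preserved better than with mean filtering.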
(5-3) Apply edge detection to the image f(x, y) to obtain the edge points h(x, y);
(5-4) For the image f(x, y) captured by the second camera, build the scale-space image L′(x, y, σ) = g(x, y, σ) × f(x, y), where g(x, y, σ) is the scale-variable Gaussian function g(x, y, σ) = (1 / (2πσ²)) exp(-(x² + y²) / (2σ²)), (x, y) are the spatial coordinates, and σ is the image smoothness (scale) parameter;
(5-5) Calculate the difference-of-Gaussian scale space D′(x, y, σ) = (g(x, y, kσ) - g(x, y, σ)) × f(x, y) = L′(x, y, kσ) - L′(x, y, σ);
For each pixel in the image f(x, y), build s sub-octave images in turn, each with half the length and width of the previous one, the first sub-octave image being the original image;
(5-6) Compare the D′(x, y, σ) of each pixel with the D′(x, y, σ) of its adjacent pixels; if the pixel's D′(x, y, σ) is the maximum or minimum within its neighbourhood in its own layer and in the layers above and below, take that pixel as a key point;
(5-7) Obtain the DoG image composed of the selected key points and apply low-pass filtering to it; remove every point in the DoG image other than the edge points to obtain a two-dimensional point map;
(5-8) Calculate the modulus m(x, y) and angle θ(x, y) of each key point using m(x, y) = √((L(x+1, y) - L(x-1, y))² + (L(x, y+1) - L(x, y-1))²) and θ(x, y) = arctan2((L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y))), where L(x+1, y) is the scale-space value at (x+1, y) at the key point's scale. The modulus, angle, and scale of each key point are its feature 1, feature 2, and feature 3;
(5-9) Compare the 3 features of each key point A2 with the 3 features of every key point of every key point set in the database, and find in the key point sets the key point B2 closest to A2 and the second-closest key point C2;
Let the difference between feature 1 of A2 and B2 be a21, and between feature 1 of A2 and C2 be b21;
Let the difference between feature 2 of A2 and B2 be a22, and between feature 2 of A2 and C2 be b22;
Let the difference between feature 3 of A2 and B2 be a23, and between feature 3 of A2 and C2 be b23;
When a21/b21 ≤ ratio and a22/b22 ≤ ratio and a23/b23 ≤ ratio, where ratio is the set ratio threshold, select key point B2 as a correct match point.
Preferably, the method further comprises the following step: if, within a period of time, the comprehensive recognition rate γ of a user declines 30 consecutive times, replace the corresponding feature points and key points in the database with the feature points and key points that failed all comparisons in the last recognition.
Preferably, ratio is 0.4 to 0.5.
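The template-update rule above can be sketched as a decline-streak counter that signals when the stored templates should be refreshed. The class and its names are hypothetical, not from the source:

```python
class TemplateUpdater:
    """Track consecutive declines of gamma; signal a template replacement
    after `limit` declines in a row (30 in the preferred embodiment)."""

    def __init__(self, limit: int = 30):
        self.limit = limit
        self.decline_streak = 0
        self.prev_gamma = None

    def observe(self, gamma: float) -> bool:
        """Record one recognition; return True when templates should be replaced."""
        if self.prev_gamma is not None and gamma < self.prev_gamma:
            self.decline_streak += 1
        else:
            self.decline_streak = 0  # any non-decline resets the streak
        self.prev_gamma = gamma
        if self.decline_streak >= self.limit:
            self.decline_streak = 0
            return True
        return False
```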
Therefore, the present invention has the following beneficial effects: a high recognition rate, wide applicability, and low cost, and it improves the convenience of management.
Brief description of the drawings
Fig. 1 is a flow chart of the invention;
Fig. 2 is a key point set diagram of the invention;
Fig. 3 is a feature point set diagram of the invention.
Specific embodiment
The present invention is further described below with reference to the accompanying drawings and specific embodiments.
The embodiment shown in Fig. 1 is a water purifier user identification method, comprising a controller, a memory, an infrared temperature sensor, a first camera, and a second camera for infrared thermal imaging, all mounted on the water purifier; the controller is electrically connected to the infrared temperature sensor, the first camera, the second camera, the memory, and a server; the method comprises the following steps:
Step 100, human detection
When a user approaches the water purifier, the controller obtains the human-body signal detected by the infrared temperature sensor;
Step 200, image acquisition
The controller starts the first camera and the second camera, and the two cameras capture images of the user;
Step 300, identification and matching of feature points and key points
The controller obtains each feature point of the user from the image captured by the first camera, compares each feature point with the feature point sets of all users in the database, and selects the correctly matched feature points;
The specific steps are as follows:
Step 310, the controller obtains each feature point of the user from the image captured by the first camera;
Step 311, for the image I(x, y) captured by the first camera, use the formulas G(i) = |[f(i-1, j-1) + f(i-1, j) + f(i-1, j+1)] - [f(i+1, j-1) + f(i+1, j) + f(i+1, j+1)]| and G(j) = |[f(i-1, j+1) + f(i, j+1) + f(i+1, j+1)] - [f(i-1, j-1) + f(i, j-1) + f(i+1, j-1)]| to calculate the neighbourhood convolutions G(i) and G(j) of each pixel (i, j) in I(x, y);
Set P(i, j) = max[G(i), G(j)], and select P(i, j) as the image edge point;
Step 312, for the image I(x, y) captured by the first camera, build the scale-space image L(x, y, σ) = g(x, y, σ) × I(x, y), where g(x, y, σ) is the scale-variable Gaussian function g(x, y, σ) = (1 / (2πσ²)) exp(-(x² + y²) / (2σ²)), (x, y) are the spatial coordinates, and σ is the image smoothness (scale) parameter;
Step 313, calculate the difference-of-Gaussian scale space D(x, y, σ) = (g(x, y, kσ) - g(x, y, σ)) × I(x, y) = L(x, y, kσ) - L(x, y, σ), where k is the constant multiple between adjacent scales;
For each pixel in the image I(x, y), build s sub-octave images in turn, each with half the length and width of the previous one, the first sub-octave image being the original image;
Step 314, compare the D(x, y, σ) of each pixel with the D(x, y, σ) of its adjacent pixels; if the pixel's D(x, y, σ) is the maximum or minimum within its neighbourhood in its own layer and in the layers above and below, take that pixel as a feature point;
Step 315, obtain the DoG image composed of the selected feature points and apply low-pass filtering to it; remove every point in the DoG image other than the edge points to obtain a two-dimensional point map;
Step 316, calculate the modulus m(x, y) and angle θ(x, y) of each feature point using m(x, y) = √((L(x+1, y) - L(x-1, y))² + (L(x, y+1) - L(x, y-1))²) and θ(x, y) = arctan((L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y))), where L(x+1, y) is the scale-space value at (x+1, y) at the feature point's scale; set the scale of each feature point to the layer number of the sub-octave image where it lies; the modulus, angle, and scale of each feature point are its feature 1, feature 2, and feature 3;
Step 317, compare the 3 features of each feature point A1 with the 3 features of every feature point of every feature point set in the database, and find in the feature point sets the feature point B1 closest to A1 and the second-closest feature point C1;
Let the difference between feature 1 of A1 and B1 be a11, and between feature 1 of A1 and C1 be b11;
Let the difference between feature 2 of A1 and B1 be a12, and between feature 2 of A1 and C1 be b12;
Let the difference between feature 3 of A1 and B1 be a13, and between feature 3 of A1 and C1 be b13;
When a11/b11 ≤ ratio and a12/b12 ≤ ratio and a13/b13 ≤ ratio, where ratio is the set ratio threshold, select feature point B1 as a correct match point;
Step 320, key point identification and matching
The controller obtains each key point of the user from the image captured by the second camera, compares each key point with the key point sets of all users in the database, and selects the correctly matched key points;
The specific steps are as follows:
Step 321, let f(i, j) be the grey value of point (i, j) in the image captured by the second camera; take an N′ × N′ window in the image centred on (i, j), let A′ be the set of pixels in the window, and filter the image using g(i, j) = Med{f(s, t) : (s, t) ∈ A′} to obtain the denoised image g(i, j);
Step 322, slide the N′ × N′ window over the image, arrange the grey values of all pixels in the window in ascending order, and take the middle grey value as the grey value of the window's centre pixel;
Step 323, apply edge detection to the image f(x, y) to obtain the edge points h(x, y);
Step 324, for the image f(x, y) captured by the second camera, build the scale-space image L′(x, y, σ) = g(x, y, σ) × f(x, y), where g(x, y, σ) is the scale-variable Gaussian function g(x, y, σ) = (1 / (2πσ²)) exp(-(x² + y²) / (2σ²)), (x, y) are the spatial coordinates, and σ is the image smoothness (scale) parameter;
Step 325, calculate the difference-of-Gaussian scale space D′(x, y, σ) = (g(x, y, kσ) - g(x, y, σ)) × f(x, y) = L′(x, y, kσ) - L′(x, y, σ);
For each pixel in the image f(x, y), build s sub-octave images in turn, each with half the length and width of the previous one, the first sub-octave image being the original image;
Step 326, compare the D′(x, y, σ) of each pixel with the D′(x, y, σ) of its adjacent pixels; if the pixel's D′(x, y, σ) is the maximum or minimum within its neighbourhood in its own layer and in the layers above and below, take that pixel as a key point;
Step 327, obtain the DoG image composed of the selected key points and apply low-pass filtering to it; remove every point in the DoG image other than the edge points to obtain a two-dimensional point map;
Step 328, calculate the modulus m(x, y) and angle θ(x, y) of each key point using m(x, y) = √((L(x+1, y) - L(x-1, y))² + (L(x, y+1) - L(x, y-1))²) and θ(x, y) = arctan2((L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y))), where L(x+1, y) is the scale-space value at (x+1, y) at the key point's scale; the modulus, angle, and scale of each key point are its feature 1, feature 2, and feature 3;
Step 329, compare the 3 features of each key point A2 with the 3 features of every key point of every key point set in the database, and find in the key point sets the key point B2 closest to A2 and the second-closest key point C2;
Let the difference between feature 1 of A2 and B2 be a21, and between feature 1 of A2 and C2 be b21;
Let the difference between feature 2 of A2 and B2 be a22, and between feature 2 of A2 and C2 be b22;
Let the difference between feature 3 of A2 and B2 be a23, and between feature 3 of A2 and C2 be b23;
When a21/b21 ≤ ratio and a22/b22 ≤ ratio and a23/b23 ≤ ratio, where ratio is the set ratio threshold, select key point B2 as a correct match point.
Step 400, calculate the comprehensive recognition rate
Calculate the feature point recognition rate γ1 = n1 / (N1 × K1), where n1 is the accumulated number of correctly matched features, N1 is the total number of feature points in the feature point set, and K1 is 3;
Calculate the key point recognition rate γ2 = n2 / (N2 × K2), where n2 is the accumulated number of correctly matched features, N2 is the total number of key points in the key point set, and K2 is 3;
The controller calculates the comprehensive recognition rate γ = k1γ1 + k2γ2;
Step 500, identify the user
When γ ≥ W, the controller finds the name of the user corresponding to γ in the database and passes the user's name to the server; the server stores the current time, the comprehensive recognition rate γ, and the user's name. Here k1 and k2 are the set weight coefficients and W is the set standard recognition rate; W = 90%, k1 = 0.6, k2 = 0.4.
As shown in Fig. 2, the region containing the face key points is bounded above by the hairline, below by the lowest point of the chin, and on the left and right by the ear edge points; these four directions form the maximal face frame, within which the data are sampled. Outside the feature point region, 86 key points are taken: the forehead part takes 22 key points (5 points on the upper hairline, 5 points on the upper eyebrow edge, 5 points each on the left and right hairlines, and 2 points in the middle of the forehead referenced to the cross); the left cheek part takes 26 key points (16 points on the ear boundary line, 8 points on the side face contour, and 2 points in the middle of the cheek below the canthus); the right cheek part takes 26 key points (16 points on the ear boundary line, 8 points on the side face contour, and 2 points in the middle of the cheek below the canthus); the chin part takes 12 key points (6 points on the chin border, 2 points on the lip border, and the chengjiang point together with 3 surrounding points).
As shown in Fig. 3, the face feature points are located in the facial triangle region formed by the two eyebrow midpoints and the chengjiang point, and in the region from the two shoulder borders to the face border: 16 eye feature points, 4 mouth feature points, 4 nose feature points, 4 forehead feature points, and 2 face-shoulder points, 30 points in total.
If, within a period of seven days, the comprehensive recognition rate γ of a user declines 30 consecutive times, the feature points and key points that failed all comparisons in the last recognition replace the corresponding feature points and key points in the database. ratio is 0.4.
It should be understood that this embodiment is only illustrative of the invention and is not intended to limit its scope. In addition, it should be understood that, after reading the teachings of the present invention, those skilled in the art may make various changes or modifications to the invention, and such equivalent forms likewise fall within the scope defined by the claims appended to this application.
Claims (7)
1. A water purifier user identification method, characterized in that it comprises a controller, a memory, an infrared temperature sensor, a first camera, and a second camera for infrared thermal imaging, all mounted on the water purifier; the controller is electrically connected to the infrared temperature sensor, the first camera, the second camera, the memory, and a server; the method comprises the following steps:
(1-1) when a user approaches the water purifier, the controller obtains the human-body signal detected by the infrared temperature sensor;
(1-2) the controller starts the first camera and the second camera, and the two cameras capture images of the user;
(1-3) the memory stores a database containing the feature point sets and key point sets of all users; the controller obtains each feature point of the user from the image captured by the first camera, compares each feature point with the feature point sets of all users in the database, and selects the correctly matched feature points;
the controller obtains each key point of the user from the image captured by the second camera, compares each key point with the key point sets of all users in the database, and selects the correctly matched key points;
(1-4) calculate the feature point recognition rate γ1 = n1 / (N1 × K1), where n1 is the accumulated number of correctly matched features, N1 is the total number of feature points in the feature point set, and K1 is the number of features per feature point;
calculate the key point recognition rate γ2 = n2 / (N2 × K2), where n2 is the accumulated number of correctly matched features, N2 is the total number of key points in the key point set, and K2 is the number of features per key point;
(1-5) the controller calculates the comprehensive recognition rate γ = k1γ1 + k2γ2;
when γ ≥ W, the controller finds the name of the user corresponding to γ in the database and passes the user's name to the server; the server stores the current time, the comprehensive recognition rate γ, and the user's name; k1 and k2 are set weight coefficients, and W is the set standard recognition rate.
2. The water purifier user identification method according to claim 1, characterized in that the region containing the key points is the user's face, bounded above by the hairline, below by the lowest point of the chin, and on the left and right by the ear edge points; it comprises 7 regions: the forehead region, left eye region, right eye region, nose region, left cheek region, right cheek region, and nose-chin region; the key points in the left and right eye regions are chosen symmetrically, as are those in the left and right cheek regions.
3. The water purifier user identification method according to claim 1, characterized in that each feature point is located in the facial triangle region, and there are 30 feature points.
4. water purifier user personal identification method according to claim 1, it is characterized in that, the controller is taken the photograph from first
Each characteristic point of user is obtained in the image that camera shoots, all in each characteristic point of user and database are made
The set of characteristic points of user is compared, and the characteristic point of selected correct matching comprises the following steps:
(4-1) for image I (x, y) that the first video camera shoots, using formula G (i)=| [f (i-1, j-1)+f (i-1, j)+f
(i-1, j+1)]-[f (i+1, j-1)+f (i+1, j)+f (i+1, j+1)] | and
G (j)=| [f (i-1, j+1)+f (i, j+1)+f (i+1, j+1)]-[f (i-1, j-1)+f (i, j-1)+f (i+1, j-1)] |
Calculate in image I (x, y) each pixel (I, neighborhood convolution G (i) j), G (j),
Setting P (i, j)=max [G (i), G (j)], it is image border point to select P (i, j);
(4-2) for image I (x, y) that the first video camera shoots, using formula L (x, y, σ)=g (x, y, σ) × I (x, y) structure
Scale space images L (x, y, σ) is built, g (x, y, σ) is yardstick Gauss variable function,
(x, y) is space coordinates, and σ is Image Smoothness;
(4-3) calculate the difference-of-Gaussian scale space D(x, y, σ) using the formula
D(x, y, σ) = (g(x, y, kσ) - g(x, y, σ)) × I(x, y) = L(x, y, kσ) - L(x, y, σ),
where k is the constant multiple between adjacent scale spaces;
for each pixel in the image I(x, y), successively build s layers of sub-octave images whose length and width are halved layer by layer, the first-layer sub-octave image being the original image;
(4-4) compare the D(x, y, σ) of each pixel with the D(x, y, σ) of its adjacent pixels; if the D(x, y, σ) of the pixel is the maximum or minimum value within its neighborhood in its own layer and in the layers above and below it, take the pixel as a feature point;
(4-5) obtain the DoG map composed of the selected feature points, and apply low-pass filtering to the DoG map; remove from the DoG map every point other than the edge points, obtaining a two-dimensional point map;
(4-6) calculate the modulus m(x, y) and the angle θ(x, y) of each feature point using the formulas
m(x, y) = sqrt((L(x+1, y) - L(x-1, y))² + (L(x, y+1) - L(x, y-1))²) and
θ(x, y) = arctan((L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y))),
where L(x+1, y) is the scale of the feature point at (x+1, y);
set the scale of each feature point to the layer number of the sub-octave image where it is located; take the modulus, angle and scale of each feature point as feature 1, feature 2 and feature 3 of that feature point;
(4-7) compare the 3 features of each feature point A1 with the 3 features of each feature point of all the feature point sets in the database, and find in the feature point sets the feature point B1 closest to A1 and the second-closest feature point C1;
let the difference of feature 1 between A1 and B1 be a11, and the difference of feature 1 between A1 and C1 be b11;
let the difference of feature 2 between A1 and B1 be a12, and the difference of feature 2 between A1 and C1 be b12;
let the difference of feature 3 between A1 and B1 be a13, and the difference of feature 3 between A1 and C1 be b13;
when a11/b11 < Ratio and a12/b12 < Ratio and a13/b13 < Ratio, where Ratio is the set ratio threshold,
select the feature point B1 as a correct match point.
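Step (4-7) is essentially a nearest/second-nearest ratio test. The sketch below is a minimal illustration, assuming each point is described by its 3 features (modulus, angle, scale) as a length-3 vector and taking Ratio = 0.45, within the 0.4 to 0.5 range of claim 7; the function name and the use of Euclidean distance to rank candidates are assumptions, not from the patent.

```python
import numpy as np

def ratio_match(query, database, ratio=0.45):
    """Find the closest (B1) and second-closest (C1) database points to the
    query point A1, then accept B1 only if, for every one of the 3 features,
    the difference to B1 is less than `ratio` times the difference to C1."""
    # Rank candidates by overall closeness of their 3-feature vectors.
    d = np.linalg.norm(database - query, axis=1)
    order = np.argsort(d)
    b1, c1 = database[order[0]], database[order[1]]
    a = np.abs(query - b1)   # per-feature differences a11, a12, a13
    b = np.abs(query - c1)   # per-feature differences b11, b12, b13
    # a/b < ratio for all three features (written multiplicatively
    # to avoid dividing by zero).
    if np.all(a < ratio * b):
        return int(order[0])  # index of the correctly matched point
    return None
```

When the closest and second-closest candidates are nearly equally distant, the test rejects the match, which is the ambiguity-filtering effect the ratio threshold is meant to provide.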
5. The water purifier user identification method according to claim 1, characterized in that the controller obtains each key point of the user from the image captured by the second camera, compares each key point of the user with the key point sets of all users in the database, and selects the correctly matched key points, comprising the following steps:
(5-1) let f(i, j) be the gray value of point (i, j) in the image captured by the second camera; take an N′ × N′ window centered on point (i, j) in the image, and let A′ be the set of pixels in the window; filter the image over the point set A′, obtaining the denoised image g(i, j);
(5-2) slide the N′ × N′ window over the image, arrange the gray values of all pixels in the window in ascending order, and take the gray value in the middle of the sequence as the gray value of the window's center pixel;
(5-3) perform edge detection on the image f(x, y), obtaining the edge points h(x, y);
(5-4) for the image f(x, y) captured by the second camera, construct the scale-space image L′(x, y, σ) using the formula L′(x, y, σ) = g(x, y, σ) × f(x, y), where g(x, y, σ) is the scale-variable Gaussian function g(x, y, σ) = (1/(2πσ²))·exp(-(x² + y²)/(2σ²)), (x, y) are the spatial coordinates, and σ is the image smoothness factor;
(5-5) calculate the difference-of-Gaussian scale space D′(x, y, σ) using the formula
D′(x, y, σ) = (g(x, y, kσ) - g(x, y, σ)) × f(x, y) = L′(x, y, kσ) - L′(x, y, σ);
for each pixel in the image f(x, y), successively build s layers of sub-octave images whose length and width are halved layer by layer, the first-layer sub-octave image being the original image;
(5-6) compare the D′(x, y, σ) of each pixel with the D′(x, y, σ) of its adjacent pixels; if the D′(x, y, σ) of the pixel is the maximum or minimum value within its neighborhood in its own layer and in the layers above and below it, take the pixel as a key point;
(5-7) obtain the DoG map composed of the selected key points, and apply low-pass filtering to the DoG map; remove from the DoG map every point other than the edge points, obtaining a two-dimensional point map;
(5-8) calculate the modulus m(x, y) and the angle θ(x, y) of each key point using the formulas
m(x, y) = sqrt((L′(x+1, y) - L′(x-1, y))² + (L′(x, y+1) - L′(x, y-1))²) and
θ(x, y) = arctan((L′(x, y+1) - L′(x, y-1)) / (L′(x+1, y) - L′(x-1, y))),
where L′(x+1, y) is the scale of the key point at (x+1, y); take the modulus, angle and scale of each key point as feature 1, feature 2 and feature 3 of that key point;
(5-9) compare the 3 features of each key point A2 with the 3 features of each key point of all the key point sets in the database, and find in the key point sets the key point B2 closest to A2 and the second-closest key point C2;
let the difference of feature 1 between A2 and B2 be a21, and the difference of feature 1 between A2 and C2 be b21;
let the difference of feature 2 between A2 and B2 be a22, and the difference of feature 2 between A2 and C2 be b22;
let the difference of feature 3 between A2 and B2 be a23, and the difference of feature 3 between A2 and C2 be b23;
when a21/b21 < Ratio and a22/b22 < Ratio and a23/b23 < Ratio, where Ratio is the set ratio threshold,
select the key point B2 as a correct match point.
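Steps (5-1) and (5-2) describe a sliding-window median filter. A minimal sketch, assuming a grayscale image held as a NumPy array; the function name and the choice to leave border pixels unchanged are illustrative assumptions:

```python
import numpy as np

def median_denoise(img, n=3):
    """Slide an n x n window over the image and replace each centre pixel
    with the median of the window: the gray values of the pixels A' in the
    window are sorted in ascending order and the middle value is taken."""
    out = img.copy()
    r = n // 2
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            window = img[i - r:i + r + 1, j - r:j + r + 1]
            out[i, j] = np.median(window)  # middle of the sorted sequence
    return out
```

A median filter of this kind removes isolated impulse ("salt and pepper") noise while preserving edges better than mean filtering, which is why it is a common denoising step before edge detection.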
6. The water purifier user identification method according to claim 1, 2, 3 or 4, characterized by further comprising the following step: if, within a period of time, the comprehensive recognition rate γ of a certain user decreases continuously 30 times, replace the corresponding feature points and key points in the database with all the feature points and key points of the last comparison that failed.
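The refresh condition of claim 6 can be sketched as a check over a history of recognition rates; the function and variable names below are illustrative assumptions, not terms from the patent:

```python
def should_refresh_template(gammas, window=30):
    """Return True when the comprehensive recognition rate gamma has
    strictly decreased over the last `window` consecutive comparisons,
    i.e. when the stored template should be replaced with the feature
    points and key points of the last failed comparison."""
    if len(gammas) < window + 1:
        return False  # not enough history to observe 30 decreases
    recent = gammas[-(window + 1):]
    # 30 decreases require 31 samples, each smaller than the previous one.
    return all(recent[k] > recent[k + 1] for k in range(window))
```

The intent of the rule is template ageing: a steadily falling recognition rate suggests the user's stored points no longer match their current appearance, so the database entry is refreshed.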
7. The water purifier user identification method according to claim 4 or 5, characterized in that the ratio threshold Ratio is 0.4 to 0.5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611112299.5A CN106778577A (en) | 2016-12-06 | 2016-12-06 | Water purifier user's personal identification method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106778577A true CN106778577A (en) | 2017-05-31 |
Family
ID=58874621
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611112299.5A Pending CN106778577A (en) | 2016-12-06 | 2016-12-06 | Water purifier user's personal identification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106778577A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112241700A (en) * | 2020-10-15 | 2021-01-19 | 希望银蕨智能科技有限公司 | Multi-target forehead temperature measurement method for forehead accurate positioning |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101504761A (en) * | 2009-01-21 | 2009-08-12 | 北京中星微电子有限公司 | Image splicing method and apparatus |
US8229178B2 (en) * | 2008-08-19 | 2012-07-24 | The Hong Kong Polytechnic University | Method and apparatus for personal identification using palmprint and palm vein |
CN102663361A (en) * | 2012-04-01 | 2012-09-12 | 北京工业大学 | Face image reversible geometric normalization method facing overall characteristics analysis |
CN102779274A (en) * | 2012-07-19 | 2012-11-14 | 冠捷显示科技(厦门)有限公司 | Intelligent television face recognition method based on binocular camera |
CN103235942A (en) * | 2013-05-14 | 2013-08-07 | 苏州福丰科技有限公司 | Facial recognition method applied to entrance guard |
CN105894287A (en) * | 2016-04-01 | 2016-08-24 | 王涛 | Face payment platform based on iris-assisted identity authentication |
CN105956518A (en) * | 2016-04-21 | 2016-09-21 | 腾讯科技(深圳)有限公司 | Face identification method, device and system |
Non-Patent Citations (1)
Title |
---|
Wang Jinnian et al.: "Beijing-1 Small Satellite Data Processing Technology and Applications", 31 October 2010, Wuhan: Wuhan University Press * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Jee et al. | Liveness detection for embedded face recognition system | |
KR101901591B1 (en) | Face recognition apparatus and control method for the same | |
CN104143086B (en) | Portrait compares the application process on mobile terminal operating system | |
CN108470169A (en) | Face identification system and method | |
US20060110014A1 (en) | Expression invariant face recognition | |
Kashem et al. | Face recognition system based on principal component analysis (PCA) with back propagation neural networks (BPNN) | |
US20060222212A1 (en) | One-dimensional iris signature generation system and method | |
WO2005057472A1 (en) | A face recognition method and system of getting face images | |
JP2001331799A (en) | Image processor and image processing method | |
JP2000259814A (en) | Image processor and method therefor | |
CN105975938A (en) | Smart community manager service system with dynamic face identification function | |
Prokoski et al. | Infrared identification of faces and body parts | |
CN103123690B (en) | Information acquisition device, information acquisition method, identification system and identification method | |
CN105335691A (en) | Smiling face identification and encouragement system | |
CN208351494U (en) | Face identification system | |
CN106485232A (en) | A kind of personal identification method based on nose image feature in respiratory | |
Arandjelovic et al. | On person authentication by fusing visual and thermal face biometrics | |
CN205644823U (en) | Social security self -service terminal device | |
Syambas et al. | Image processing and face detection analysis on face verification based on the age stages | |
US10621419B2 (en) | Method and system for increasing biometric acceptance rates and reducing false accept rates and false rates | |
Mary et al. | Human identification using periocular biometrics | |
CN110069962A (en) | A kind of biopsy method and system | |
CN106778577A (en) | Water purifier user's personal identification method | |
CN106778578A (en) | Water purifier method for identifying ID | |
Tzeng et al. | The design of isotherm face recognition technique based on nostril localization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20170531 |
|