CN107316023A - A kind of face identification system for being used to share equipment - Google Patents
- Publication number: CN107316023A
- Application number: CN201710501944.0A
- Authority
- CN
- China
- Prior art keywords
- user
- classifier
- sample
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q20/00—Payment architectures, schemes or protocols
- G06Q20/08—Payment architectures
- G06Q20/14—Payment architectures specially adapted for billing systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07F—COIN-FREED OR LIKE APPARATUS
- G07F17/00—Coin-freed apparatus for hiring articles; Coin-freed facilities or services
- G07F17/0042—Coin-freed apparatus for hiring articles; Coin-freed facilities or services for hiring of objects
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07F—COIN-FREED OR LIKE APPARATUS
- G07F17/00—Coin-freed apparatus for hiring articles; Coin-freed facilities or services
- G07F17/10—Coin-freed apparatus for hiring articles; Coin-freed facilities or services for means for safe-keeping of property, left temporarily, e.g. by fastening the property
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Business, Economics & Management (AREA)
- Health & Medical Sciences (AREA)
- Accounting & Taxation (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Multimedia (AREA)
- Finance (AREA)
- Human Computer Interaction (AREA)
- General Health & Medical Sciences (AREA)
- Computer Security & Cryptography (AREA)
- Development Economics (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Hardware Design (AREA)
- Strategic Management (AREA)
- Economics (AREA)
- General Business, Economics & Management (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a face recognition system for shared devices, belonging to the field of pattern recognition. The system acquires user images, assembles them into positive and negative samples to train a user identity detection module, identifies the user's identity with that module, and, once the identity is determined, extracts the user's settings stored in the vending system and executes them. Because the invention authenticates users by biometric feature recognition, a user shopping at a shared device can complete the purchase and pay directly on the device, without a mobile phone terminal. The system performs accurate identity authentication, reduces cost, and further improves the user experience.
Description
[ technical field ]
The invention relates to the field of pattern recognition, and in particular to a face recognition system for shared devices.
[ background of the invention ]
In recent years, with the rapid development of the computer industry, computer technology has penetrated daily life and begun to merge with our living environment, giving rise to the concept of shared devices. A shared-device system connects various devices in an environment using computer, communication, sensor, and home-appliance technologies and controls them from a single terminal, providing a very convenient living environment.
In a shared-device system, the usage behavior of users is typically collected and analyzed so that personalized services can be offered and the user experience enhanced; identifying the user therefore becomes particularly important. In the prior art, identity authentication can be performed by having the user carry an RFID card or another electronic device, but anyone who obtains the hardware can then pass authentication, the user must carry the device at all times, and the hardware raises the cost of the vending system, all of which degrade the user experience. The prior art can also identify users through pattern recognition of biometric features, such as face recognition and fingerprint recognition, but such systems require the user to perform a fixed operation or an explicit authentication step: the user must stand in front of the face recognition device, or place a finger on the fingerprint collector. These constraints limit the user's actions to a certain extent and again reduce the user experience.
[ summary of the invention ]
To solve the identity-authentication problem of existing shared devices, the invention provides a face recognition system for shared devices.
The system comprises a shared-device main body, a goods scanner, an identity authentication device, a gateway, a goods cabinet and a charging unit;
the goods scanner and the charging unit are both connected to a control unit. Goods are placed in the goods cabinet; the goods scanner scans them, acquires the goods information and transmits it to the control unit. After the user's identity is confirmed by the identity authentication device, the shared device is unlocked; after the user has taken goods and closed the cabinet door, the goods scanner rescans the goods, acquires the updated goods information and transmits it to the control unit. The charging unit then charges via the control unit and deducts the fee from the user's registered account.
Preferably, the identity authentication device includes a cloud server, an image acquisition unit, a detection unit and a result output unit. The cloud server contains a database for storing user training samples. The image acquisition unit is mounted on the shared-device terminal; the detection unit and the result output unit reside in the gateway. The detection unit holds a feature classifier trained on the training samples, and identifies the user images captured by the image acquisition unit to produce an identity authentication result. The result output unit passes this result to the gateway, which extracts the user's usage habits and distributes them to each shared-device terminal; each terminal then automatically performs the operations matching those habits or presents corresponding options for the user to select.
Preferably, the feature classifier is trained with an improved Boosting scheme, and its training in the detection unit comprises the following steps:
A1. The user training samples are pre-collected whole-body images of the user in continuous motion. They comprise N positive samples, in which the user is present, and L negative samples, in which the user is absent;
A2. Obtain the feature vector X of a training sample, X = (f_1(x), f_2(x), ..., f_K(x))^T, where f_k(x) denotes an image sample feature. Let y be the sample label, with y = 1 for a positive sample and y = 0 for a negative sample. The posterior probability that X is a positive sample is

P(y=1\mid X) = \frac{P(X\mid y=1)P(y=1)}{\sum_{y\in\{0,1\}} P(X\mid y)P(y)} = \delta\!\left(\ln\frac{P(X\mid y=1)P(y=1)}{P(X\mid y=0)P(y=0)}\right)   (1)

where the function \delta(z) is defined as

\delta(z) = \frac{1}{1+e^{-z}}   (2)

This establishes the classifier model

H_k(X) = \ln\frac{P(X\mid y=1)P(y=1)}{P(X\mid y=0)P(y=0)}   (3)

Combining formulas (1)-(3), the posterior probability can be expressed as

P(y=1\mid X) = \delta(H_k(X))   (4)

For the feature vector X, the classifier H_k(X) can be written as

H_k(X) = \ln\frac{P(X\mid y=1)P(y=1)}{P(X\mid y=0)P(y=0)} = \sum_{k=1}^{K} h_k(f_k(x))   (5)

where h_k(f_k(x)) denotes a weak classifier, so the strong classifier H_k(X) is composed of K weak classifiers.
The positive and negative samples are placed in two sets: the positive set \{X_{1j}\}, j = 0, ..., N-1, and the negative set \{X_{0j}\}, j = N, ..., N+L-1. Weak classifiers are selected repeatedly from the two sets over multiple groups of samples to build the combined classifier with the highest recognition rate. The posterior probability of a single sample is

P_{ij} = \delta(H_k(X_{ij}))   (6)

where i indexes the sample set (i = 1 for the positive set, i = 0 for the negative set) and j is the sample number. The conditional probabilities in the weak classifier h_k(f_k(x_{ij})) are assumed Gaussian:

p(f_k(x_{ij})\mid y=1) \sim N(\mu_1, \sigma_1)
p(f_k(x_{ij})\mid y=0) \sim N(\mu_0, \sigma_0)   (7)

where \mu_1, \sigma_1, \mu_0, \sigma_0 are updated incrementally:

\mu_1 \leftarrow \eta\mu_1 + (1-\eta)\frac{1}{N}\sum_{j\mid y_i=1} f_k(x_{ij})
\sigma_1 \leftarrow \eta\sigma_1 + (1-\eta)\sqrt{\frac{1}{N}\sum_{j\mid y_i=1}\left(f_k(x_{ij})-\mu_1\right)^2}   (8)

\mu_0 and \sigma_0 are updated in the same way. P_{ij} is obtained from formulas (7) and (8), and the posterior probability of sample set i can then be expressed as

P_i = 1 - \prod_j \left(1 - P_{ij}\right)
A3. A face region is manually marked in the first frame of the positive-sample images; a Kalman filter then tracks the face region through each subsequent frame, and the mark is manually corrected every 10 frames to reduce the filter's accumulated error;
A4. For each face region output by the Kalman filter, features are extracted and assembled into feature vectors, forming a feature-vector set; weights are then applied to improve the posterior probability of this set:

P_1 = \sum_j w_{j0} P_{1j}

where w_{j0} is a monotonically decreasing weight function, w_{j0} = \exp\!\left(-\left|d(X_{1j})-d(X_{10})\right|/c\right), in which \left|d(X_{1j})-d(X_{10})\right| denotes the Euclidean distance of sample x_{1j} from the manually marked region of the first frame and c is a constant;
A5. After the posterior probability of the sample set is determined, the weak classifier is selected; the selection criterion is

h_k = \arg\max_{h}\; l\!\left(H_{k-1} + h\right)   (10)

where H_{k-1} is the strong classifier already comprising k-1 weak classifiers, and l is the log-likelihood function over the sets, defined as

l = \sum_i \left( y_i \log P_i + (1-y_i)\log(1-P_i) \right)   (11)
Preferably, the extracted user image features are Haar-Like features.
Preferably, to identify a user image captured by the image acquisition unit, the detection unit generates a feature vector from the acquired image and feeds it to its feature classifier;
the identification process further performs nearest-neighbor matching, based on Euclidean distance, between each feature vector identified as the user and the feature vectors of the face regions output by the Kalman filter; any feature vector whose distance exceeds a preset threshold is added to the Kalman-filter feature-vector set.
The purpose is as follows: a feature vector that passes the feature classifier is accepted as the user and is then matched against the face-region feature vectors. If the Euclidean distance between the two samples is below the preset threshold, they belong to the same class, i.e. the same class as the samples already in the feature-vector set; if it exceeds the threshold, a new sample is considered to have appeared, the new sample is added to the feature-vector set, and the feature classifier is retrained.
The beneficial effects of the invention are as follows: the user identity authentication system of the vending system requires no extra hardware to be carried, and a user is identified merely by appearing in front of the system, without performing any specific identification step, which makes the system convenient to use and improves the user experience. In addition, the Kalman-filter output is used to weight the selection of the weak classifiers, improving classification accuracy.
[ description of the drawings ]
The accompanying drawings, which are included to provide a further understanding of the invention and constitute a part of this application, illustrate the invention and are not to be construed as limiting it:
fig. 1 is a general block diagram of a face recognition system of the present sharing device.
Fig. 2 is a flow chart of the system of the present invention.
[ detailed description of the embodiments ]
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or to elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative only, serve to explain the present invention, and are not to be construed as limiting it. On the contrary, the embodiments of the invention include all changes, modifications and equivalents coming within the spirit and scope of the appended claims.
In the description of the present invention, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. It is further to be noted that, unless otherwise explicitly specified or limited, the terms "connected" and "coupled" are to be interpreted broadly: a connection may be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediate element. The specific meanings of the above terms in the present invention can be understood by those skilled in the art on a case-by-case basis. In addition, in the description of the present invention, "a plurality" means two or more unless otherwise specified.
Any process or system description in a flow chart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. Alternate implementations are included within the scope of the preferred embodiments of the present invention, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art.
The system to which the invention applies comprises a shared-device main body, a goods scanner, an identity authentication device, a gateway, a goods cabinet and a charging unit;
the goods scanner and the charging unit are both connected to a control unit. Goods are placed in the goods cabinet; the goods scanner scans them, acquires the goods information and transmits it to the control unit. After the user's identity is confirmed by the identity authentication device, the shared device is unlocked; after the user has taken goods and closed the cabinet door, the goods scanner rescans the goods, acquires the updated goods information and transmits it to the control unit. The charging unit then charges via the control unit and deducts the fee from the user's registered account.
In the embodiment provided by the invention, an identity authentication device is arranged on the shared-device main body and controls the cabinet door and the charging unit of the shared device. The identity authentication device in this application is a face recognition device: it collects a consumer's face image and determines the consumer's identity. First, the consumer registers identity information, which the shared-device back end stores. When the consumer uses the shared device, the identity authentication device mounted on it collects the consumer's face image and matches it against the stored identity information. Once the user's identity is confirmed, the cabinet door of the shared device is unlocked. After the consumer has taken goods and closed the cabinet door, the goods scanner scans the goods, acquires the goods information and transmits it to the control unit; the charging unit then charges via the control unit and deducts the fee from the user's registered account.
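The scan–unlock–rescan–charge cycle described above can be sketched as a small inventory diff. This is a hedged illustration only: the function and variable names (`compute_fee`, `price_table`, the item names) are illustrative and do not appear in the patent.

```python
# Hedged sketch of the shared-cabinet transaction flow: the goods scanner
# reports inventory before the door opens and after it closes, and the
# charging unit bills the user for the difference.

def compute_fee(before, after, price_table):
    """Charge the user for items removed between the two scans."""
    fee = 0.0
    for item, qty in before.items():
        taken = qty - after.get(item, 0)
        if taken > 0:
            fee += taken * price_table[item]
    return fee

# Example: one bottle of water and two snacks were taken.
before = {"water": 10, "snack": 5}
after  = {"water": 9,  "snack": 3}
prices = {"water": 2.0, "snack": 3.5}
print(compute_fee(before, after, prices))  # 1*2.0 + 2*3.5 = 9.0
```

In the patent's flow this fee would then be deducted from the user's registered account by the charging unit.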
Through corresponding settings, the user's registered account can be bound to a payment method such as a bank card, Alipay or WeChat Pay, and payment is made after the consumer uses the shared device.
Referring to fig. 1, the identity authentication device is shown. It includes a cloud server, an image acquisition unit, a detection unit and a result output unit. The cloud server contains a database for storing user training samples. The image acquisition unit is mounted on the shared-device terminal; the detection unit and the result output unit reside in the gateway. The detection unit holds a feature classifier trained on the training samples, and identifies the user images captured by the image acquisition unit to produce an identity authentication result. The result output unit passes this result to the gateway, which extracts the user's usage habits and distributes them to each shared-device terminal; each terminal then automatically performs the operations matching those habits or presents corresponding options for the user to select.
Referring to fig. 2, a flow chart of the system of the invention is shown. The feature classifier is trained with an improved Boosting scheme, and its training in the detection unit comprises the following steps:
A1. The user training samples are pre-collected whole-body images of the user in continuous motion. They comprise N positive samples, in which the user is present, and L negative samples, in which the user is absent;
A2. Obtain the feature vector X of a training sample, X = (f_1(x), f_2(x), ..., f_K(x))^T, where f_k(x) denotes an image sample feature. Let y be the sample label, with y = 1 for a positive sample and y = 0 for a negative sample. The posterior probability that X is a positive sample is

P(y=1\mid X) = \frac{P(X\mid y=1)P(y=1)}{\sum_{y\in\{0,1\}} P(X\mid y)P(y)} = \delta\!\left(\ln\frac{P(X\mid y=1)P(y=1)}{P(X\mid y=0)P(y=0)}\right)   (1)

where the function \delta(z) is defined as

\delta(z) = \frac{1}{1+e^{-z}}   (2)

This establishes the classifier model

H_k(X) = \ln\frac{P(X\mid y=1)P(y=1)}{P(X\mid y=0)P(y=0)}   (3)

Combining formulas (1)-(3), the posterior probability can be expressed as

P(y=1\mid X) = \delta(H_k(X))   (4)

For the feature vector X, the classifier H_k(X) can be written as

H_k(X) = \ln\frac{P(X\mid y=1)P(y=1)}{P(X\mid y=0)P(y=0)} = \sum_{k=1}^{K} h_k(f_k(x))   (5)

where h_k(f_k(x)) denotes a weak classifier, so the strong classifier H_k(X) is composed of K weak classifiers.
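The strong classifier of Eq. (5) and its sigmoid posterior of Eqs. (2) and (4) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the weak classifier is assumed to be the Gaussian log-likelihood ratio implied by the conditional probabilities introduced later, and all function names are illustrative.

```python
import math

def sigmoid(z):
    # delta(z) = 1 / (1 + e^{-z}), Eq. (2)
    return 1.0 / (1.0 + math.exp(-z))

def weak_classifier(f, mu1, s1, mu0, s0):
    """Log-likelihood ratio of one feature under the two Gaussian class models
    (assumed form of h_k; equal priors assumed for simplicity)."""
    def log_gauss(v, mu, s):
        return -0.5 * math.log(2 * math.pi * s * s) - (v - mu) ** 2 / (2 * s * s)
    return log_gauss(f, mu1, s1) - log_gauss(f, mu0, s0)

def strong_classifier(features, params):
    # H_k(X) = sum_k h_k(f_k(x)), Eq. (5)
    return sum(weak_classifier(f, *p) for f, p in zip(features, params))

def posterior(features, params):
    # P(y=1 | X) = delta(H_k(X)), Eq. (4)
    return sigmoid(strong_classifier(features, params))
```

For example, with one feature whose positive model is N(1, 1) and negative model is N(-1, 1), a feature value of 1.0 yields a posterior above 0.5, while a value of 0.0 sits exactly at the decision boundary.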
The positive and negative samples are placed in two sets: the positive set \{X_{1j}\}, j = 0, ..., N-1, and the negative set \{X_{0j}\}, j = N, ..., N+L-1. Weak classifiers are selected repeatedly from the two sets over multiple groups of samples to build the combined classifier with the highest recognition rate. The posterior probability of a single sample is

P_{ij} = \delta(H_k(X_{ij}))   (6)

where i indexes the sample set (i = 1 for the positive set, i = 0 for the negative set) and j is the sample number. The conditional probabilities in the weak classifier h_k(f_k(x_{ij})) are assumed Gaussian:

p(f_k(x_{ij})\mid y=1) \sim N(\mu_1, \sigma_1)
p(f_k(x_{ij})\mid y=0) \sim N(\mu_0, \sigma_0)   (7)

where \mu_1, \sigma_1, \mu_0, \sigma_0 are updated incrementally:

\mu_1 \leftarrow \eta\mu_1 + (1-\eta)\frac{1}{N}\sum_{j\mid y_i=1} f_k(x_{ij})
\sigma_1 \leftarrow \eta\sigma_1 + (1-\eta)\sqrt{\frac{1}{N}\sum_{j\mid y_i=1}\left(f_k(x_{ij})-\mu_1\right)^2}   (8)

\mu_0 and \sigma_0 are updated in the same way. P_{ij} is obtained from formulas (7) and (8), and the posterior probability of sample set i can then be expressed as

P_i = 1 - \prod_j \left(1 - P_{ij}\right)
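The incremental Gaussian update of Eq. (8) can be written as a few lines. This sketch follows the formula literally; the learning-rate value and all names are assumptions, not from the patent.

```python
import math

def update_gaussian(mu, sigma, feats, eta=0.85):
    """Incremental update of one weak classifier's Gaussian model, Eq. (8).
    `feats` are the feature values f_k(x_ij) of the current batch of samples;
    `eta` is the learning rate (0.85 is an assumed value)."""
    n = len(feats)
    batch_mean = sum(feats) / n
    # Deviation is taken from the current mean parameter mu, as Eq. (8) is written.
    batch_std = math.sqrt(sum((f - mu) ** 2 for f in feats) / n)
    mu_new = eta * mu + (1 - eta) * batch_mean
    sigma_new = eta * sigma + (1 - eta) * batch_std
    return mu_new, sigma_new
```

With eta close to 1 the model changes slowly, retaining the appearance learned from earlier frames; smaller eta adapts faster to the newest samples.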
A3. A face region is manually marked in the first frame of the positive-sample images; a Kalman filter then tracks the face region through each subsequent frame, and the mark is manually corrected every 10 frames to reduce the filter's accumulated error;
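The predict-correct tracking loop of step A3 can be illustrated with a reduced Kalman form. As a hedged simplification, the sketch below tracks only one coordinate of the face box with a constant-velocity alpha-beta filter (a special case of the Kalman filter with fixed gains) and injects a manual correction every 10 frames; the gain values and all names are assumptions.

```python
def track_face(measurements, correction_every=10, corrections=None,
               alpha=0.85, beta=0.005):
    """Track one coordinate of the face region across frames.
    `corrections` maps frame index -> manually re-marked position, applied
    every `correction_every` frames to cancel accumulated drift (step A3)."""
    corrections = corrections or {}
    x, v = measurements[0], 0.0     # state: position and velocity
    out = []
    for k, z in enumerate(measurements):
        if k % correction_every == 0 and k in corrections:
            x, v = corrections[k], 0.0   # manual re-mark resets the filter
        x_pred = x + v                   # predict (constant-velocity model)
        r = z - x_pred                   # innovation: measurement residual
        x = x_pred + alpha * r           # correct position
        v = v + beta * r                 # correct velocity
        out.append(x)
    return out
```

A real implementation would track the full bounding box (x, y, width, height) with a proper Kalman gain; the structure of predict, correct, and periodic manual re-marking is the same.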
A4. For each face region output by the Kalman filter, features are extracted and assembled into feature vectors, forming a feature-vector set; weights are then applied to improve the posterior probability of this set:

P_1 = \sum_j w_{j0} P_{1j}

where w_{j0} is a monotonically decreasing weight function, w_{j0} = \exp\!\left(-\left|d(X_{1j})-d(X_{10})\right|/c\right), in which \left|d(X_{1j})-d(X_{10})\right| denotes the Euclidean distance of sample x_{1j} from the manually marked region of the first frame and c is a constant;
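The weighting of step A4 can be sketched directly. The exponential form of the monotonically decreasing weight and the normalization are assumptions consistent with the text (the source states only that w_j0 decreases with the Euclidean distance from the first-frame marked region and depends on a constant c); all names are illustrative.

```python
import math

def sample_weight(dist, c=10.0):
    """Monotonically decreasing weight for a positive sample as a function of
    its Euclidean distance from the first-frame manually marked region."""
    return math.exp(-dist / c)

def weighted_set_posterior(p_1j, dists, c=10.0):
    """Weighted posterior of the positive feature-vector set (step A4):
    samples nearer to the manually marked region contribute more."""
    w = [sample_weight(d, c) for d in dists]
    total = sum(w)
    return sum(wi * p for wi, p in zip(w, p_1j)) / total
```

With this weighting, a confident sample close to the marked region dominates the set posterior, while distant (likely drifted) samples are almost ignored.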
A5. After the posterior probability of the sample set is determined, the weak classifier is selected; the selection criterion is

h_k = \arg\max_{h}\; l\!\left(H_{k-1} + h\right)   (10)

where H_{k-1} is the strong classifier already comprising k-1 weak classifiers, and l is the log-likelihood function over the sets, defined as

l = \sum_i \left( y_i \log P_i + (1-y_i)\log(1-P_i) \right)   (11)
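The greedy selection of step A5 — adding, at each round, the candidate weak classifier that maximizes the set log-likelihood of Eq. (11) — can be sketched as follows. For simplicity each sample set is represented here by a single score, and candidate weak classifiers are represented by their precomputed scores; this is a hedged, simplified illustration and the names are not from the patent.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def log_likelihood(scores, labels):
    """l = sum_i ( y_i log P_i + (1 - y_i) log(1 - P_i) ), Eq. (11),
    with P_i = sigmoid(H(X_i)) as in Eq. (4)."""
    eps = 1e-12
    total = 0.0
    for s, y in zip(scores, labels):
        p = min(max(sigmoid(s), eps), 1 - eps)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return total

def select_weak(H_prev, candidates, labels):
    """Pick the candidate whose scores, added to the current strong-classifier
    scores H_prev, maximize the log-likelihood (greedy Boosting round)."""
    best_idx, best_l = None, -float("inf")
    for idx, h in enumerate(candidates):
        combined = [s + hs for s, hs in zip(H_prev, h)]
        l = log_likelihood(combined, labels)
        if l > best_l:
            best_idx, best_l = idx, l
    return best_idx, best_l
```

Running K such rounds yields the K weak classifiers that compose the strong classifier of Eq. (5).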
Preferably, the extracted user image features are Haar-like features.
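Haar-like features are sums of pixel intensities over adjacent rectangles, subtracted from each other; they are computed in constant time from an integral image (summed-area table). The sketch below shows one basic two-rectangle feature; it is a standard construction, not code from the patent, and the names are illustrative.

```python
def integral_image(img):
    """Summed-area table: any rectangle sum then costs four lookups."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = img[y][x] + ii[y][x + 1] + ii[y + 1][x] - ii[y][x]
    return ii

def rect_sum(ii, x, y, w, h):
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect_horizontal(ii, x, y, w, h):
    """Basic two-rectangle Haar-like feature: left half minus right half,
    responding to vertical intensity edges."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

Each such feature value would serve as one f_k(x) feeding a weak classifier.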
Preferably, to identify a user image captured by the image acquisition unit, the detection unit generates a feature vector from the acquired image and feeds it to its feature classifier;
the identification process further performs nearest-neighbor matching, based on Euclidean distance, between each feature vector identified as the user and the feature vectors of the face regions output by the Kalman filter; any feature vector whose distance exceeds a preset threshold is added to the Kalman-filter feature-vector set.
The purpose is as follows: a feature vector that passes the feature classifier is accepted as the user and is then matched against the face-region feature vectors. If the Euclidean distance between the two samples is below the preset threshold, they belong to the same class, i.e. the same class as the samples already in the feature-vector set; if it exceeds the threshold, a new sample is considered to have appeared, the new sample is added to the feature-vector set, and the feature classifier is retrained.
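The threshold test described above can be sketched in a few lines. This is a hedged illustration of the decision rule only; the function names and return convention are assumptions.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_neighbor_match(vec, face_set, threshold):
    """Match a classifier-accepted feature vector against the set of
    Kalman-tracked face-region feature vectors. Within the threshold the
    sample is the same class; beyond it, it is treated as a new sample,
    added to the set, and the classifier is flagged for retraining."""
    d = min(euclidean(vec, f) for f in face_set)
    if d <= threshold:
        return "same_class", face_set, False
    return "new_sample", face_set + [vec], True  # True => retrain classifier
```

The retraining flag closes the loop back to the Boosting procedure of steps A1-A5.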
The above description is only a preferred embodiment of the present invention; all equivalent changes or modifications of the structure, characteristics and principles described herein are included in the scope of the present invention.
Claims (6)
1. A face recognition system for a shared device, the system comprising: a shared-device main body, a goods scanner, an identity authentication device, a gateway, a goods cabinet and a charging unit;
the goods scanner and the charging unit are both connected to a control unit; goods are placed in the goods cabinet, and the goods scanner scans them, acquires the goods information and transmits it to the control unit; after the user's identity is confirmed by the identity authentication device, the shared device is unlocked, and after the user has taken goods and closed the cabinet door, the goods scanner rescans the goods, acquires the updated goods information and transmits it to the control unit; the charging unit charges via the control unit and deducts the fee from the user's registered account.
2. The system of claim 1, wherein the identity authentication device includes a cloud server, an image acquisition unit, a detection unit and a result output unit; the cloud server contains a database for storing user training samples; the image acquisition unit is mounted on the shared-device terminal, and the detection unit and the result output unit reside in the gateway; the detection unit holds a feature classifier trained on the training samples, and identifies the user images captured by the image acquisition unit to produce an identity authentication result; the result output unit passes this result to the gateway; the gateway extracts the user's usage habits and distributes them to each shared-device terminal, which automatically performs the operations matching those habits or presents corresponding options for the user to select.
3. The system of claim 2, wherein the feature classifier is trained with an improved Boosting scheme, and its training in the detection unit comprises the following steps:
A1. The user training samples are pre-collected whole-body images of the user in continuous motion. They comprise N positive samples, in which the user is present, and L negative samples, in which the user is absent;
A2. obtaining a feature vector X of the training sample, i.e. X ═ f1(x),f2(x),...fk(x))T(x) represents image sample features; the posterior probability that X is a positive sample can be expressed as y, y 1 represents a positive sample label, y 0 represents a negative sample label
<mrow> <mi>P</mi> <mrow> <mo>(</mo> <mi>y</mi> <mo>=</mo> <mn>1</mn> <mo>|</mo> <mi>X</mi> <mo>)</mo> </mrow> <mo>=</mo> <mfrac> <mrow> <mi>P</mi> <mrow> <mo>(</mo> <mi>X</mi> <mo>|</mo> <mi>y</mi> <mo>=</mo> <mn>1</mn> <mo>)</mo> </mrow> <mi>P</mi> <mrow> <mo>(</mo> <mi>Y</mi> <mo>=</mo> <mn>1</mn> <mo>)</mo> </mrow> </mrow> <mrow> <msub> <mi>&Sigma;</mi> <mrow> <mi>y</mi> <mo>&Element;</mo> <mo>{</mo> <mn>0</mn> <mo>,</mo> <mn>1</mn> <mo>}</mo> </mrow> </msub> <mi>P</mi> <mrow> <mo>(</mo> <mi>X</mi> <mo>|</mo> <mi>y</mi> <mo>)</mo> </mrow> <mi>P</mi> <mrow> <mo>(</mo> <mi>y</mi> <mo>)</mo> </mrow> </mrow> </mfrac> <mo>=</mo> <mi>&delta;</mi> <mrow> <mo>(</mo> <mi>l</mi> <mi>n</mi> <mo>(</mo> <mfrac> <mrow> <mi>P</mi> <mrow> <mo>(</mo> <mi>X</mi> <mo>|</mo> <mi>y</mi> <mo>=</mo> <mn>1</mn> <mo>)</mo> </mrow> <mi>P</mi> <mrow> <mo>(</mo> <mi>y</mi> <mo>=</mo> <mn>1</mn> <mo>)</mo> </mrow> </mrow> <mrow> <mi>P</mi> <mrow> <mo>(</mo> <mi>X</mi> <mo>|</mo> <mi>y</mi> <mo>=</mo> <mn>0</mn> <mo>)</mo> </mrow> <mi>P</mi> <mrow> <mo>(</mo> <mi>y</mi> <mo>=</mo> <mn>0</mn> <mo>)</mo> </mrow> </mrow> </mfrac> <mo>)</mo> <mo>)</mo> </mrow> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>1</mn> <mo>)</mo> </mrow> </mrow>
Wherein the function (z) is defined as
<mrow> <mi>&delta;</mi> <mrow> <mo>(</mo> <mi>z</mi> <mo>)</mo> </mrow> <mo>=</mo> <mfrac> <mn>1</mn> <mrow> <mn>1</mn> <mo>+</mo> <msup> <mi>e</mi> <mrow> <mo>-</mo> <mi>z</mi> </mrow> </msup> </mrow> </mfrac> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>2</mn> <mo>)</mo> </mrow> </mrow>
Thereby establishing a classifier model
<mrow> <msub> <mi>H</mi> <mi>k</mi> </msub> <mrow> <mo>(</mo> <mi>X</mi> <mo>)</mo> </mrow> <mo>=</mo> <mi>l</mi> <mi>n</mi> <mrow> <mo>(</mo> <mfrac> <mrow> <mi>P</mi> <mrow> <mo>(</mo> <mi>X</mi> <mo>|</mo> <mi>y</mi> <mo>=</mo> <mn>1</mn> <mo>)</mo> </mrow> <mi>P</mi> <mrow> <mo>(</mo> <mi>y</mi> <mo>=</mo> <mn>1</mn> <mo>)</mo> </mrow> </mrow> <mrow> <mi>P</mi> <mrow> <mo>(</mo> <mi>X</mi> <mo>|</mo> <mi>y</mi> <mo>=</mo> <mn>0</mn> <mo>)</mo> </mrow> <mi>P</mi> <mrow> <mo>(</mo> <mi>y</mi> <mo>=</mo> <mn>0</mn> <mo>)</mo> </mrow> </mrow> </mfrac> <mo>)</mo> </mrow> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>3</mn> <mo>)</mo> </mrow> </mrow>
Combining the formula (1) and the formula (2), the posterior probability can be expressed as
P(y=1|X)=(Hk(X)) (4)
Classifier H for feature vector Xk(X) can be represented as
<mrow> <msub> <mi>H</mi> <mi>k</mi> </msub> <mrow> <mo>(</mo> <mi>X</mi> <mo>)</mo> </mrow> <mo>=</mo> <mi>l</mi> <mi>n</mi> <mrow> <mo>(</mo> <mfrac> <mrow> <mi>P</mi> <mrow> <mo>(</mo> <mi>X</mi> <mo>|</mo> <mi>y</mi> <mo>=</mo> <mn>1</mn> <mo>)</mo> </mrow> <mi>P</mi> <mrow> <mo>(</mo> <mi>y</mi> <mo>=</mo> <mn>1</mn> <mo>)</mo> </mrow> </mrow> <mrow> <mi>P</mi> <mrow> <mo>(</mo> <mi>X</mi> <mo>|</mo> <mi>y</mi> <mo>=</mo> <mn>0</mn> <mo>)</mo> </mrow> <mi>P</mi> <mrow> <mo>(</mo> <mi>y</mi> <mo>=</mo> <mn>0</mn> <mo>)</mo> </mrow> </mrow> </mfrac> <mo>)</mo> </mrow> <mo>=</mo> <munderover> <mo>&Sigma;</mo> <mrow> <mi>k</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>K</mi> </munderover> <msub> <mi>h</mi> <mi>k</mi> </msub> <mrow> <mo>(</mo> <msub> <mi>f</mi> <mi>k</mi> </msub> <mo>(</mo> <mi>x</mi> <mo>)</mo> <mo>)</mo> </mrow> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>5</mn> <mo>)</mo> </mrow> </mrow>
Wherein, h_k(f_k(x)) denotes a weak classifier; the strong classifier H_k(X) is composed of K weak classifiers;
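As a non-limiting illustration, the relation between the logistic function (2), the strong classifier (5), and the posterior (4) can be sketched in Python; the weak-classifier responses h_k(f_k(x)) are assumed to be already computed:

```python
import math

def sigmoid(z):
    # Eq. (2): delta(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + math.exp(-z))

def strong_classifier(weak_responses):
    # Eq. (5): H_k(X) is the sum of the K weak responses h_k(f_k(x))
    return sum(weak_responses)

def posterior(weak_responses):
    # Eq. (4): P(y=1|X) = delta(H_k(X))
    return sigmoid(strong_classifier(weak_responses))
```

With no evidence (an empty or zero response list), the posterior is 0.5, as expected for the symmetric logistic function.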
Positive and negative samples are placed in two sets respectively: the positive sample set {X_1j} (j = 0, …, N−1) and the negative sample set {X_0j} (j = N, …, N+L−1). Weak classifiers are continuously selected from the positive and negative sample sets using multiple groups of samples, so as to construct the combined classifier with the highest recognition rate. The posterior probability of a known single sample is expressed as
P_ij = δ(H_k(X_ij))    (6)
Where i denotes the sample set number: i = 1 denotes the positive sample set and i = 0 the negative sample set; j is the sample index. The conditional probabilities in the classifier h_k(f_k(x_ij)) are assumed to be Gaussian, i.e.
p(f_k(x_ij) | y=1) ~ N(μ_1, σ_1)
p(f_k(x_ij) | y=0) ~ N(μ_0, σ_0)    (7)
Wherein, μ_1, σ_1, μ_0, σ_0 are updated incrementally:
μ_1 ← η·μ_1 + (1−η)·(1/N)·Σ_{j|y_i=1} f_k(x_ij)
σ_1 ← η·σ_1 + (1−η)·√( (1/N)·Σ_{j|y_i=1} (f_k(x_ij) − μ_1)² )    (8)
μ_0, σ_0 are updated in the same way. P_ij can then be obtained from formulas (7) and (8), so the posterior probability of sample set i can be expressed as
P_i = 1 − Π_j (1 − P_ij)    (9)
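The Gaussian weak-classifier model (7), the incremental parameter update (8), and the noisy-OR set posterior (9) can be sketched as follows. The learning rate η = 0.85 and the equal-prior assumption in the log-ratio are illustrative choices, not values fixed by the claims:

```python
import math

def gaussian_pdf(x, mu, sigma):
    # density of N(mu, sigma) evaluated at x
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

def weak_log_ratio(f, mu1, sigma1, mu0, sigma0):
    # weak-classifier response: log P(f|y=1)/P(f|y=0) under the Gaussian
    # assumption of Eq. (7); equal priors P(y=1) = P(y=0) are assumed here
    return math.log(gaussian_pdf(f, mu1, sigma1) / gaussian_pdf(f, mu0, sigma0))

def incremental_update(mu1, sigma1, features, eta=0.85):
    # Eq. (8): exponential-forgetting update of the positive-class Gaussian;
    # the learning rate eta = 0.85 is an assumed value
    n = len(features)
    mu_batch = sum(features) / n
    sd_batch = math.sqrt(sum((f - mu1) ** 2 for f in features) / n)
    return eta * mu1 + (1.0 - eta) * mu_batch, eta * sigma1 + (1.0 - eta) * sd_batch

def noisy_or(p_list):
    # Eq. (9): set posterior P_i = 1 - prod_j (1 - P_ij); the set is
    # positive as soon as any one of its samples is positive
    prod = 1.0
    for p in p_list:
        prod *= (1.0 - p)
    return 1.0 - prod
```

The noisy-OR form means a sample set is scored by its most confident members rather than by an average, which matches the "known single sample" formulation of (6).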
A3. Manually mark the face region in the first frame of the positive sample images; then obtain the face region in each subsequent frame using a Kalman filter, and perform manual marking correction every 10 frames to reduce the accumulated error of the Kalman filter;
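A minimal sketch of step A3's tracking loop, assuming a simple random-walk Kalman filter over the face-box center and illustrative noise levels q and r (the claims do not fix the filter's state model):

```python
def track_face_center(measurements, manual_marks, period=10, q=1e-2, r=1.0):
    """Random-walk Kalman filter over the face-box center (x, y).
    `manual_marks` maps a frame index to a hand-marked center; the noise
    levels q (process) and r (measurement) are illustrative assumptions."""
    x, y = manual_marks[0]            # A3: the first frame is marked by hand
    px = py = 1.0                     # initial position variances
    out = []
    for k, (zx, zy) in enumerate(measurements):
        # predict: random-walk model, so the state is kept, uncertainty grows
        px += q
        py += q
        # update with the detected center (zx, zy)
        kx = px / (px + r)
        ky = py / (py + r)
        x += kx * (zx - x)
        y += ky * (zy - y)
        px *= (1.0 - kx)
        py *= (1.0 - ky)
        # A3: every `period` frames, reset to the manual mark to cancel
        # the accumulated drift of the filter
        if (k + 1) % period == 0 and (k + 1) in manual_marks:
            x, y = manual_marks[k + 1]
            px = py = 1.0
        out.append((x, y))
    return out
```

The periodic reset is what bounds the accumulated error: between resets the estimate may drift with the detections, but it is re-anchored to ground truth every 10 frames.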
A4. For the face region output by the Kalman filter, extract its features, generate feature vectors, and form a feature vector set; set weights and refine the posterior probability of the feature vector set:
P_{i=1} = P(y=1 | X⁺) = Σ_{j=0..N−1} w_j0·P_i    (12)
Wherein, w_j0 is a weight function, monotonically decreasing in |d(X_1j) − d(X_10)|, where |d(X_1j) − d(X_10)| denotes the Euclidean distance between sample X_1j and the manually calibrated region of the first frame, and c is a constant;
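The explicit form of w_j0 is not reproduced above, so the sketch below assumes an exponential decay exp(−d/c), one plausible monotonically decreasing choice, and normalizes the weights so they sum to 1; the constant c = 10 is illustrative:

```python
import math

def weights(distances, c=10.0):
    # monotonically decreasing weight per positive sample; the exponential
    # form exp(-d / c) is an assumed choice, not taken from the claims
    w = [math.exp(-d / c) for d in distances]
    total = sum(w)
    return [v / total for v in w]

def weighted_posterior(posteriors, distances, c=10.0):
    # Eq. (12): weighted sum of per-sample posteriors over j = 0..N-1,
    # giving more influence to samples near the hand-marked first frame
    return sum(w * p for w, p in zip(weights(distances, c), posteriors))
```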
A5. After the posterior probability of the sample set is determined, the classifier is selected; the selection criterion is expressed as
h_k = argmax_{h ∈ {h_1, …, h_M}} l(H_{k−1} + h)    (10)
Wherein, H_{k−1} is a strong classifier comprising k−1 weak classifiers; l is a log-likelihood function of the set, defined as
l = Σ_i ( y_i·log P_i + (1 − y_i)·log(1 − P_i) )    (11).
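The greedy selection rule (10) with the set log-likelihood (11) can be sketched as follows; candidate weak classifiers are modeled as plain functions, and the clamping constant eps is an implementation detail, not part of the claims:

```python
import math

def bag_log_likelihood(bags, labels, strong):
    # Eq. (11): l = sum_i ( y_i log P_i + (1 - y_i) log(1 - P_i) ),
    # with P_i the noisy-OR set posterior of Eq. (9)
    eps = 1e-12                      # clamp to avoid log(0)
    total = 0.0
    for bag, y in zip(bags, labels):
        prod = 1.0
        for x in bag:
            p = 1.0 / (1.0 + math.exp(-strong(x)))   # Eqs (4)/(6)
            prod *= (1.0 - p)
        P = 1.0 - prod
        total += y * math.log(max(P, eps)) + (1 - y) * math.log(max(1.0 - P, eps))
    return total

def select_weak(candidates, chosen, bags, labels):
    # Eq. (10): pick the candidate h maximizing l(H_{k-1} + h), where
    # H_{k-1} is the sum of the weak classifiers already chosen
    def strong_with(h):
        return lambda x: sum(g(x) for g in chosen) + h(x)
    return max(candidates, key=lambda h: bag_log_likelihood(bags, labels, strong_with(h)))
```

Repeating `select_weak` K times, appending each winner to `chosen`, yields the combined strong classifier of (5).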
4. The system of claim 2, wherein: the extracted user image features are Haar-like features.
5. The system according to claim 2 or 3, wherein: the detection unit identifies the user image acquired by the image acquisition unit; that is, it acquires the user image, generates a feature vector, and inputs the feature vector into the feature classifier of the detection unit for identification;
the identification process further comprises performing nearest neighbor matching between the feature vectors identified as the user and the feature vectors of the face region output by the Kalman filter; the nearest neighbor matching is based on Euclidean distance, and feature vectors whose distance value is larger than a preset threshold are added to the feature vector set output by the Kalman filter.
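Following the claim wording literally (vectors whose nearest-neighbor distance exceeds the threshold are added, i.e. the set is enriched with appearance variants it does not yet contain), a sketch with an assumed threshold value:

```python
import math

def euclidean(a, b):
    # Euclidean distance between two feature vectors
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def nearest_neighbor_filter(candidates, tracked_set, threshold):
    # For each feature vector recognized as the user, find its nearest
    # neighbor in the Kalman-filter output set; per the claim, vectors
    # whose distance exceeds the preset threshold are added to the set
    for v in candidates:
        d = min(euclidean(v, t) for t in tracked_set)
        if d > threshold:
            tracked_set.append(v)
    return tracked_set
```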
6. The system of claim 2, wherein: the detection unit identifies the face of the user acquired by the image acquisition unit.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710501944.0A CN107316023A (en) | 2017-06-27 | 2017-06-27 | A kind of face identification system for being used to share equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107316023A true CN107316023A (en) | 2017-11-03 |
Family
ID=60180370
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102609678A (en) * | 2011-01-24 | 2012-07-25 | 台湾色彩与影像科技股份有限公司 | Intelligent self-service system for face recognition |
CN103544775A (en) * | 2013-10-25 | 2014-01-29 | 上海煦荣信息技术有限公司 | Vending machines, vending system and vending method for same |
CN203773629U (en) * | 2013-11-06 | 2014-08-13 | 上海煦荣信息技术有限公司 | Intelligent self-service vending system |
CN105427459A (en) * | 2014-09-17 | 2016-03-23 | 黄浩庭 | Self-help vending method and vending machine |
CN106204918A (en) * | 2016-03-08 | 2016-12-07 | 青岛海尔特种电冰柜有限公司 | Intelligence selling cabinet and intelligence selling system |
CN205722151U (en) * | 2016-04-06 | 2016-11-23 | 上海英内物联网科技股份有限公司 | A kind of self-selecting type Intelligent vending machine with face identification functions |
CN206097289U (en) * | 2016-04-26 | 2017-04-12 | 上海英内物联网科技股份有限公司 | Intelligent vending machine who has face identification function based on internet of things |
CN106469369A (en) * | 2016-11-03 | 2017-03-01 | 林杰 | A kind of automatic selling method based on Internet of Things and free supermarket |
Non-Patent Citations (1)
Title |
---|
MENG Fanjie et al.: "Research on a Panoramic 3D Tracking Algorithm Based on a Spherical Camera Model", China Masters' Theses Full-text Database, Information Science and Technology Series * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109147199A (en) * | 2018-07-09 | 2019-01-04 | 保山市质量技术监督综合检测中心 | A kind of device management method and system based on fingerprint recognition |
CN109711473A (en) * | 2018-12-29 | 2019-05-03 | 北京沃东天骏信息技术有限公司 | Item identification method, equipment and system |
CN111080917A (en) * | 2019-12-04 | 2020-04-28 | 万翼科技有限公司 | Self-help article borrowing and returning method and related equipment |
CN111080917B (en) * | 2019-12-04 | 2021-12-17 | 万翼科技有限公司 | Self-help article borrowing and returning method and related equipment |
CN111209567A (en) * | 2019-12-30 | 2020-05-29 | 北京邮电大学 | Method and device for judging perceptibility of improving robustness of detection model |
CN111209567B (en) * | 2019-12-30 | 2022-05-03 | 北京邮电大学 | Method and device for judging perceptibility of improving robustness of detection model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20171103 ||