CN109766844B - Mobile terminal identity authentication and memory method based on brooch equipment - Google Patents

Mobile terminal identity authentication and memory method based on brooch equipment

Info

Publication number
CN109766844B
CN109766844B
Authority
CN
China
Prior art keywords
face image
neural network
mobile terminal
brooch
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201910031147.XA
Other languages
Chinese (zh)
Other versions
CN109766844A (en)
Inventor
于永斌
张欢
黄航
唐浩文
王向向
雷飞
刘�英
邓东现
郭雨欣
王铭骁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201910031147.XA priority Critical patent/CN109766844B/en
Publication of CN109766844A publication Critical patent/CN109766844A/en
Application granted granted Critical
Publication of CN109766844B publication Critical patent/CN109766844B/en

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a mobile terminal identity authentication and memory method based on a brooch device, which lets a user learn the identity information of the other party in real time. The method mainly comprises the following steps. Step 1: the user downloads and installs an intelligent companion memory client on a mobile terminal. Step 2: the user starts the intelligent companion memory client, and the face image collected by the brooch device is transmitted to the mobile terminal through the mobile terminal's Bluetooth. Step 3: the mobile terminal forwards the received face image to the server, where a trained preset neural network classifies and recognizes it. Using only a simple brooch device and a mobile terminal, the method can rapidly identify the identity information of the other party, display the recognition and memory result in real time, and automatically prompt the user with the other party's name and related information, thereby greatly reducing the probability of mistaking a person or a name.

Description

Mobile terminal identity authentication and memory method based on brooch equipment
Technical Field
The invention relates to the technical field of electronic equipment, in particular to a mobile terminal identity authentication and memory method based on a brooch device.
Background
In the big-data era, data has grown explosively, and ever more of it must be processed. Business people handle many engagements and maintain a very wide circle of contacts, which makes it difficult for them to accurately remember the name and related information of everyone they meet in business conversation. They often mistake a person or a name during communication, causing unnecessary embarrassment and misunderstanding and leaving a bad impression on the other party; this directly affects subsequent communication and the establishment and maintenance of interpersonal relationships. Assisting business people to quickly and accurately identify the identity of a communication partner has therefore become an increasingly urgent need.
With age, people's memory gradually deteriorates, and accurately remembering the relatives and friends around them also becomes a difficult task for middle-aged and elderly people. They, too, need a tool to assist them in accurately identifying the identities of friends and relatives.
Disclosure of Invention
The invention aims to provide a mobile terminal identity identification and memory method based on a brooch device: the brooch device collects a face image, the image is sent to a mobile terminal through Bluetooth, a neural network classifies and recognizes the face, and the identity recognition and memory result is displayed on the mobile terminal with an automatic prompt to the user. The user can thus learn the identity information of the other party in real time, greatly reducing the probability of mistaking a person or a name.
In order to achieve the above purpose, the invention provides a mobile terminal identity authentication and memory method based on a brooch device, which comprises the following steps:
S1) The user downloads and installs an intelligent companion memory client on the mobile terminal.
S2) The user starts the intelligent companion memory client. If the user starts the client for the first time, the user must register at the client and import in advance a plurality of groups of face images together with the related contact information corresponding to each image; the imported groups are stored in a database at the server side and used as a training set to train a preset neural network at the server. After the training is finished, the client starts the brooch device and receives the face images it collects; finally, the Bluetooth of the mobile terminal is opened, the mobile terminal is paired with the brooch device, and the collected face images are transmitted to the mobile terminal over Bluetooth. If this is not the user's first start, the user first logs in to the client; the client then starts the brooch device, receives the collected face images, and, after Bluetooth pairing as above, the images are transmitted to the mobile terminal.
The brooch device captures the face image when the photographing button on the device is pressed.
S3) The mobile terminal transmits the received face image to the server, where the preset neural network trained in step S2) classifies and recognizes it, and the result is matched against the related contact information corresponding to the face images imported in advance into the server-side database. If the matching succeeds, matching-success information is displayed on the mobile terminal interface. If the matching fails, matching-failure information is displayed, the unmatched face image and the contact information corresponding to it are stored in an unmatched library in the server-side database, and after the face image is stored its contact information is added to the contact information library at the server side.
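For illustration only, the following is a minimal Python sketch of the terminal-to-server image upload in step S3). The patent does not specify the transport protocol between the mobile terminal and the server, so the HTTP endpoint, its URL and the JSON response fields shown here are assumptions, not part of the disclosed method.

```python
# Minimal sketch of the terminal-to-server upload in step S3).
# The transport protocol is not specified by the patent; the HTTP
# endpoint, its URL and the response fields are all assumptions.
import requests

SERVER_URL = "https://example-server/api/recognize"  # hypothetical endpoint

def upload_face_image(image_path: str, user_token: str) -> dict:
    """Send a face image captured by the brooch device to the server
    and return the matching result (success/failure plus contact info)."""
    with open(image_path, "rb") as f:
        response = requests.post(
            SERVER_URL,
            files={"face_image": f},
            headers={"Authorization": f"Bearer {user_token}"},
            timeout=10,
        )
    response.raise_for_status()
    return response.json()  # e.g. {"matched": True, "contact": {...}}
```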
In addition, the intelligent companion memory client comprises a history page for adding, deleting, modifying and querying the contact information in the contact information library and for viewing successfully matched contacts, a display page for the stored face images and their corresponding contact information, a client settings page, and the like.
The mobile terminal may be a smartphone, a tablet computer, or the like.
The mobile terminal identity identification and memory method based on a brooch device provided by the invention can rapidly identify the identity information of the other party using only a simple brooch device and a mobile terminal. The intelligent companion memory client in the mobile terminal displays the recognition and memory result in real time and automatically prompts the user with the other party's name and related information, so the user can learn the other party's identity in real time, carry out deeper and more active communication, and maintain good interpersonal relationships, greatly reducing the probability of mistaking a person or a name.
Drawings
FIG. 1 is a flow chart of the mobile terminal identity authentication and memorizing method of the present invention.
Fig. 2 is a detailed operation flowchart of the mobile terminal matching the face image according to the present invention.
FIG. 3 is a detailed operation flowchart of training a face image by a neural network algorithm at a mobile terminal server side according to the present invention.
FIG. 4 is a user registration and login interface of an intelligent companion memory client of the present invention.
Fig. 5 is a display page showing successful matching of the intelligent companion memory client according to the present invention.
Fig. 6 is a display page showing the failure of matching of the intelligent companion memory client according to the present invention.
FIG. 7 is the unmatched library page of the intelligent companion memory client of the present invention.
FIG. 8 is the page for adding a new contact in the intelligent companion memory client of the present invention.
Fig. 9 is a page showing the history of the successfully matched contacts viewed by the intelligent companion memory client according to the present invention.
Fig. 10 is a related information display page of the intelligent companion memory client that stores the face image of the contact according to the present invention.
Fig. 11 is an intelligent companion memory client setup page of the present invention.
FIG. 12 is a diagram of a neural network used in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention are described in further detail below with reference to the accompanying drawings and specific embodiments.
The invention provides a mobile terminal identity authentication and memory method based on a brooch device; its flow chart is shown in FIG. 1, and the method comprises the following steps:
S1) The user downloads and installs an intelligent companion memory client on the mobile terminal.
S2) The user starts the intelligent companion memory client. As shown in FIG. 4, if the user starts the client for the first time, the user clicks the [Register] button to register and imports in advance a plurality of groups of face images together with the related contact information corresponding to each image; the imported groups are stored in the server-side database and used as a training set to train a back-propagation (BP) neural network at the server. After the BP neural network training is completed, the client starts the brooch device and receives the face images it collects; finally, the Bluetooth of the mobile terminal is opened, the mobile terminal is paired with the brooch device, and the collected face images are transmitted to the mobile terminal over Bluetooth. If this is not the first start, the user enters the account password and clicks the [Login] button to log in; the client then starts the brooch device, receives the collected face images, and, after Bluetooth pairing as above, the images are transmitted to the mobile terminal. The brooch device captures the face image when its photographing button is pressed.
The training set comprises a plurality of groups of samples. Each group of samples comprises a plurality of attributes: one part of the attributes relates to the face image, and the rest relate to the related contact information corresponding to that face image. A minimal sketch of such a sample follows.
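As an illustration of this sample structure, the following Python sketch shows one possible in-memory representation; all field names are hypothetical and chosen only to mirror the attributes described above.

```python
# Sketch of one training sample as described above: attributes derived
# from the face image plus attributes derived from the corresponding
# contact information. Field names are illustrative, not from the patent.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class TrainingSample:
    face_features: np.ndarray   # 1 x 20 PCA feature vector (X_m)
    contact_label: np.ndarray   # encoded contact attributes (Y_m)
    name: str = ""              # e.g. the contact's name
    notes: str = ""             # e.g. position, meeting place

training_set: list[TrainingSample] = []  # filled from the server database
```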
S3) The mobile terminal transmits the received face image to the server, where the BP neural network trained in step S2) classifies and recognizes it, and the result is matched against the related contact information corresponding to the face images imported in advance into the server-side database. The detailed matching steps at the intelligent companion memory client of the mobile terminal are shown in FIG. 2:
If the matching succeeds, matching-success information is displayed on the mobile terminal interface, as shown in FIG. 5: the displayed image is the current face image acquired by the brooch device, and the displayed information is the user's description of the related contact information corresponding to that image, such as a place and a position. The user can click an icon in the client to choose whether to replace the pre-imported face image in the server-side database that has the same related contact information with the current face image acquired by the brooch device. If the user chooses to replace it, the current face image is stored in the server-side database and the replaced pre-imported face image is deleted; the server side then uses the current face image together with the contact information of the replaced pre-imported image as a new group of samples in the training set for subsequent BP neural network training. If the user chooses not to replace it, the current face image acquired by the brooch device is stored in the history page in the client;
If the matching fails, matching-failure information is displayed on the mobile terminal interface, and the current face image acquired by the brooch device, together with its corresponding contact information, is stored in the unmatched library in the server-side database, as shown in FIG. 6. The user can enter the unmatched library to view the unmatched face images and add or delete their contact information, as shown in FIG. 7. Specifically: the user clicks the [Add] icon to add contact information, such as a name and a position, for a face image in the unmatched library, as shown in FIG. 8; the user clicks the [Store] icon to store the current face image and its corresponding contact information in the server-side database, where they serve as a new group of samples in the training set for subsequent BP neural network training; the user clicks the [Delete] icon to delete a face image from the unmatched library. A deleted face image can still be viewed on the history page in the client, as shown in FIG. 9.
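The success/failure branching described above can be summarised by the following Python sketch; the classifier stub, the confidence threshold and the in-memory stores are illustrative stand-ins, not the disclosed server implementation.

```python
# Sketch of the match-success / match-failure branching described above.
# The classifier and the database are reduced to in-memory stand-ins;
# names and the confidence threshold are illustrative, not from the patent.
unmatched_library: list = []          # stand-in for the server-side table
contacts = {0: {"name": "Zhang San", "notes": "met in Chengdu"}}  # example

def classify(image):
    """Stand-in for the trained BP network: returns (label, confidence)."""
    return 0, 0.95  # a real implementation would run the forward pass

def handle_recognition(image, threshold=0.8):
    label, confidence = classify(image)
    if confidence >= threshold:                  # matching succeeded
        return {"matched": True, "contact": contacts.get(label)}
    unmatched_library.append(image)              # matching failed
    return {"matched": False}
```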
Furthermore, before the pre-imported groups of face images and their related contact information are used as the training set, the pre-imported face images must be preprocessed (a minimal sketch follows these steps):
a1) first, denoise the pre-imported face image;
a2) apply geometric correction to the denoised image of step a1), to calibrate the face position and eliminate the influence of scale change and rotation;
a3) because the face recognition rate is strongly affected by illumination and the average grey value differs between face images, apply grey-amplitude normalization to the geometrically corrected image of step a2);
a4) judge whether a face is present in the normalized image of step a3): if yes, go to step a41); if not, go to step a42);
a41) perform key-point detection and alignment calibration on the normalized image of step a3), convert it into a 1 x 20-dimensional feature column vector using the principal component analysis (PCA) algorithm, feed the feature vector together with the corresponding related contact information into the BP neural network as one group of sample input for training, and go to step a5);
a42) prompt the user on the mobile terminal interface that the face image is invalid and must be re-imported as a new pre-imported image, then go to step a1);
a5) import the next pre-imported face image and go to step a1), until all pre-imported face images have been preprocessed.
Further, as shown in FIG. 3, the method for training the BP neural network with the training set comprises the following steps:
b1) initialize the BP neural network: randomly assign a non-zero value to every weight and bias in the network, then go to step b2);
b2) input one group of samples into the BP neural network for learning, and compute the input and output values of the neurons in each layer in the forward direction;
b3) judge whether the actual output of the final output layer after step b2) matches the expected output: if it matches, go to step b31); if not, go to step b32);
b31) input the next group of samples for learning, then go to step b4);
b32) compute the local gradient value of each layer of the BP neural network according to the back-propagation algorithm, then go to step b33);
b33) correct every weight and bias in the BP neural network according to the computed local gradient values, then go to step b2);
b4) judge whether the BP neural network has learned all samples: if yes, go to step b5); if not, go to step b2);
b5) finish training.
the layer 3 BP neural network is used here because it can implement arbitrary non-curve mapping, and the neural network with a more multi-layer structure is not used because the neural network with a more multi-layer structure tends to fall into local minima of weight values and bias values when updated with a back-propagation algorithm, as shown in fig. 12, compared to the neural network with a 3-layer structure. The back propagation neural network (BP) has 20 neurons in an input layer, the calculation of the number of neurons in an implicit layer is shown as the formula 1.7, and the number of neurons in an output layer depends on the category number of related contact information in a server-side database, namely the attribute number of the related contact information.
To reduce the error rate, we use the modified Sigmoid function as the activation function of the BP neural network, as shown in equation 1.1. The activation function used by the BP neural network is shown as formula (1.1),
Figure BDA0001944275920000051
wherein χ = w 1 x 2 +w 2 x 2 +w 3 x 3 +...+w d x d + b, b are offset values and the mth group of samples is denoted as (X) m ,Y m ) Wherein m =1,2, 3.., q, q is the total number of samples, X m =(x 1 ,x 2 ,...,x d ) Representing the attribute, Y, associated with the face image in the mth group of samples m Representing an attribute, x, relating to relevant contact information corresponding to the face images in the mth set of samples p (p =1,2,3,. Ang., d) is the m-th group of samples at X m D is the value of the sample at X m The number of attributes in (1), i.e., the number of input layer neurons of the BP neural network, d =20 p (p =1,2, 3.., d) is x p And (4) corresponding weight values.
BP neural network is in the sample (X) m ,Y m ) The error function of (a) is shown in equation (1.2):
Figure BDA0001944275920000061
wherein n is the sample in Y m I.e., the number of BP neural network output neurons, n =1024,
Figure BDA0001944275920000062
the actual output at the ith output neuron for the mth sample,
Figure BDA0001944275920000063
representing the expected output of the mth sample at the ith output neuron.
For each set of samples, each weight and bias value modification rule in the BP neural network is as shown in equations (1.3) - (1.4), (1.5) and (1.6):
Figure BDA0001944275920000064
Figure BDA0001944275920000065
Figure BDA0001944275920000066
Figure BDA0001944275920000067
Figure BDA0001944275920000068
wherein, for each set of samples,
Figure BDA0001944275920000069
each represents a weight value pointing from the jth neuron in the l-1 th layer of the BP neural network to the ith neuron in the l-1 th layer,
Figure BDA00019442759200000610
all represent the calculation of the weight adjustment value of the ith neuron pointing from the jth neuron in the l-1 layer of the BP neural network to the ith neuron, E is an error function, and for each m groups of samples, the error function is calculated by adopting the method shown in the formula (1.2),
Figure BDA00019442759200000611
to calculateE pair
Figure BDA00019442759200000612
Partial derivatives, and, similarly, for each set of samples,
Figure BDA00019442759200000613
is the bias value of the ith neuron of the ith layer,
Figure BDA00019442759200000614
the calculation of the ith neuron bias adjustment value at the l-th layer,
Figure BDA00019442759200000615
to calculate E pairs
Figure BDA00019442759200000616
Partial derivative, rho is learning rate, the value is 0-1, N is the number of hidden layer neurons,
Figure BDA00019442759200000617
the rounding is shown, d is the number of the neurons of the input layer of the BP neural network, n is the number of the neurons of the output layer of the BP neural network, the constant a has a value ranging from 1 to 10, where a =10 because of more input characteristics, the number of layers of the BP neural network is 3, including the input layer at the 1 st layer, the hidden layer at the 2 nd layer and the output layer at the 3 rd layer, i.e. l =2,3.
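For readers who want to check the formulas, the following NumPy sketch implements the 3-layer network, the hidden-layer sizing of formula (1.7), the error of formula (1.2) and the corrections of formulas (1.3)-(1.6). Since the modified Sigmoid of formula (1.1) survives only as an image, the standard Sigmoid is used in its place; all other choices (initialization range, learning rate) are illustrative.

```python
# Numerical sketch of the 3-layer BP network of formulas (1.1)-(1.7).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BPNetwork:
    def __init__(self, d=20, n=1024, a=10, rho=0.1, seed=0):
        rng = np.random.default_rng(seed)
        N = int(np.ceil(np.sqrt(d + n))) + a          # formula (1.7): 43 here
        # step b1): non-zero random weights and biases
        self.W1 = rng.uniform(-0.5, 0.5, (N, d))
        self.b1 = rng.uniform(-0.5, 0.5, N)
        self.W2 = rng.uniform(-0.5, 0.5, (n, N))
        self.b2 = rng.uniform(-0.5, 0.5, n)
        self.rho = rho

    def forward(self, x):
        self.h = sigmoid(self.W1 @ x + self.b1)       # hidden layer (l = 2)
        self.y = sigmoid(self.W2 @ self.h + self.b2)  # output layer (l = 3)
        return self.y

    def train_step(self, x, t):
        y = self.forward(x)
        # local gradients of step b32), for sigmoid units
        delta2 = (y - t) * y * (1 - y)
        delta1 = (self.W2.T @ delta2) * self.h * (1 - self.h)
        # formulas (1.3)-(1.6): gradient-descent corrections, step b33)
        self.W2 -= self.rho * np.outer(delta2, self.h)
        self.b2 -= self.rho * delta2
        self.W1 -= self.rho * np.outer(delta1, x)
        self.b1 -= self.rho * delta1
        return 0.5 * np.sum((t - y) ** 2)             # formula (1.2)
```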
As shown in FIG. 10, in the embodiment of the present invention, clicking the [Contact] icon shows the contact information corresponding to the stored face images and its grouped display. In this embodiment the stored contact information is grouped by shooting location, such as Shanghai, Chengdu and Wuhan; the user may also group it in other ways as needed.
As shown in FIG. 11, in the embodiment of the present invention, clicking the [Settings] icon shows the setting information of the client, such as the location of a face image during classification and recognition, the specific time of classification and recognition, and the specific identity description of the face image.
Clicking the [Bluetooth] icon shows the Bluetooth pairing information.
Clicking the [Log out] icon logs out the current account; the user must log in again on the next entry.
In addition, the intelligent companion memory client comprises a history page for viewing the successfully matched contacts.
The mobile terminal may be a smartphone, a tablet computer, or the like.
The invention provides a mobile terminal identity identification and memory method based on a brooch device: a simple brooch device collects a face image, the image is sent to the mobile terminal through Bluetooth, a neural network classifies and recognizes the face, and the recognition and memory result for the other party is displayed on the mobile terminal in real time with an automatic prompt of the other party's name and related information. The user can thus learn the other party's identity in real time, carry out deeper and more active communication, and maintain good interpersonal relationships, greatly reducing the probability of mistaking a person or a name.
The specific embodiments of the present invention are described above, but the description is only illustrative and the interface diagrams are schematic. Many modifications may be made to adapt a particular situation to the teachings of the present invention without departing from its essential scope; such variations are obvious, and all inventions utilizing the concepts of the present invention are intended to be protected.

Claims (6)

1. A mobile terminal identity authentication and memory method based on a brooch device is characterized by comprising the following steps:
S1) a user downloads and installs an intelligent companion memory client on a mobile terminal;
S2) the user starts the intelligent companion memory client: if the user starts the client for the first time, the user must register at the client and import in advance a plurality of groups of face images and the related contact information corresponding to the face images; the imported groups of face images and related contact information are stored in a database at the server side and used as a training set to train a preset neural network at the server; after the preset neural network training is finished, the client starts the brooch device and receives the face images collected by the brooch device; finally, the Bluetooth of the mobile terminal is opened, the mobile terminal is paired with the brooch device, and the face images collected by the brooch device are transmitted to the mobile terminal through the Bluetooth of the mobile terminal; if the user does not start the client for the first time, the user first logs in to the client, then the client starts the brooch device and receives the face images collected by the brooch device; finally, the Bluetooth of the mobile terminal is opened, the mobile terminal is paired with the brooch device, and the face images collected by the brooch device are transmitted to the mobile terminal through the Bluetooth of the mobile terminal;
S3) the mobile terminal transmits the received face image collected by the brooch device to the server, and the server classifies and recognizes the face image using the preset neural network trained in step S2) and matches it with the related contact information corresponding to the face images imported in advance into the server-side database: if the matching succeeds, matching-success information is displayed on the mobile terminal interface, the displayed image is the current face image acquired by the brooch device, and the displayed information is the user's description of the related contact information corresponding to the current face image; if the matching fails, matching-failure information is displayed on the mobile terminal interface, and the current face image acquired by the brooch device together with its corresponding contact information is stored in an unmatched library in the server-side database;
the preset neural network is a back-propagation (BP) neural network, and the method for training the BP neural network with the training set comprises the following steps:
b1) initializing the BP neural network: randomly assigning a non-zero value to every weight and bias in the BP neural network, then entering step b2);
b2) inputting one group of samples into the preset neural network for learning, and computing the input and output values of the neurons in each layer of the preset neural network in the forward direction;
b3) judging whether the actual output of the final output layer of the preset neural network after step b2) matches the expected output: if it matches, entering step b31); if not, entering step b32);
b31) inputting the next group of samples into the preset neural network for learning, then entering step b4);
b32) computing the local gradient value of each layer of the preset neural network according to the back-propagation algorithm, then entering step b33);
b33) correcting every weight and bias in the preset neural network according to the computed local gradient values of each layer, then entering step b2);
b4) judging whether the preset neural network has learned all samples: if yes, entering step b5); if not, entering step b2);
b5) finishing the training;
the activation function of the BP neural network is a modified Sigmoid function expressed as
Figure FDA0003690118060000021
Wherein χ = w 1 x 2 +w 2 x 2 +w 3 x 3 +…+w d x d + b, b are offset values, and the m-th group of samples is denoted as (X) m ,Y m ) Wherein m = l,2, 3.., q, q is the total number of samples, X m =(x 1 ,x 2 ,...,x d ) Representing attributes associated with the face image in the mth group of samples, Y m Representing an attribute, x, associated with the associated contact information corresponding to the face image in the mth group of samples p P =1,2, 3.. D is the m-th group of samples at X m D is the value of the sample at X m I.e. the number of input layer neurons of the BP neural network, d =20 p P =1,2, 3.. D is x p A corresponding weight value;
for each group of samples, each weight and bias value modification rule in the BP neural network is as follows:
Figure FDA0003690118060000022
Figure FDA0003690118060000023
the number of hidden layer neurons is calculated as
Figure FDA0003690118060000024
Meaning that the rounding is up, wherein,
Figure FDA0003690118060000025
represents a weight value pointing from the jth neuron in layer l-1 of the BP neural network to the ith neuron in layer l,
Figure FDA0003690118060000026
representing the calculation of the weight adjustment value of the ith neuron pointing from the jth neuron in the l-1 layer of the BP neural network to the ith neuron, E is an error function, and for the mth group of samples, the error function
Figure FDA0003690118060000027
n is the attribute number of the sample in Ym, namely the number of the output neurons of the BP neural network, n =1024,
Figure FDA0003690118060000028
the actual output at the ith output neuron for the mth sample,
Figure FDA0003690118060000029
representing the expected output of the mth sample at the ith output neuron,
Figure FDA00036901180600000210
to calculate E pairs
Figure FDA00036901180600000211
The partial derivatives, in the same way,
Figure FDA00036901180600000212
is a firstThe bias value of the ith neuron of layer l,
Figure FDA00036901180600000213
the calculation of the ith neuron bias adjustment value at the l-th layer,
Figure FDA00036901180600000214
to calculate E pairs
Figure FDA00036901180600000215
The partial derivative, rho is the learning rate, the value is 0-1, d is the number of neurons in the input layer of the BP neural network, the value range of the constant a is 1-10, the number of layers of the BP neural network is 3, l =2,3, the input layer is positioned at the 1 st layer, the hidden layer is positioned at the 2 nd layer, and the output layer is positioned at the 3 rd layer.
2. The mobile terminal identity authentication and memory method according to claim 1, wherein in step S2), before the pre-imported groups of face images and the related contact information corresponding to the face images are used as the training set, the pre-imported groups of face images are preprocessed, and the preprocessing method comprises the following steps:
a1) first, denoising a pre-imported face image;
a2) performing geometric correction on the face image denoised in step a1);
a3) performing grey-amplitude normalization on the face image geometrically corrected in step a2);
a4) judging whether a face is present in the face image normalized in step a3): if yes, entering step a41); if not, entering step a42);
a41) performing key-point detection and alignment calibration on the face image normalized in step a3), converting it into a 1 x 20-dimensional feature column vector using the principal component analysis (PCA) algorithm, sending the feature column vector together with the related contact information corresponding to the face image to the back-propagation (BP) neural network as one group of sample input for training the BP neural network, and entering step a5);
a42) prompting the user on the mobile terminal interface that the face image is an invalid image and re-importing a new pre-imported face image, then entering step a1);
a5) importing the next pre-imported face image and entering step a1), until all pre-imported face images have been preprocessed.
3. The mobile terminal identity authentication and memory method according to claim 2, wherein in step S2) the brooch device captures the face image when the photographing button on the brooch device is pressed.
4. The mobile terminal identity authentication and memory method according to claim 3, wherein in step S3), when the matching succeeds, the user can further select whether to replace the pre-imported face image in the server-side database that has the same related contact information as the current face image with the current face image acquired by the brooch device; if the user selects replacement, the current face image acquired by the brooch device is saved in the server-side database and the replaced pre-imported face image is deleted, and the server side uses the current face image acquired by the brooch device together with the contact information corresponding to the replaced pre-imported face image as a new group of samples in the training set for subsequent training of the BP neural network; if the user selects not to replace it, the current face image acquired by the brooch device is stored in a history page in the client.
5. The mobile terminal identity authentication and memory method according to claim 3, wherein in step S3), when the matching fails, the user can enter the unmatched library to view the unmatched face image and add or delete its contact information; the user can also store the current face image acquired by the brooch device and its corresponding contact information in the server-side database as a new group of samples in the training set for subsequent training of the BP neural network; face images in the unmatched library can be deleted; and deleted face images can still be viewed.
6. The method according to claim 4 or 5, wherein the constant a takes the value a = 10.
CN201910031147.XA 2019-01-14 2019-01-14 Mobile terminal identity authentication and memory method based on brooch equipment Expired - Fee Related CN109766844B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910031147.XA CN109766844B (en) 2019-01-14 2019-01-14 Mobile terminal identity authentication and memory method based on brooch equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910031147.XA CN109766844B (en) 2019-01-14 2019-01-14 Mobile terminal identity authentication and memory method based on brooch equipment

Publications (2)

Publication Number Publication Date
CN109766844A CN109766844A (en) 2019-05-17
CN109766844B (en) 2022-10-14

Family

ID=66454004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910031147.XA Expired - Fee Related CN109766844B (en) 2019-01-14 2019-01-14 Mobile terminal identity authentication and memory method based on brooch equipment

Country Status (1)

Country Link
CN (1) CN109766844B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110163436A (en) * 2019-05-23 2019-08-23 西北工业大学 Intelligent workshop production optimization method based on bottleneck prediction
CN111899035B (en) * 2020-07-31 2024-04-30 西安加安信息科技有限公司 High-end wine authentication method, mobile terminal and computer storage medium
CN114359287A (en) * 2022-03-21 2022-04-15 青岛正信德宇信息科技有限公司 Image data processing method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107133576A (en) * 2017-04-17 2017-09-05 北京小米移动软件有限公司 Age of user recognition methods and device
CN107590141A (en) * 2017-10-19 2018-01-16 崔玉桂 A kind of user's meet prompt terminal and method
CN109117801A (en) * 2018-08-20 2019-01-01 深圳壹账通智能科技有限公司 Method, apparatus, terminal and the computer readable storage medium of recognition of face

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9042596B2 (en) * 2012-06-14 2015-05-26 Medibotics Llc Willpower watch (TM)—a wearable food consumption monitor
CN108596140A (en) * 2018-05-08 2018-09-28 青岛海信移动通信技术股份有限公司 A kind of mobile terminal face identification method and system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107133576A (en) * 2017-04-17 2017-09-05 北京小米移动软件有限公司 Age of user recognition methods and device
CN107590141A (en) * 2017-10-19 2018-01-16 崔玉桂 A kind of user's meet prompt terminal and method
CN109117801A (en) * 2018-08-20 2019-01-01 深圳壹账通智能科技有限公司 Method, apparatus, terminal and the computer readable storage medium of recognition of face

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Application research of BP neural networks in face recognition; Feng Yuhan; Computer CD Software and Applications; 2014-01-15; Vol. 17, No. 2; pp. 152, 154 *

Also Published As

Publication number Publication date
CN109766844A (en) 2019-05-17


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
CF01: Termination of patent right due to non-payment of annual fee

Granted publication date: 20221014