CN110399763B - Face recognition method and system - Google Patents

Face recognition method and system

Info

Publication number
CN110399763B
CN110399763B CN201810375426.3A
Authority
CN
China
Prior art keywords
mobile terminal
face image
face
terminal information
local
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810375426.3A
Other languages
Chinese (zh)
Other versions
CN110399763A (en
Inventor
黄源浩
江隆业
彭勋录
司马潇
许星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orbbec Inc
Original Assignee
Orbbec Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Orbbec Inc filed Critical Orbbec Inc
Priority to CN201810375426.3A priority Critical patent/CN110399763B/en
Publication of CN110399763A publication Critical patent/CN110399763A/en
Application granted granted Critical
Publication of CN110399763B publication Critical patent/CN110399763B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00 Payment architectures, schemes or protocols
    • G06Q20/38 Payment protocols; Details thereof
    • G06Q20/40 Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401 Transaction verification
    • G06Q20/4014 Identity check for transactions
    • G06Q20/40145 Biometric identity checks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Security & Cryptography (AREA)
  • Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Accounting & Taxation (AREA)
  • Strategic Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • General Business, Economics & Management (AREA)
  • Finance (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses a face recognition method and system. The method comprises the following steps: acquiring a face image of a current user; acquiring a local mobile terminal information set; acquiring a local reference face image set corresponding to the local mobile terminal information set according to a global mobile terminal information set and a global reference face image set corresponding to the global mobile terminal information set; and matching the face image of the current user against the local reference face image set to perform face recognition. By acquiring the local mobile terminal information set and deriving the corresponding local reference face image set from the global sets, the global reference face image set with a huge data volume is reduced to a local reference face image set with a small data volume. The method can therefore greatly improve face recognition efficiency and accuracy, and since the user does not need to actively input identity information, the user experience is also improved.

Description

Face recognition method and system
Technical Field
The invention relates to the technical field of computer and electronic equipment application, in particular to a face recognition method and system.
Background
Face recognition has gradually entered daily life and is applied in many fields such as security inspection and traffic. The core problem of face recognition is how to improve recognition accuracy and recognition speed. With the continuous development of recognition hardware and recognition algorithms, the application field of face recognition keeps expanding; for example, consumer-grade depth cameras and mature deep learning algorithms have brought a highly secure 3D face payment scheme to the payment field.
Face recognition can be divided into 1v1 recognition and 1vN recognition. In 1v1 recognition, the current face (1) is compared with a single pre-stored reference face (1); in 1vN recognition, the current face (1) is compared with N pre-stored reference faces to find the target face that matches the current face. For 1v1 recognition, the recognition accuracy and speed have already reached commercial levels; for 1vN recognition, when N reaches hundreds of thousands, millions, or even more, the recognition accuracy and speed cannot yet reach commercial levels.
Disclosure of Invention
Aiming at 1vN recognition, the invention provides a face recognition method and system which can greatly improve face recognition speed and accuracy while providing a good user experience.
The face recognition method provided by the invention comprises the following steps: collecting a face image of a current user; acquiring a local mobile terminal information set; acquiring a local reference face image set corresponding to the local mobile terminal information set according to a global mobile terminal information set and a global reference face image set corresponding to the global mobile terminal information set; and matching the face image of the current user with the local reference face image to perform face recognition.
In some embodiments, the apparatus for obtaining a local mobile terminal information set includes: one or more of a base station, a WIFI device, a Bluetooth device and a GPS device; the face image comprises one or more of a color image, an infrared image, a gray image and a depth image.
In some embodiments, the method further comprises: executing one or more tasks, including face unlocking, face payment, and face security inspection, according to the face recognition result.
The present invention also provides a face recognition system, comprising: a face image acquisition terminal for acquiring a face image of a current user; a positioning terminal for acquiring a local mobile terminal information set that includes the current user within its coverage area; a server for storing a global mobile terminal information set and a global reference face image set corresponding to the global mobile terminal information set; and a processor configured to: acquire a local reference face image set corresponding to the local mobile terminal information set according to the global mobile terminal information set and the global reference face image set, and match the face image of the current user against the local reference face image set to perform face recognition.
In other embodiments, the present invention provides a face recognition system, comprising: a positioning terminal; a face image acquisition terminal for acquiring a face image of a current user and extracting ID information of the positioning terminal; a server for storing a global mobile terminal information set and a global reference face image set corresponding to the global mobile terminal information set; and a processor configured to: locate, according to the ID information of the positioning terminal, the mobile terminal information having the same ID information as the positioning terminal from the global mobile terminal information set to form a local mobile terminal information set, form all reference face images corresponding to the mobile terminal information in the local mobile terminal information set into a local reference face image set, and match the face image of the current user against the local reference face image set to perform face recognition.
In some embodiments, the processor is further configured to perform one or more tasks, including face payment, face unlocking, and face security check, based on the face recognition result.
The invention has the following beneficial effects: a local mobile terminal information set is acquired, and the local reference face image set corresponding to it is obtained from the global mobile terminal information set and its corresponding global reference face image set, so that the global reference face image set with a huge data volume is reduced to a local reference face image set with a small data volume; the face image of the current user is then matched against the local reference face image set to perform face recognition. This greatly improves face recognition efficiency and accuracy, and since the user does not need to actively input identity information, the user experience is also improved.
Drawings
Fig. 1 is a schematic diagram of a face recognition system in the prior art.
FIG. 2 is a schematic diagram of a face recognition system according to an embodiment of the invention.
Fig. 3 is a schematic diagram of a face recognition method according to an embodiment of the invention.
Detailed Description
The present invention will be described in further detail with reference to the following detailed description and the accompanying drawings. It should be emphasized that the following description is illustrative only and is not intended to limit the scope or application of the present invention.
A face recognition system in the prior art is shown in fig. 1, and mainly includes a server and a face image acquisition terminal. The server stores a large number (for example, M) of reference face images and their corresponding identity information, where the identity information may include: an account number, a name, an identification card number, and mobile terminal information (such as a mobile phone number). The face image acquisition terminal comprises a camera for acquiring a face image of the current user. In addition, the server or the face image acquisition terminal further comprises a processor and a memory storing an application program for the face recognition task, so as to perform face recognition. In one embodiment, the server is a payment cloud server, and the face image acquisition terminal is smart hardware provided with a payment terminal application program.
When a user needs to perform a face recognition task, such as face-scan payment/unlocking or security clearance, the prior art improves the recognition speed and accuracy of the face recognition system by having the user to be recognized input identity information into the system, thereby converting 1vN recognition into 1v1 recognition. The procedure mainly comprises the following steps:
First, when the user brings his or her face (hereinafter referred to as the current face) close to the face image acquisition terminal, its internal camera acquires a face image of the current face. The camera may be a two-dimensional camera or a three-dimensional depth camera, and the face image may be one or more of a color image, an infrared image, a grayscale image, a depth image, and the like. In one embodiment, the camera is an RGBD camera, and the acquired current face image includes a depth image and a color image, i.e., an RGBD image.
Second, the user inputs corresponding identity information, such as an account number, a name, an identification card number, or mobile terminal information (such as a mobile phone number), into the face image acquisition terminal. After acquiring the identity information, the face image acquisition terminal feeds it back to the server, and the user's reference face image corresponding to that identity information is extracted from the server. It should be noted that the server, acting as the backend, has already stored a massive amount (assumed to be M) of identity information and the corresponding reference face images in advance through account registration, real-name authentication, and the like, for example by having the face images uploaded through personal mobile terminals; M may be in the tens or even hundreds of millions.
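Purely to fix ideas, the following minimal sketch shows one way such a server-side global store could be organized, keyed by mobile terminal information. All names (GlobalStore, compute_face_embedding, the placeholder embedding) are hypothetical and not taken from the patent; the patent only states that reference face images and identity information are stored in advance and bound together.

```python
from dataclasses import dataclass, field
from typing import Dict

import numpy as np


def compute_face_embedding(image: np.ndarray) -> np.ndarray:
    """Placeholder embedding: a real system would run a trained face model here."""
    flat = image.astype(np.float32).ravel()
    vec = flat[:128] if flat.size >= 128 else np.pad(flat, (0, 128 - flat.size))
    return vec / (np.linalg.norm(vec) + 1e-9)


@dataclass
class IdentityRecord:
    account: str
    name: str
    mobile_number: str               # mobile terminal information, e.g. phone number
    reference_embedding: np.ndarray  # feature vector of the reference face image


@dataclass
class GlobalStore:
    """Global identity information set and global reference face image set."""
    records: Dict[str, IdentityRecord] = field(default_factory=dict)

    def register(self, account: str, name: str, mobile_number: str,
                 reference_image: np.ndarray) -> None:
        """Enrolment via account registration / real-name authentication."""
        self.records[mobile_number] = IdentityRecord(
            account, name, mobile_number, compute_face_embedding(reference_image))
```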
Finally, a processor in the server or in the face image acquisition terminal executes the face recognition task on the current face image and the reference face image, which includes extracting features from the current face image and the reference face image and matching the two sets of features.
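As an illustration of this final step, a minimal 1v1 comparison could look like the sketch below, assuming both images have already been reduced to fixed-length feature vectors by some face model. The cosine-similarity metric and the 0.6 threshold are assumptions for the example; the patent only states that features are extracted and matched.

```python
import numpy as np


def match_1v1(current_feat: np.ndarray, reference_feat: np.ndarray,
              threshold: float = 0.6) -> bool:
    """1v1 decision: do the current face and the single reference belong to the same person?"""
    # Normalize both feature vectors and compare their cosine similarity to a threshold.
    a = current_feat / (np.linalg.norm(current_feat) + 1e-9)
    b = reference_feat / (np.linalg.norm(reference_feat) + 1e-9)
    return float(np.dot(a, b)) >= threshold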
When the recognition succeeds, the face image acquisition terminal can further execute a subsequent task. For example, for a security check task, the security check equipment is controlled to allow the user to pass; for a payment task, the corresponding deduction operation is executed and the user is notified that the payment succeeded (the user can also be prompted to input a password when the deduction is executed, to ensure payment security); and for an unlocking task, unlocking is triggered.
In the above method, the user is required to actively input identity information so that the backend (server) can retrieve the reference face image corresponding to that identity information, enabling subsequent 1v1 face recognition and thereby improving recognition efficiency and accuracy. However, this method has several disadvantages: the user experience is poor, the operation takes a long time, and the operation time varies between users, making the process barely different from conventional fingerprint recognition or two-dimensional-code recognition; in addition, the operation is cumbersome and cannot provide the user with a quick and convenient experience.
To solve the above problems, the present embodiment provides a face recognition system, as shown in fig. 2. The system includes: a server, a face image acquisition terminal, and a positioning terminal.
The server stores a large number (for example, M) of reference face images and their corresponding identity information, which may include one or more of: an account number, a name, an identification card number, and mobile terminal information (such as a mobile phone number); different items of identity information are bound to each other so that they ultimately remain consistent. For convenience of description, the reference face images and their corresponding identity information are referred to as the global reference face image set and the global identity information set (including a global account number set, a global name set, a global mobile terminal information set, etc.). In some embodiments, the server further comprises a processor for performing data processing, such as executing face recognition tasks, and may further comprise a communication interface.
The face image acquisition terminal comprises a camera for acquiring a face image of the current user, where the current user refers to the target user of the face recognition. The camera may be a two-dimensional camera or a three-dimensional depth camera, and the face image may be one or more of a color image, an infrared image, a grayscale image, a depth image, and the like. In one embodiment, the camera is an RGBD camera, and the acquired face image of the current user comprises a depth image and a color image, i.e., an RGBD image. In another embodiment, the camera is a structured-light depth camera, and the acquired face image of the current user comprises an infrared structured-light image; compared with a depth image, a grayscale image, or a color image, the infrared structured-light image implicitly contains depth information while also containing the texture information found in grayscale and color images, so performing face recognition on the infrared structured-light image combines the advantages of using both a depth image and a texture image. In some embodiments, the face image acquisition terminal further includes a processor for performing data processing and control, such as executing a face recognition task, and may further include a communication interface for communicating with the server.
The positioning terminal is used for acquiring mobile terminal information within its coverage area. Generally, the face image acquisition terminal is arranged within the coverage area of the positioning terminal, so that when the current user performs face recognition, the positioning terminal can acquire the information of the mobile terminal carried by the current user. It should be noted that within the range covered by the positioning terminal there are usually not only the target user but also other users, so the positioning terminal acquires a plurality (assumed to be N) of pieces of mobile terminal information, including that of the current user; for convenience of description, these pieces of mobile terminal information are referred to as the local mobile terminal information set. It will be appreciated that the local mobile terminal information set should be a subset of the global mobile terminal information set.
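Conceptually, deriving the local reference face image set from the global sets is a simple filtering step. The sketch below (hypothetical names and data layout, not specified by the patent) shows one way this could be done once the positioning terminal has reported the local mobile terminal information set:

```python
from typing import Dict, Iterable

import numpy as np


def build_local_reference_set(
        global_refs: Dict[str, np.ndarray],
        local_terminal_info: Iterable[str]) -> Dict[str, np.ndarray]:
    """Keep only the reference faces whose mobile terminal info is currently in range.

    global_refs maps mobile terminal information (e.g. a phone number) to a reference
    face image or feature vector; local_terminal_info is the set of N terminals
    reported by the positioning terminal. The result is the local reference face
    image set, a small subset of the global set.
    """
    return {tid: global_refs[tid]
            for tid in local_terminal_info if tid in global_refs}
```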
The positioning terminal may include, but is not limited to, the following:
(1) Base station
When a mobile terminal enters the range covered by a base station, the base station identifies the mobile terminal information, such as the mobile phone number or terminal ID (the mobile terminal information is bound to the face recognition application account). Since the radius covered by a base station is often small, for example several hundred meters, the number of users entering the range is usually not too large, for example several thousand to tens of thousands. Thus, the positioning terminal can identify the number of mobile terminals (say, N) within its range and the corresponding mobile terminal information, and store that information in its memory.
(2) WIFI devices, e.g. WIFI transmitters
When a mobile terminal enters the range covered by the WIFI device, the WIFI device identifies the mobile terminal information and stores it in its memory. Generally, the radius of coverage of a WIFI device is several meters to several tens of meters, smaller than that of a base station, so when a WIFI device is used as the positioning terminal, the number of mobile terminals in its coverage is at most a few hundred, and often fewer.
(3) Bluetooth device
When the mobile terminal enters the range covered by the Bluetooth device, the Bluetooth device identifies the mobile terminal information and stores the mobile terminal information in a memory of the Bluetooth device.
(4) GPS equipment
After the GPS module inside a mobile terminal is turned on, the mobile terminal is identified and located, and the mobile terminal information (including account information, location, and the like) is stored in memory. Unlike the base station, WIFI device, and Bluetooth device, the number of mobile terminals recorded via GPS is very large and may even exceed the number of mobile terminals stored in the server. Therefore, when positioning is performed, a certain area is usually selected, for example a range of a certain radius centered on the face image acquisition terminal (a distance-based selection sketch is given further below).
(5) Two or more of a base station, a WIFI device, a Bluetooth device and a GPS device.
The number of mobile terminals that can be identified via GPS is large, but GPS is easily limited by cloud cover and obstructions; the numbers of mobile terminals that can be identified by Bluetooth and WIFI devices are relatively small, but these devices are little affected by obstructions and are low in cost; the number of mobile terminals that can be identified by a base station is relatively stable, but the cost is high. Using two or more of these devices therefore balances cost and improves positioning accuracy. When the positioning terminal adopts two or more of a base station, a WIFI device, a Bluetooth device, and a GPS device, the obtained mobile terminal information may be the union of the mobile terminal information sets obtained by the individual devices, or the intersection of those sets. It will be appreciated that the positioning terminal may also be any other device capable of identifying or locating mobile terminals.
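Combining several positioning devices thus amounts to a set union or intersection over the per-device mobile terminal information sets. A minimal sketch (hypothetical helper name) follows:

```python
from functools import reduce
from typing import Iterable, Set


def combine_terminal_sets(per_device_sets: Iterable[Set[str]],
                          mode: str = "union") -> Set[str]:
    """Merge the mobile terminal information sets reported by several positioning
    devices (base station, WIFI, Bluetooth, GPS).

    mode="union" keeps any terminal seen by at least one device;
    mode="intersection" keeps only terminals seen by every device.
    """
    sets = list(per_device_sets)
    if not sets:
        return set()
    op = set.union if mode == "union" else set.intersection
    return reduce(op, sets)
```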
The positioning terminal automatically locates and identifies mobile terminals within its coverage area; a mobile terminal is generally a device carried by a user and associated with that user's identity, such as a mobile phone, a tablet, or a laptop. When a user carrying a mobile terminal enters the range covered by the positioning terminal, the mobile terminal is identified, and the positioning terminal updates the mobile terminal information within its coverage in real time; the positioning terminal can thus be considered to acquire, in real time, the mobile terminal information of all users within its coverage, i.e., the local mobile terminal information set. The number of users in the local mobile terminal information set varies with the type of positioning terminal and the local population density, from a few dozen to tens of thousands, but it is far smaller than the tens or hundreds of millions of users in the global mobile terminal information set.
The face image acquisition terminal and/or the server contain one or more processors (sub-processors) for executing the face recognition task. In one embodiment, the face recognition task includes the following steps, as shown in fig. 3:
S1, collecting a face image of a current user;
S2, acquiring a local mobile terminal information set;
S3, acquiring a local reference face image set corresponding to the local mobile terminal information set according to the global mobile terminal information set and a global reference face image set corresponding to the global mobile terminal information set;
S4, matching the acquired face image of the current user against the local reference face image set to achieve face recognition.
Step S1 is executed by the processor in the face image acquisition terminal controlling the camera; step S2 may be executed by the positioning terminal or the server; step S3 is performed by the server; and step S4 may be executed by the server or by the face image acquisition terminal. In the system shown in fig. 2, the specific procedure differs depending on which terminals execute steps S2 and S4.
In some embodiments, when the positioning terminal executes step S2 and the face image acquisition terminal executes step S4, the specific process is as follows:
S11, the user approaches the camera in the face image acquisition terminal, and the processor controls the camera to acquire the face image of the current user.
S21, the face image acquisition terminal communicates with both the positioning terminal and the server. The positioning terminal updates in real time the local mobile terminal information set within its coverage, and the processor of the face image acquisition terminal reads the local mobile terminal information set from the positioning terminal and transmits it to the server.
S31, after the server receives the local mobile terminal information set, its processor finds, by matching search or similar means against the global mobile terminal information set and the global reference face image set, the local reference face images corresponding to all the mobile terminal information in the local mobile terminal information set, and forms them into a local reference face image set. The local reference face image set is then transmitted to the face image acquisition terminal.
S41, the processor in the face image acquisition terminal matches the acquired face image of the current user against the local reference face image set to achieve face recognition, for example determining whether the current user is in the local reference face image set, or confirming the identity information (such as mobile terminal information) of the current user from the local reference face image set.
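A minimal sketch of this 1vN step against the (already much smaller) local reference set is given below. It assumes the face images have been reduced to feature vectors and that cosine similarity with a fixed threshold decides the match; both are illustrative assumptions, not choices specified by the patent.

```python
from typing import Dict, Optional

import numpy as np


def identify_1vn(current_emb: np.ndarray,
                 local_refs: Dict[str, np.ndarray],
                 threshold: float = 0.6) -> Optional[str]:
    """Return the mobile terminal info of the best-matching local reference face,
    or None if no reference reaches the similarity threshold (user not in the set)."""
    cur = current_emb / (np.linalg.norm(current_emb) + 1e-9)
    best_id, best_sim = None, threshold
    for tid, ref in local_refs.items():
        sim = float(np.dot(cur, ref / (np.linalg.norm(ref) + 1e-9)))
        if sim >= best_sim:
            best_id, best_sim = tid, sim
    return best_id
```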
In other embodiments, when the positioning terminal performs step S2 and the server performs step S4, the specific process is as follows:
S12, the processor in the face image acquisition terminal controls the camera to acquire the face image of the current user and sends it to the server.
S22, the face image acquisition terminal communicates with both the positioning terminal and the server; the positioning terminal updates in real time the local mobile terminal information set within its coverage, and the processor of the face image acquisition terminal reads the local mobile terminal information set from the positioning terminal and transmits it to the server.
S32, after the server receives the local mobile terminal information set, its processor finds, by matching search or similar means against the global mobile terminal information set and the global reference face image set, the local reference face images corresponding to all the mobile terminal information in the local mobile terminal information set, and forms them into a local reference face image set. The local reference face image set is then transmitted to the face image acquisition terminal.
S42, the processor in the server matches the face image of the current user against the local reference face image set to achieve face recognition, and sends the recognition result to the face image acquisition terminal.
Compared with the prior art, this embodiment does not require the user to manually input identity information; instead, the positioning terminal automatically identifies the N mobile terminals carried by users within its coverage, thereby converting 1vM recognition into 1vN recognition. The whole face recognition process only requires the user to face the camera, with no additional operations, and the resulting experience is excellent.
In some embodiments, permission issues between the positioning terminal and the face image acquisition terminal must be considered. For example, when the positioning terminal is a base station and the face recognition terminal is a device owned by an ordinary merchant, the merchant usually does not have permission to read other users' information directly from the base station. In that case it is difficult to read from the base station which users are currently within its range, i.e., the local mobile terminal information set cannot be read. To solve this problem, one embodiment exploits the fact that a mobile terminal does have permission to read information about the base station it is attached to (such as the base station ID): when the mobile terminal carried by a user enters the coverage area of a base station (or another positioning terminal), the mobile terminal extracts the ID of that base station and reports it to the server, so the mobile terminal information stored in the server also includes base station ID information. In addition, the face image acquisition terminal is equipped with corresponding hardware for communicating with the base station (such as a SIM card and matching communication hardware), so that it can be identified by the base station and can read the base station ID. When executing a face recognition task, the face image acquisition terminal reports the current base station ID to the server; the server then locates all mobile terminal information under that ID to form the local mobile terminal information set, generates the corresponding local reference face image set, and a processor in the server or in the face image acquisition terminal executes the matching of the current user's face image against that reference face image set.
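A minimal sketch of the server-side step in this variant is given below; the field and table names are hypothetical. Each mobile terminal is assumed to have reported the ID of the base station it is attached to, the face image acquisition terminal reports its own base station ID, and the server keeps only the terminals and reference faces under that ID.

```python
from typing import Dict, Set, Tuple

import numpy as np


def local_sets_for_base_station(
        base_station_id: str,
        terminal_to_station: Dict[str, str],    # mobile terminal info -> reported base station ID
        global_refs: Dict[str, np.ndarray],     # mobile terminal info -> reference face embedding
) -> Tuple[Set[str], Dict[str, np.ndarray]]:
    """Locate all mobile terminal information under the reported base station ID and
    gather the corresponding local reference face image set on the server side."""
    local_terminals = {tid for tid, sid in terminal_to_station.items()
                       if sid == base_station_id}
    local_refs = {tid: global_refs[tid]
                  for tid in local_terminals if tid in global_refs}
    return local_terminals, local_refs
```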
The more specific steps of this process are as follows:
S13, the user approaches the camera in the face image acquisition terminal, and the processor controls the camera to acquire the face image of the current user.
S23, the face image acquisition terminal communicates with the server, reads the ID information of the positioning terminal, and uploads it to the server.
S33, after receiving the ID information of the positioning terminal, the server searches the global mobile terminal information set for all mobile terminal information carrying the same ID information to form a local mobile terminal information set, and simultaneously forms all reference face images corresponding to the mobile terminal information in the local mobile terminal information set into a local reference face image set. The local reference face image set is then transmitted to the face image acquisition terminal.
S43, the processor in the face image acquisition terminal matches the acquired face image of the current user against the local reference face image set to achieve face recognition, for example determining whether the current user is in the local reference face image set, or confirming the identity information (such as mobile terminal information) of the current user from the local reference face image set.
It can be understood that, in this embodiment, the face recognition may also be directly performed by the server, and the server sends the face recognition result to the face image acquisition terminal.
In some embodiments, the local users near the current user may also be located in other ways, so as to reduce the number of candidate users against whom faces are matched.
The term face image should be understood in a broad sense: it may be a face image directly acquired by the camera, a feature representation obtained by extracting features from the face image, or other attributes capable of reflecting the characteristics of the face.
The foregoing is a more detailed description of the invention in connection with specific/preferred embodiments and is not intended to limit the practice of the invention to those descriptions. It will be apparent to those skilled in the art that numerous alterations and modifications can be made to the described embodiments without departing from the inventive concepts herein, and such alterations and modifications are to be considered as within the scope of the invention.

Claims (7)

1. A face recognition method, comprising:
the face image acquisition terminal acquires a face image of a current user and reports the ID of a current positioning terminal to the server;
the server acquires a local mobile terminal information set, wherein each mobile terminal entering the coverage area of the positioning terminal extracts the ID of the positioning terminal and reports the ID to the server, the server stores the mobile terminal information, and the server locates, from the stored mobile terminal information and according to the ID of the positioning terminal reported by the face image acquisition terminal, all mobile terminal information under that ID to form the local mobile terminal information set;
the server acquires a local reference face image set corresponding to the local mobile terminal information set according to a global mobile terminal information set and a global reference face image set corresponding to the global mobile terminal information set;
and matching the face image of the current user with the local reference face image to perform face recognition.
2. The face recognition method of claim 1, wherein the means for obtaining the local mobile terminal information set comprises: one or more of a base station, a WIFI device, a Bluetooth device and a GPS device.
3. The face recognition method of claim 1, wherein the face image comprises one or more of a color image, an infrared image, a grayscale image, and a depth image.
4. The face recognition method of claim 1, further comprising: and executing one or more tasks including face unlocking, face payment and face security inspection according to the face recognition result.
5. A face recognition system, comprising:
a positioning terminal;
the face image acquisition terminal is used for acquiring a face image of a current user and extracting ID information of the positioning terminal;
the server is used for storing a global mobile terminal information set and a global reference face image set corresponding to the global mobile terminal information set;
a processor configured to:
according to the ID information of the positioning terminal, locate all mobile terminal information under the ID information of the positioning terminal from the global mobile terminal information set so as to form a local mobile terminal information set, and simultaneously form all reference face images corresponding to the mobile terminal information in the local mobile terminal information set into a local reference face image set, wherein the mobile terminal information under the ID information of the positioning terminal is stored by the server because each mobile terminal entering the coverage area of the positioning terminal extracts the ID of the positioning terminal and reports it to the server; and,
and matching the face image of the current user with the local reference face image to perform face recognition.
6. The face recognition system of claim 5, wherein the processor is further configured to perform one or more of the tasks including face payment, face unlocking, and face security check based on the face recognition result.
7. The face recognition system of claim 5, wherein the location terminal comprises one or more of a base station, a WIFI device, a Bluetooth device, and a GPS device.
CN201810375426.3A 2018-04-24 2018-04-24 Face recognition method and system Active CN110399763B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810375426.3A CN110399763B (en) 2018-04-24 2018-04-24 Face recognition method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810375426.3A CN110399763B (en) 2018-04-24 2018-04-24 Face recognition method and system

Publications (2)

Publication Number Publication Date
CN110399763A (en) 2019-11-01
CN110399763B (en) 2023-04-18

Family

ID=68322318

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810375426.3A Active CN110399763B (en) 2018-04-24 2018-04-24 Face recognition method and system

Country Status (1)

Country Link
CN (1) CN110399763B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111210375A (en) * 2019-11-27 2020-05-29 重庆特斯联智慧科技股份有限公司 Multi-functional portable wisdom security protection all-in-one
CN113255399A (en) * 2020-02-10 2021-08-13 北京地平线机器人技术研发有限公司 Target matching method and system, server, cloud, storage medium and equipment
CN111639934A (en) * 2020-02-17 2020-09-08 中国银联股份有限公司 Payment method and payment device based on biological characteristic matching
CN113033466B (en) * 2021-04-13 2022-11-15 山东大学 Face recognition method and device
CN113326810A (en) * 2021-06-30 2021-08-31 商汤国际私人有限公司 Face recognition method, system, device, electronic equipment and storage medium
CN113610071B (en) * 2021-10-11 2021-12-24 深圳市一心视觉科技有限公司 Face living body detection method and device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103747520A (en) * 2013-12-20 2014-04-23 百度在线网络技术(北京)有限公司 Positioning method and device for mobile terminal
CN103839161A (en) * 2014-03-24 2014-06-04 上海交通大学 System and method for authentication and transmission of mobile payment information
CN105279814A (en) * 2014-07-24 2016-01-27 中兴通讯股份有限公司 Driving recording treatment method and driving recording treatment system
WO2016023347A1 (en) * 2014-08-13 2016-02-18 惠州Tcl移动通信有限公司 Login method and system through human face recognition based on mobile terminal
WO2016206185A1 (en) * 2015-06-24 2016-12-29 中兴通讯股份有限公司 Unlocking method, device and terminal based on face recognition and storage medium
CN107483416A (en) * 2017-07-27 2017-12-15 湖南浩丰文化传播有限公司 The method and device of authentication

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778489A (en) * 2016-11-14 2017-05-31 深圳奥比中光科技有限公司 The method for building up and equipment of face 3D characteristic identity information banks
CN107169483A (en) * 2017-07-12 2017-09-15 深圳奥比中光科技有限公司 Tasks carrying based on recognition of face
CN107832598B (en) * 2017-10-17 2020-08-14 Oppo广东移动通信有限公司 Unlocking control method and related product

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103747520A (en) * 2013-12-20 2014-04-23 百度在线网络技术(北京)有限公司 Positioning method and device for mobile terminal
CN103839161A (en) * 2014-03-24 2014-06-04 上海交通大学 System and method for authentication and transmission of mobile payment information
CN105279814A (en) * 2014-07-24 2016-01-27 中兴通讯股份有限公司 Driving recording treatment method and driving recording treatment system
WO2016023347A1 (en) * 2014-08-13 2016-02-18 惠州Tcl移动通信有限公司 Login method and system through human face recognition based on mobile terminal
WO2016206185A1 (en) * 2015-06-24 2016-12-29 中兴通讯股份有限公司 Unlocking method, device and terminal based on face recognition and storage medium
CN107483416A (en) * 2017-07-27 2017-12-15 湖南浩丰文化传播有限公司 The method and device of authentication

Also Published As

Publication number Publication date
CN110399763A (en) 2019-11-01

Similar Documents

Publication Publication Date Title
CN110399763B (en) Face recognition method and system
CN109086669B (en) Face recognition identity verification method and device and electronic equipment
US11670058B2 (en) Visual display systems and method for manipulating images of a real scene using augmented reality
CN105426857B (en) Human face recognition model training method and device
WO2020006727A1 (en) Face recognition method and device, and server
US10832037B2 (en) Method and apparatus for detecting image type
CN107483416A (en) The method and device of authentication
CN110297923B (en) Information processing method, information processing device, electronic equipment and computer readable storage medium
CN111639968B (en) Track data processing method, track data processing device, computer equipment and storage medium
CN106303599A (en) A kind of information processing method, system and server
CN107038462B (en) Equipment control operation method and system
CN106255966A (en) StoreFront identification is used to identify entity to be investigated
US10348723B2 (en) Method for biometric recognition of a user amongst a plurality of registered users to a service, employing user localization information
CN110555876A (en) Method and apparatus for determining position
CN111625793A (en) Identity recognition method, order payment method, sub-face library establishing method, device and equipment, and order payment system
US10606886B2 (en) Method and system for remote management of virtual message for a moving object
US11945583B2 (en) Method for generating search information of unmanned aerial vehicle and unmanned aerial vehicle
CN111429143A (en) Transfer method, device, storage medium and terminal based on voiceprint recognition
CN109752001B (en) Navigation system, method and device
CN104077051A (en) Wearable device standby and standby image providing method and apparatus
CN113516167A (en) Biological feature recognition method and device
CN103186590A (en) Method for acquiring identity information of wanted criminal on run through mobile phone
CN109102581B (en) Card punching method, device, system and storage medium
CN111159680B (en) Equipment binding method and device based on face recognition
Ahmed GPark: Vehicle parking management system using smart glass

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 11-13 / F, joint headquarters building, high tech Zone, 63 Xuefu Road, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000

Applicant after: Obi Zhongguang Technology Group Co.,Ltd.

Address before: 12 / F, joint headquarters building, high tech Zone, 63 Xuefu Road, Nanshan District, Shenzhen, Guangdong 518000

Applicant before: SHENZHEN ORBBEC Co.,Ltd.

GR01 Patent grant