CN109858463B - Dual-engine user identification method, system and terminal


Info

Publication number: CN109858463B
Authority: CN (China)
Prior art keywords: user, face image, image, identification, information
Legal status: Active (granted)
Application number: CN201910132268.3A
Other languages: Chinese (zh)
Other versions: CN109858463A
Inventors: 张朝龙, 王枥墀, 陈思宇
Current Assignee: Chengdu Yunding Silu Information Technology Co ltd
Original Assignee: Chengdu Yunding Silu Information Technology Co ltd
Application filed by Chengdu Yunding Silu Information Technology Co ltd
Publication of application: CN109858463A
Publication of grant: CN109858463B

Abstract

The application discloses a dual-engine user identification method, system and terminal. A first face image and a second face image of a user are respectively obtained through a first identification mode and a second identification mode; the similarity of the user's facial features in the first face image and the second face image is compared; if the similarity is greater than or equal to a preset matching threshold, either one of the first face image and the second face image is selected as the final recognition result; or, if the similarity is smaller than the preset matching threshold, facial feature information prestored for the user is acquired and compared with the first face image and the second face image respectively to determine the final recognition result. The method acquires the user's face image by two different methods and then compares the images obtained by the two methods. Therefore, when the user is identified, a single identification mode cannot by itself cause a misidentification, and the user's face image is accurately determined according to the feature information of the user's face.

Description

Dual-engine user identification method, system and terminal
Technical Field
The application relates to the technical field of image recognition, in particular to a dual-engine user recognition method, a dual-engine user recognition system and a dual-engine user recognition terminal.
Background
Face recognition is a biometric technology that performs identity recognition based on a person's facial feature information. The related series of technologies, also commonly called portrait recognition or facial recognition, use a camera or video camera to collect images or video streams containing faces, automatically detect and track the faces in the images, and then perform face recognition on the detected faces.
In the conventional face recognition technology, the facial features of a person must be accurately collected, and the collected facial features are compared with data collected in advance in a database. Only when the comparison result reaches a preset threshold, for example a matching rate above 90%, is the match considered successful, after which the user undergoing face recognition is allowed to perform the next operation. If the authentication fails, the user is prohibited from performing any related operation.
Although conventional face recognition can identify a user through the acquisition and comparison of facial features, the accuracy of face recognition may differ for people with different skin colors. For example, for Asians and for Europeans with white skin, the recognition effect is generally good; however, for Africans with dark skin, the facial features of the user may not be accurately captured in a photograph, and the user may therefore fail to be identified.
Disclosure of Invention
In order to solve the above technical problems, the following technical solutions are provided:
in a first aspect, an embodiment of the present application provides a dual-engine user identification method, where the method includes: respectively acquiring a first face image and a second face image of a user through a first identification mode and a second identification mode, wherein the first identification mode and the second identification mode are image identification modes corresponding to different identification algorithms; comparing the similarity of the facial features of the user in the first face image and the second face image; if the similarity is greater than or equal to a preset matching threshold, selecting any one of the first face image and the second face image as a final recognition result; or, if the similarity is smaller than the preset matching threshold, acquiring the face feature information prestored for the user, comparing the prestored face feature information with the first face image and the second face image respectively, and determining a final recognition result.
By adopting this implementation, when the user is identified, the user's face image is acquired by two different methods, and the images obtained by the two methods are then compared. If the comparison shows that the similarity between the two images reaches the preset threshold, either image is selected; otherwise, the user's prestored facial feature information is obtained and used to determine the final recognition result. Therefore, when the user is identified, a single identification mode cannot by itself cause a misidentification, and the user's face image is accurately determined according to the feature information of the user's face.
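A minimal sketch of this decision flow in Python is given below. The engine objects, the similarity() comparator, and the prestored-feature lookup are placeholders standing in for the first/second identification modes and the enrolled feature database; none of these names are defined by the patent.

```python
MATCH_THRESHOLD = 0.95  # the embodiment suggests a preset matching threshold of 95%

def dual_engine_identify(user_id, engine_a, engine_b, similarity, load_prestored_features):
    """Run both recognition engines and arbitrate between their results."""
    # Step 1: acquire one face image per engine (different recognition algorithms).
    first_image = engine_a.capture_face()
    second_image = engine_b.capture_face()

    # Step 2: compare the facial-feature similarity of the two images.
    score = similarity(first_image, second_image)

    # Step 3: if the two engines agree, either image may serve as the final result.
    if score >= MATCH_THRESHOLD:
        return first_image

    # Step 4: otherwise fall back to the user's prestored facial features and keep
    # whichever engine's image is more consistent with them.
    prestored = load_prestored_features(user_id)
    score_a = similarity(prestored, first_image)
    score_b = similarity(prestored, second_image)
    return first_image if score_a >= score_b else second_image
```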
With reference to the first aspect, in a first possible implementation manner of the first aspect, the obtaining a first face image and a second face image of a user by a first recognition manner and a second recognition manner respectively includes: performing behavior feature detection on the user through living body detection, wherein the behavior feature comprises: blinking eyes, opening mouth and shaking head; and if the living body detection passes, respectively acquiring the first face image and the second face image according to different acquisition modes by an image acquisition device.
With reference to the first aspect, in a second possible implementation manner of the first aspect, after the final recognition result is determined, the method further includes obtaining identity information of the user and determining, according to the identity information, whether the identity of the user is a corporate employee or a visitor.
With reference to the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, if it is determined that the user is a company employee according to the identity information, the backlog of the user in the OA office system is obtained; and the backlog is displayed on a screen so that the user can view his or her pending items.
With reference to the second possible implementation manner of the first aspect, in a fourth possible implementation manner of the first aspect, if it is determined that the user is a visitor according to the identity information, a destination address of the user for visiting is searched from visitor reservation information; planning a path to the destination address according to the destination address; and displaying the path information on a screen, so that the user can reach a destination address according to the path information displayed on the screen.
With reference to the fourth possible implementation manner of the first aspect, in a fifth possible implementation manner of the first aspect, if no reservation record of the current visitor exists in the visitor reservation information, historical visitor records are obtained; if a historical visit record of the user is found by traversing the historical visitor records, the historical visit count and visit destinations of the user are obtained, and path information corresponding to the historical visit destinations is planned according to those destinations; the final path information is determined by the user's selection.
In a second aspect, an embodiment of the present application provides a dual engine user identification system, including: the image acquisition module is used for respectively acquiring a first face image and a second face image of a user through a first identification mode and a second identification mode, wherein the first identification mode and the second identification mode are image identification modes corresponding to different identification algorithms; the image comparison module is used for comparing the similarity of the facial features of the user in the first face image and the second face image; the judgment processing module is used for selecting any one of the first face image and the second face image as a final recognition result if the similarity is greater than or equal to a preset matching threshold; or if the similarity is smaller than the preset matching threshold, acquiring the face feature information prestored for the user, comparing the prestored face feature information with the first face image and the second face image respectively, and determining a final recognition result.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the image acquisition module includes: a living body detection unit configured to perform behavior feature detection on the user through living body detection, the behavior feature including: blinking eyes, opening mouth and shaking head; an image acquisition unit for acquiring the first face image and the second face image respectively in different acquisition modes by an image acquisition device if the living body detection passes.
With reference to the second aspect, in a second possible implementation manner of the second aspect, the system further includes: a user identity recognition unit, configured to acquire the identity information of the user and determine, according to the identity information, whether the identity of the user is a corporate employee or a visitor.
With reference to the second possible implementation manner of the second aspect, in a third possible implementation manner of the second aspect, if it is determined that the user is a company employee according to the identity information, the system further includes: the first acquisition unit is used for acquiring backlogs of the user in the OA office system; the first display unit is used for displaying the backlog on a screen, so that the user can know the backlog of the user.
With reference to the second possible implementation manner of the second aspect, in a fourth possible implementation manner of the second aspect, if it is determined that the user is a visitor according to the identity information, the system further includes: the traversal unit is used for searching a destination address of the user visit from the visitor reservation information; a first route planning unit, configured to plan a route to the destination address according to the destination address; and the second display unit is used for displaying the path information on a screen so that the user can reach a destination address according to the path information displayed on the screen.
With reference to the fourth possible implementation manner of the second aspect, in a fifth possible implementation manner of the second aspect, if there is no reservation record of the current visitor in the visitor reservation information, the system further includes: a second acquisition unit, used for acquiring historical visitor records; a third obtaining unit, configured to obtain the historical visit count and visit destinations of the user if a historical visit record of the user is found by traversing the historical visitor records; a second route planning unit, used for planning path information corresponding to the historical visit destinations according to the historical visit destinations; and a determining unit, used for determining final path information through the user's selection.
In a third aspect, an embodiment of the present application provides a dual-engine user identification terminal, including: a processor; a memory for storing computer executable instructions; when the processor executes the computer-executable instructions, the processor performs the method of the first aspect or any implementation manner of the first aspect.
Drawings
Fig. 1 is a schematic flowchart of a dual-engine user identification method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of obtaining facial feature information of a user according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a visitor path according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of another visiting user path according to an embodiment of the present application;
fig. 5 is a schematic diagram of a dual-engine user recognition system according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a dual-engine user identification terminal according to an embodiment of the present application.
Detailed Description
The present invention will be described with reference to the accompanying drawings and embodiments.
Fig. 1 is a schematic flow chart of a dual-engine user identification method according to an embodiment of the present application, where the method described with reference to fig. 1 includes:
s101, respectively acquiring a first face image and a second face image of a user through a first recognition mode and a second recognition mode.
The user identification method in this embodiment is applicable to access control; automatic identification of a user at an access-control gate is taken as an example for explanation in this embodiment. Alternatively, an existing access-control gate may be provided with a terminal device that implements the user identification method of this embodiment.
When a user enters the recognition range of the terminal, the face image of the user is acquired. To improve the accuracy of face image acquisition in this embodiment, in an exemplary embodiment the user is prompted to perform corresponding actions, for example to face the capture camera and then turn the head to the left and to the right by 45-60 degrees, respectively.
Rather than collecting the user's photo in a single way, this embodiment acquires the user's face images through multiple recognition modes. In this embodiment, the first recognition mode and the second recognition mode are image recognition modes corresponding to different recognition algorithms, and the first face image and the second face image are the face images obtained by the respective recognition modes.
Further, in this embodiment, a user might hold up a photo of another person for recognition; this would not only cause a recognition error but could also allow an impostor to enter the office area. Therefore, in this embodiment the user may be subjected to behavior feature detection through living body detection: for example, a living body detection device detects the user's behavior features while the user is prompted to perform corresponding facial actions, such as blinking, opening the mouth, or shaking the head. The face image of the user is acquired only after the living body detection passes.
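A minimal sketch of such a liveness gate, assuming a hypothetical prompt_user/detect_action interface that reports whether the requested facial action was actually observed (this interface is an assumption, not an API defined by the patent):

```python
import random

LIVENESS_ACTIONS = ["blink", "open mouth", "shake head"]

def liveness_check(prompt_user, detect_action, rounds=2):
    """Prompt random facial actions and confirm each one is performed,
    so that a printed photo held up to the camera cannot pass."""
    for _ in range(rounds):
        action = random.choice(LIVENESS_ACTIONS)
        prompt_user(f"Please {action}")
        if not detect_action(action):
            return False  # no live response, so face images are not acquired
    return True
```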
S102, comparing the similarity of the facial features of the user in the first face image and the second face image.
After the first face image and the second face image of the user are acquired in S101, in order to ensure the accuracy of the recognition, feature comparison needs to be performed on the first face image and the second face image.
For example, the first face image and the second face image are each preprocessed, and feature information such as contour lines is extracted from them. Feature matching is then performed between the processed first face image and second face image to determine the matching similarity.
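One common way to score such a match is cosine similarity between fixed-length feature vectors extracted from each image; the sketch below assumes such an extractor exists (the patent does not name a specific feature-extraction algorithm):

```python
import numpy as np

def cosine_similarity(features_a: np.ndarray, features_b: np.ndarray) -> float:
    """Similarity in [-1, 1]; values close to 1 indicate closely matching faces."""
    denom = np.linalg.norm(features_a) * np.linalg.norm(features_b)
    return float(np.dot(features_a, features_b) / denom) if denom else 0.0

def compare_face_images(first_image, second_image, extract_features):
    """extract_features is assumed to map a face image to a feature vector
    (e.g. contour and landmark descriptors); it is not specified by the patent."""
    return cosine_similarity(extract_features(first_image),
                             extract_features(second_image))
```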
S103, if the similarity is larger than or equal to a preset matching threshold, selecting any one of the first face image and the second face image as a final recognition result.
If, after the feature information matching in S102, it is determined that the similarity between the first face image and the second face image is greater than or equal to the preset matching threshold, any one of the first face image and the second face image may be selected as the final recognition result, that is, as the final face image of the current user.
In this embodiment, in order to ensure the accuracy of recognition, the preset matching threshold is set to 95%. Therefore, only when the similarity of the feature information of the first face image and the second face image reaches 95% or more are the face images of the user obtained by the two recognition modes considered accurate, and the image obtained by either mode may be selected. Otherwise, the face image of the user needs to be determined further.
S104, if the similarity is smaller than a preset matching threshold, acquiring the face feature information prestored by the user, comparing the preset face feature information with the first face image and the second face image respectively, and determining a final recognition result.
If the similarity of the feature information of the first face image and the second face image is less than 95%, there may be a certain difference between the face images obtained by the two recognition modes due to factors such as the user's skin color or facial trauma. At this time, as shown in fig. 2, feature information such as the user's nose, face contour, and eye contour is obtained with emphasis. The acquired feature information is matched against the first face image and the second face image respectively, and if the feature information in the face image acquired by one recognition mode is completely consistent with it, the face image acquired by that recognition mode is taken as the final face image of the user.
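A sketch of this fallback step, assuming each image has been reduced to a dictionary of the emphasized features (nose, face contour, eye contour); the feature names and the feature_match helper are illustrative only:

```python
EMPHASIZED_FEATURES = ["nose", "face_contour", "eye_contour"]

def resolve_with_prestored(prestored, first_feats, second_feats, feature_match):
    """Return which image's emphasized features fully agree with the prestored
    feature information, per the embodiment; None means neither image matches."""
    def matches_all(candidate):
        return all(feature_match(prestored[name], candidate[name])
                   for name in EMPHASIZED_FEATURES)

    if matches_all(first_feats):
        return "first"
    if matches_all(second_feats):
        return "second"
    return None  # neither image is fully consistent; recognition fails or is retried
```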
Conventional user identification methods generally let the user pass directly once identification succeeds. The user identification method provided in this embodiment, however, further acquires the identity information of the user after the user passes authentication. Since users generally fall into employees within the company and visiting users, the identity of the user is determined based on the identity information.
After the user's identity is determined, if the user is determined to be a company employee according to the identity information, the backlog of the user in the OA office system is obtained and displayed on a screen so that the user can view his or her pending items. For example, the user's name, department, the meetings scheduled for the department today, the user's own unfinished tasks, and tasks newly assigned to the user are shown on the display screen.
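A sketch of this employee branch with a hypothetical OA-system client; fetch_backlog and the displayed fields are assumptions made for illustration, not an interface defined by the patent:

```python
def handle_employee(user, oa_client, screen):
    """Pull the employee's pending items from the OA office system and display them."""
    backlog = oa_client.fetch_backlog(user.employee_id)  # hypothetical OA call
    screen.show({
        "name": user.name,
        "department": user.department,
        "meetings_today": backlog.get("meetings", []),
        "unfinished_tasks": backlog.get("unfinished", []),
        "newly_assigned": backlog.get("new_tasks", []),
    })
```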
Furthermore, after the content is displayed, the user is prompted to confirm the information before the access gate is opened, which ensures that the user has read the displayed information. At this time, an update is sent to the OA system to record that the employee has acknowledged the day's schedule.
If the user is determined to be a visitor according to the identity information, the destination address of the user's visit is looked up in the visitor reservation information. A visitor is usually unfamiliar with the place being visited and may therefore take a wrong turn. As shown in fig. 3, in this embodiment a path to the destination address is planned according to the destination address, and the path information is displayed on a screen so that the user can reach the destination address by following it.
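Route planning inside a building can be sketched as a shortest-path search over a floor-plan graph; the graph, node names, and breadth-first search below are invented for illustration and are not prescribed by the patent:

```python
from collections import deque

def plan_route(floor_graph, entrance, destination):
    """Breadth-first search returning the node sequence from the entrance to the
    visited department, or None if the destination is unreachable."""
    queue = deque([[entrance]])
    seen = {entrance}
    while queue:
        path = queue.popleft()
        if path[-1] == destination:
            return path
        for nxt in floor_graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Illustrative floor plan only:
floor_graph = {
    "entrance": ["lobby"],
    "lobby": ["elevator", "reception"],
    "elevator": ["floor3 corridor"],
    "floor3 corridor": ["meeting room 301"],
}
print(plan_route(floor_graph, "entrance", "meeting room 301"))
```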
The user can record the route map by photographing it and can then reach the destination smoothly. In this embodiment, an input window may also be generated on the screen in which the user enters his or her mobile phone number and taps an acquire button, after which the path information is sent to the user's mobile phone.
A visitor normally has a reservation record; if, however, the visitor reservation information contains no reservation record for the current visitor, historical visitor records are obtained. If a historical visit record of the user is found by traversing those records, the user's historical visit count and visit destinations are obtained, and, as shown in fig. 4, path information corresponding to the historical visit destinations is planned; the final path information is determined by the user's selection.
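A sketch of this fallback lookup over historical visitor records; the record fields (visitor_id, destination) are assumptions made for illustration:

```python
from collections import Counter

def suggest_destinations(history_records, visitor_id):
    """With no reservation on file, collect the visitor's past destinations and
    how often each was visited, most frequent first, for the user to choose from."""
    past = [r["destination"] for r in history_records if r["visitor_id"] == visitor_id]
    if not past:
        return []  # no history either, so the visitor is asked to enter a destination
    return Counter(past).most_common()  # e.g. [("meeting room 301", 3), ("HR office", 1)]
```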
Of course, if there is neither a reservation record nor a historical visit record, an input box is generated to prompt the user to enter the department to be visited and tap send. The corresponding department then receives the visitor's information and confirms whether to receive the visitor.
As can be seen from the foregoing embodiments, the present embodiment provides a user identification method, including: respectively acquiring a first face image and a second face image of a user through a first identification mode and a second identification mode, wherein the first identification mode and the second identification mode are image identification modes corresponding to different identification algorithms; comparing the similarity of the facial features of the user in the first face image and the second face image; if the similarity is greater than or equal to a preset matching threshold, selecting any one of the first face image and the second face image as a final recognition result; or, if the similarity is smaller than the preset matching threshold, acquiring the face feature information prestored for the user, comparing the prestored face feature information with the first face image and the second face image respectively, and determining a final recognition result. When the user is identified, the user's face image is acquired by two different methods, and the images obtained by the two methods are then compared. If the similarity between the two images reaches the preset threshold, either image is selected; otherwise, the user's prestored facial feature information is obtained and used to determine the final recognition result. Therefore, when the user is identified, a single identification mode cannot by itself cause a misidentification, and the user's face image is accurately determined according to the feature information of the user's face.
Example two
Corresponding to the dual-engine user identification method provided by the embodiment, the application also provides an embodiment of a dual-engine user identification system. Referring to fig. 5, the dual engine user recognition system 20 includes an image capturing module 201, an image comparing module 202, and a judgment processing module 203.
The image acquisition module 201 is configured to acquire a first face image and a second face image of a user through a first identification manner and a second identification manner, where the first identification manner and the second identification manner are image identification manners corresponding to different identification algorithms. The image comparison module 202 is configured to compare the similarity of the user's facial features in the first face image and the second face image. The judgment processing module 203 is configured to select any one of the first face image and the second face image as a final recognition result if the similarity is greater than or equal to a preset matching threshold; or, if the similarity is smaller than the preset matching threshold, to acquire the face feature information prestored for the user, compare the prestored face feature information with the first face image and the second face image respectively, and determine a final recognition result.
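The module split can be mirrored directly in code; the class and method names below are illustrative and are not defined by the patent:

```python
class DualEngineUserRecognitionSystem:
    """Wires together the three modules of the embodiment."""

    def __init__(self, acquisition_module, comparison_module, decision_module):
        self.acquisition = acquisition_module  # image acquisition (two engines + liveness)
        self.comparison = comparison_module    # facial-feature similarity
        self.decision = decision_module        # threshold check / prestored-feature fallback

    def identify(self, user):
        first_image, second_image = self.acquisition.capture(user)
        score = self.comparison.similarity(first_image, second_image)
        return self.decision.resolve(user, first_image, second_image, score)
```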
In an exemplary embodiment, the image acquisition module 201 comprises: a living body detecting unit and an image acquiring unit. A living body detection unit configured to perform behavior feature detection on the user through living body detection, the behavior feature including: blink eyes, open mouth and shake head. An image acquisition unit for acquiring the first face image and the second face image respectively in different acquisition modes by an image acquisition device if the living body detection passes.
The dual-engine user recognition system 20 provided in this embodiment further includes: a user identity recognition unit, configured to acquire the identity information of the user and determine, according to the identity information, whether the identity of the user is a corporate employee or a visitor.
Further, if it is determined that the user is a company employee according to the identity information, the dual engine user identification system 20 further includes: the device comprises a first acquisition unit and a first display unit. The first obtaining unit is used for obtaining backlogs of the user in the OA office system. The first display unit is used for displaying the backlog on a screen, so that the user can know the backlog of the user.
If the user is determined to be a visitor based on the identity information, the dual engine user recognition system 20 further includes: the device comprises a traversing unit, a first route planning unit and a second display unit.
And the traversal unit is used for searching the destination address of the user visit from the visitor reservation information. And the first route planning unit is used for planning a route reaching the destination address according to the destination address. And the second display unit is used for displaying the path information on a screen so that the user can reach a destination address according to the path information displayed on the screen.
If there is no reservation record of the current visitor in the visitor reservation information, the dual-engine user recognition system 20 further includes: a second acquisition unit, a third acquisition unit, a second route planning unit, and a determining unit. The second acquisition unit is used for acquiring historical visitor records. The third obtaining unit is configured to obtain the historical visit count and visit destinations of the user if a historical visit record of the user is found by traversing the historical visitor records. The second route planning unit is used for planning path information corresponding to the historical visit destinations according to the historical visit destinations; and the determining unit is used for determining final path information through the user's selection.
EXAMPLE III
The embodiment of the present application further provides a dual-engine user identification terminal, referring to fig. 6, where the dual-engine user identification terminal 30 includes: a processor 301, a memory 302, and a communication interface 303.
In fig. 6, the processor 301, the memory 302, and the communication interface 303 may be connected to each other by a bus; the bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 6, but this is not intended to represent only one bus or type of bus.
The processor 301 generally controls the overall functions of the dual-engine user recognition terminal 30, such as the start-up of the terminal, recognition of a face image of the user after the terminal is started up, accurate recognition of the user, selection of a task to be executed according to the user's category, and the like. Further, the processor 301 may be a general-purpose processor, such as a Central Processing Unit (CPU), a Network Processor (NP), or a combination of a CPU and an NP. The processor may also be a Microprocessor (MCU). The processor may also include a hardware chip. The hardware chips may be Application Specific Integrated Circuits (ASICs), Programmable Logic Devices (PLDs), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a Field Programmable Gate Array (FPGA), or the like.
The memory 302 is configured to store computer-executable instructions to support the operation of the dual-engine user identification terminal 30. The memory 302 may be implemented by any type or combination of volatile or non-volatile memory devices, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disks, or optical disks.
After the dual-engine user identification terminal 30 is started, the processor 301 and the memory 302 are powered on, and the processor 301 reads and executes the computer-executable instructions stored in the memory 302 to complete all or part of the steps in the above-described user identification method embodiment.
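On the terminal, those stored instructions amount to a capture-and-recognize loop. A minimal sketch using OpenCV for the camera is given below; the recognition_system object stands in for the modules described above, and its process_frame entry point is an assumption:

```python
import cv2

def run_terminal(recognition_system, camera_index=0):
    """Grab frames from the terminal camera and hand them to the recognition pipeline."""
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break  # camera unavailable, stop the loop
            result = recognition_system.process_frame(frame)  # hypothetical entry point
            if result is not None:
                print("Recognized:", result)
    finally:
        cap.release()
```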
The communication interface 303 is used for the dual-engine user recognition terminal 30 to transmit data, for example, to enable data communication with a server, a camera, and the like. The communication interface 303 includes a wired communication interface, and may also include a wireless communication interface. The wired communication interface comprises a USB interface, a Micro USB interface and an Ethernet interface. The wireless communication interface may be a WLAN interface, a cellular network communication interface, a combination thereof, or the like.
In an exemplary embodiment, the dual engine subscriber identity terminal 30 provided by the embodiments of the present application further includes a power supply component that provides power to the various components of the dual engine subscriber identity terminal 30. The power components may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the dual engine subscriber identity terminal 30.
A communication component configured to facilitate wired or wireless communication between the dual engine user identification terminal 30 and other devices. The dual engine subscriber identity terminal 30 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. The communication component receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. The communication component also includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the dual engine user identification terminal 30 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), terminals, micro-terminals, processors, or other electronic components.
The same and similar parts among the various embodiments in the specification of the present application may be referred to each other. Especially, for the system and terminal embodiments, since the method therein is basically similar to the method embodiments, the description is relatively simple, and the relevant points can be referred to the description in the method embodiments.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Of course, the above description is not limited to the above examples; technical features not described in this application may be implemented by or with the prior art and are not described here again. The above embodiments and drawings are only intended to illustrate the technical solutions of the present application, not to limit it; although the present application has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that changes, modifications, additions or substitutions made within the spirit and scope of the present application shall also fall within the scope of the claims of the present application.

Claims (10)

1. A dual engine user identification method, the method comprising:
respectively acquiring a first face image and a second face image of a user through a first identification mode and a second identification mode, wherein the first identification mode and the second identification mode are image identification modes corresponding to different identification algorithms;
comparing the similarity of the facial features of the users in the first face image and the second face image;
if the similarity is greater than or equal to a preset matching threshold, selecting any one of the first face image and the second face image as a final recognition result; or,
and if the similarity is smaller than a preset matching threshold, acquiring the pre-stored facial feature information of the user, comparing the pre-stored facial feature information with the first facial image and the second facial image respectively, and determining a final recognition result.
2. The method according to claim 1, wherein the obtaining of the first face image and the second face image of the user by the first recognition mode and the second recognition mode respectively comprises:
performing behavior feature detection on the user through living body detection, wherein the behavior feature comprises: blinking eyes, opening mouth and shaking head;
and if the living body detection passes, respectively acquiring the first face image and the second face image according to different acquisition modes by an image acquisition device.
3. The method of claim 1, wherein after determining the final recognition result, the method further comprises obtaining identity information of the user and determining, according to the identity information, whether the identity of the user is a corporate employee or a visitor.
4. The method of claim 3, wherein if the user is determined to be a company employee according to the identity information, backlog of the user in an OA office system is obtained;
and displaying the backlog on a screen so that the user can know the backlog of the user.
5. The method of claim 3, wherein if the user is determined to be a visitor based on the identity information, finding a destination address of the user visit from visitor subscription information;
planning path information reaching the destination address according to the destination address;
and displaying the path information on a screen, so that the user can reach a destination address according to the path information displayed on the screen.
6. The method of claim 5, wherein if no reservation record exists for a current visitor in the visitor reservation information, obtaining a historical visitor record;
if the historical visiting record of the user is traversed in the historical visiting record, obtaining the historical visiting times and visiting destinations of the user;
planning path information corresponding to the historical visiting destination according to the historical visiting destination;
final path information is determined by the user selection.
7. A dual engine user identification system, the system comprising:
the image acquisition module is used for respectively acquiring a first face image and a second face image of a user through a first identification mode and a second identification mode, wherein the first identification mode and the second identification mode are image identification modes corresponding to different identification algorithms;
the image comparison module is used for comparing the similarity of the facial features of the users in the first face image and the second face image;
the judgment processing module is used for selecting any one of the first face image and the second face image as a final recognition result if the similarity is greater than or equal to a preset matching threshold; or if the similarity is smaller than a preset matching threshold, acquiring the pre-stored facial feature information of the user, comparing the pre-stored facial feature information with the first facial image and the second facial image respectively, and determining a final recognition result.
8. The system of claim 7, wherein the image acquisition module comprises:
a living body detection unit configured to perform behavior feature detection on the user through living body detection, the behavior feature including: blinking eyes, opening mouth and shaking head;
an image acquisition unit for acquiring the first face image and the second face image respectively in different acquisition modes by an image acquisition device if the living body detection passes.
9. The system of claim 7, further comprising: a user identity recognition unit, configured to acquire identity information of the user and determine, according to the identity information, whether the identity of the user is a corporate employee or a visitor.
10. A dual engine user identification terminal, comprising:
a processor;
a memory for storing computer executable instructions;
the computer-executable instructions, when executed by the processor, cause the processor to perform the method of any of claims 1-6.
CN201910132268.3A 2019-02-22 2019-02-22 Dual-engine user identification method, system and terminal Active CN109858463B (en)

Priority Applications (1)

Application Number: CN201910132268.3A
Priority Date: 2019-02-22
Filing Date: 2019-02-22
Title: Dual-engine user identification method, system and terminal
Publication: CN109858463B (en)

Publications (2)

Publication Number: CN109858463A (en), Publication Date: 2019-06-07
Publication Number: CN109858463B (en), Publication Date: 2021-03-26

Family

ID=66898672

Family Applications (1)

Application Number: CN201910132268.3A (Active), Publication: CN109858463B (en), Title: Dual-engine user identification method, system and terminal

Country Status (1)

Country: CN (1), Link: CN109858463B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110782572A (en) * 2019-10-23 2020-02-11 广东穗都建筑工程有限公司 Intelligent monitoring method and system for security protection of park
CN111667269B (en) * 2020-06-08 2023-04-07 江苏高聚识别技术有限公司 Face automatic identification type consumption system
CN115208616B (en) * 2022-05-20 2023-06-23 深圳铸泰科技有限公司 Internet of things safety monitoring method and system based on double engines

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101639891B (en) * 2008-07-28 2012-05-02 汉王科技股份有限公司 Double-camera face identification device and method
KR101760258B1 (en) * 2010-12-21 2017-07-21 삼성전자주식회사 Face recognition apparatus and method thereof
CN102735237B (en) * 2012-06-29 2015-11-11 安科智慧城市技术(中国)有限公司 A kind of visitor's paths planning method, device and system
CN203520414U (en) * 2013-10-31 2014-04-02 山西万瑞康科技有限公司 Multi-algorithm human face recognition combination system
CN103646244B (en) * 2013-12-16 2018-01-09 北京天诚盛业科技有限公司 Extraction, authentication method and the device of face characteristic
CN105187726B (en) * 2015-06-17 2021-05-18 巽腾(广东)科技有限公司 Multifunctional mobile image processing device and processing method
CN108932456B (en) * 2017-05-23 2022-01-28 北京旷视科技有限公司 Face recognition method, device and system and storage medium
CN107289949B (en) * 2017-07-26 2020-08-07 湖北工业大学 Indoor guidance identification device and method based on face identification technology

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201679381U (en) * 2010-05-07 2010-12-22 深圳市飞瑞斯科技有限公司 Safety box based on face recognition
CN106156732A (en) * 2016-07-01 2016-11-23 合网络技术(北京)有限公司 Object identifying method and object recognition equipment
CN109359460A (en) * 2018-11-20 2019-02-19 维沃移动通信有限公司 A kind of face recognition method and terminal device

Also Published As

Publication number Publication date
CN109858463A (en) 2019-06-07

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant