CN113011348B - Intelligent service processing equipment based on 3D information identification

Intelligent service processing equipment based on 3D information identification

Info

Publication number
CN113011348B
CN113011348B (application number CN202110312252.8A)
Authority
CN
China
Prior art keywords
information
authentication
user
data
dimensional data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110312252.8A
Other languages
Chinese (zh)
Other versions
CN113011348A (en)
Inventor
Zuo Zhongbin (左忠斌)
Zuo Dayu (左达宇)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianmu Aishi Beijing Technology Co Ltd
Original Assignee
Tianmu Aishi Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianmu Aishi Beijing Technology Co Ltd
Priority to CN202110312252.8A
Publication of CN113011348A
Application granted
Publication of CN113011348B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/70 Multimodal biometrics, e.g. combining information from different biometric modalities
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07F COIN-FREED OR LIKE APPARATUS
    • G07F19/00 Complete banking systems; Coded card-freed arrangements adapted for dispensing or receiving monies or the like and posting such transactions to existing accounts, e.g. automatic teller machines
    • G07F19/20 Automatic teller machines [ATMs]
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07F COIN-FREED OR LIKE APPARATUS
    • G07F7/00 Mechanisms actuated by objects other than coins to free or to actuate vending, hiring, coin or paper currency dispensing or refunding apparatus
    • G07F7/08 Mechanisms actuated by objects other than coins to free or to actuate vending, hiring, coin or paper currency dispensing or refunding apparatus by coded identity card or credit card or other personal identification means
    • G07F7/10 Mechanisms actuated by objects other than coins to free or to actuate vending, hiring, coin or paper currency dispensing or refunding apparatus by coded identity card or credit card or other personal identification means together with a coded signal, e.g. in the form of personal identification information, like personal identification number [PIN] or biometric data

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Finance (AREA)
  • Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Accounting & Taxation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Collating Specific Patterns (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention provides an intelligent business processing device based on 3D information identification, comprising: a biological information acquisition module for acquiring biometric data of a user; a client information authentication module for authenticating the client's identity according to the result of comparing the acquired three-dimensional data with standard three-dimensional data; and a client operation module for providing service interaction to the user. The invention proposes for the first time using the physiological characteristics of the human body as a payment medium in place of a physical card. Its dedicated 3D acquisition and recognition system ensures authentication accuracy and a low misjudgment rate, so that human physiological characteristics can serve as the payment medium without physical cards, passwords, and the like, making the device friendlier to users and more convenient to use.

Description

Intelligent service processing equipment based on 3D information identification
Technical Field
The invention relates to the technical field of teller machines, and in particular to teller machines with 3D intelligent recognition.
Background
When customers perform self-service transactions at a bank, they usually use an ATM, but the ATM depends on a magnetic card or an IC card during the transaction, and verification is mostly by password. Both magnetic cards and IC cards carry the risk of being duplicated, while password verification carries the risk of being cracked. Password format requirements have therefore become more and more complex, which makes some users forget their passwords. The prior art also mentions technologies that use face recognition and voice recognition to secure ATM transactions and functionally expand the ATM's application range; for example, besides deposits and withdrawals, various other bank services can be handled.
However, existing face recognition is 2D recognition, so the risk of imitation is greatly increased: even a photograph can pass authentication. It is therefore desirable to incorporate liveness detection or depth-information probing into 2D face recognition to prevent photographs from fooling the authentication system.
Although some 3D face recognition technologies exist, for example using a 3D face to unlock a mobile phone, their feature points are few, recognition accuracy is low, and the misjudgment rate is high, so they are difficult to apply to a bank payment system with high security requirements. Conversely, some high-precision 3D face recognition algorithms are complex: acquiring and recognizing a face usually takes more than 10 minutes, so real-time performance cannot be guaranteed and they cannot be applied to actual payment products. That is, although the prior art has machines for payment with identity authentication requirements, such as ATMs (possibly including 2D recognition), and also has existing 3D recognition technology (such as 3D face unlocking on an Apple mobile phone), 3D recognition cannot yet serve independently as the sole authentication means for payment. The prior art therefore does not combine the two, i.e., it does not use 3D information of the human body as the sole identity-bearing medium to replace cards and passwords.
It has also been proposed in the prior art to define the camera position using empirical formulas involving rotation angle, target size and object distance, thereby balancing synthesis speed and effect. In practical applications, however, it was found that unless an accurate angle-measuring device is available, users are insensitive to angle and the angle is difficult to determine accurately. The target size is likewise hard to determine accurately, particularly in applications where the target must be replaced frequently: every measurement requires much extra work, and specialized equipment is needed to measure irregular targets accurately. Measurement errors cause camera-position errors, which in turn affect acquisition and synthesis speed and effect; accuracy and speed still need further improvement.
Current 3D recognition speed also needs improvement; if 3D acquisition and recognition of several kinds of biological information were all performed, business handling would be greatly delayed. But if only 3D acquisition of a single kind of biological information is used, a safety hazard arises. Moreover, if the standard data and each set of acquired data cover the same complete area, the computation required for every transaction is too large and business processing slows down; yet if the area is simply reduced, security is again compromised.
In addition to ATM machines, other self-service payment and transaction machines such as vending machines exist.
So at present, payment authentication faces several problems:
1. Identity authentication relies on face recognition, but 2D recognition accuracy is not high and it is easily deceived.
2. All transactions require a physical card, increasing the complexity of the operation and the risk of loss.
3. Identity authentication relies on passwords, which carry a risk of cracking; meanwhile, overly complicated passwords burden the user with memorization.
4. The accuracy, security and real-time performance of current common 3D acquisition and recognition are insufficient, and improving security slows acquisition and recognition; the two goals conflict.
Disclosure of Invention
The present invention has been made in view of the above problems, and its object is to provide a DTM (intelligent digital teller machine) that overcomes, or at least partially solves, the above problems.
The invention provides an intelligent service processing device based on 3D information identification, comprising:
The biological information acquisition module is used for acquiring biological characteristic data of a user;
The client information authentication module is used for realizing authentication of the client identity according to the comparison result of the acquired application three-dimensional data and the standard three-dimensional data;
the client operation module is used for providing service interaction for the user;
wherein the standard three-dimensional data comprises complete biometric 3D information of each region of the user, while the application three-dimensional data is 3D information of part of the user's biometric features; the data range of the application three-dimensional data is smaller than the data range of the standard three-dimensional data;
and wherein the comparison of the application three-dimensional data with the standard three-dimensional data comprises: comparing application three-dimensional data synthesized from images of a low-authentication-level region with the standard three-dimensional data, and/or comparing application three-dimensional data synthesized from images of a high-authentication-level region with the standard three-dimensional data.
Optionally: the application three-dimensional data includes 3D information of the biometric features of a plurality of authentication levels.
Optionally: the authentication includes first comparing the 3D information of the user's low-authentication-level biometric features with the pre-stored standard 3D information of the user, and allowing the user to operate low-authentication-level services after the recognition passes.
Optionally: the authentication includes comparing the 3D information of the high-authentication-level biometric features with the user's standard 3D information, and allowing the user to operate high-authentication-level services after the recognition passes.
Optionally: when the 3D information of the high-authentication-level biometric features is compared with the user's standard 3D information, the generation of that 3D information and its comparison and recognition are carried out while the user performs service interaction.
Optionally: when comparing and recognizing the 3D information of high-authentication-level biometric features, the comparison object is the standard 3D information of the user already screened out by the low-authentication-level recognition.
Optionally: when biological information is acquired, the position of the image acquisition device meets the following conditions:
Wherein L is the linear distance of the optical center of the image acquisition device when two acquisition positions are adjacent; f is the focal length of the image acquisition device; d is the rectangular length or width of the photosensitive element of the image acquisition device; t is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis; delta is the adjustment coefficient, delta <0.603.
Optionally: δ < 0.410, δ < 0.356, δ < 0.311, δ < 0.284, δ < 0.261, δ < 0.241, or δ < 0.107.
Optionally: the acquired user biometric data are sent to a local processor or a server for three-dimensional data synthesis to form the 3D information of the user's biometric features.
Optionally: the standard three-dimensional data is data of a predetermined size and dimension.
Inventive aspects and technical effects
1. It is proposed for the first time to use the physiological characteristics of the human body as a payment medium to replace the physical card. A dedicated 3D acquisition and recognition system (described below) ensures authentication accuracy and a low misjudgment rate, thereby supporting human physiological characteristics as a payment medium without the participation of physical cards, passwords and the like, which is friendlier to users and more convenient.
2. High-precision, low-latency 3D acquisition and recognition technology for the iris, hand and face is provided for the first time, in particular by at least one of the following means: ① a standard is specified for acquisition and synthesis, so that acquisition and synthesis are more accurate and faster; ② a mark is set on the camera or the background, and the position of the target is adjusted so that a preset feature of the target aligns with the mark, ensuring that the target's image occupies a fixed position in the pictures taken by the camera, which reduces the computational load of the algorithm and improves synthesis speed; ③ acquisition is restricted to a plurality of images of the target taken at a plurality of fixed positions, so that the relation between the images is fixed at every acquisition and the algorithm can be specially designed around this fixed relation, reducing its computational load and improving synthesis speed; ④ the acquired images are segmented to separate the part containing the target, which greatly reduces the data volume of each image, sharply reduces the computation during synthesis, and improves synthesis speed.
3. The 3D acquisition and recognition technology for the iris, hand and face is combined with 3D human-body gesture recognition, and these are acquired and recognized synchronously, further improving authentication precision and reducing latency.
4. By optimizing the camera's acquisition positions, acquisition speed and precision are improved; and when optimizing the positions, neither the angle nor the target size needs to be measured, so applicability is stronger.
5. The 3D information of different human biological features is classified, 3D information of different areas is acquired and recognized at different service levels, and a step-by-step screening mode is adopted, so that the data processing load is greatly reduced while security and real-time performance are both achieved.
6. The standard data is the user's complete data, while the application data is part of the user's data, so a large amount of data need not be acquired and processed during business handling; only partial processing is needed, saving business handling time.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 is a schematic diagram of a DTM machine according to an embodiment of the present invention;
FIG. 2 is an enlarged schematic diagram of the acquisition device in area A of the DTM machine according to an embodiment of the present invention;
FIG. 3 is an enlarged schematic diagram of the acquisition device in area B of the DTM machine according to an embodiment of the invention;
reference numerals illustrate:
The reference numerals denote: cabinet 1000; head/face and iris acquisition device 1001; hand acquisition device 1002; transaction device 1003; carrier board 1004; display area 1005; operation area 1006; server 2000; A image acquisition device 201; A light source 600; processor 400; A detection device 700; B image acquisition device 301; B light source 603; B detection device 703.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The target object (such as a person to be collected) is arranged in front of the background plate, and the background plate can select a solid background or a regular pattern background so as to facilitate the extraction of the edge of a subsequent picture and improve the operation speed.
Setting light source parameters such as light source illumination intensity, color temperature and the like, so that the illumination condition is a standardized condition.
A plurality of marks are arranged on a display, on a camera reticle, or on the background plate, and alignment of the target with the marks is prompted visually or detected automatically by a program. The human body may be moved, for example, by a three-dimensional motion platform carrying it. However, whether a person is standing or sitting, he or she may lean left or right; for example, the left and right shoulders may be asymmetric. At this point the operator must instruct the person being acquired to move, or the person moves while watching the screen display, so as to remain aligned with the marks.
The camera takes pictures at a plurality of locations around the person to be acquired, which should meet predetermined standardized conditions (described in detail below).
The image processing device preprocesses each image acquired by the camera: it segments the image, extracts the useful information, removes the useless parts, and forms a preprocessed image of standardized size. A matching and synthesis algorithm is then applied to the plurality of preprocessed images to form 3D point-cloud information of the target.
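For illustration only, the preprocessing step described above might look like the following minimal sketch. It is a hedged example rather than the patent's actual implementation: the OpenCV calls, the Otsu threshold (which assumes a solid, contrasting background) and the output size are all assumptions.

```python
import cv2
import numpy as np

def preprocess(image_bgr: np.ndarray, out_size=(1024, 1024)) -> np.ndarray:
    """Segment the subject, keep the useful region, standardize the size."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Solid background: Otsu's threshold separates the subject cleanly
    # (THRESH_BINARY_INV assumes a light background and a darker subject).
    _, mask = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        raise ValueError("no subject found in frame")
    # Keep the largest contour, assumed to be the person/organ being acquired.
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    cropped = image_bgr[y:y + h, x:x + w]
    # Standardized size, so every image entering 3D synthesis has the same shape.
    return cv2.resize(cropped, out_size, interpolation=cv2.INTER_AREA)
```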
Application of DTM machine
A DTM machine may also be referred to as an intelligent business processing device. In use, the customer need not provide credentials identifying his or her identity, such as identity cards or passports for social use; nor must the customer provide a physical or virtual credential issued by some organization to identify the customer's account, such as a social security card, store membership card, bank card (or card number), credit card (or card number), goods pick-up card, game point card, website or app membership, or other credentials with payment and transaction functions. Instead, the customer's biological characteristics are used as the unique identity identifier, and the identity information and account information related to services provided by governments and enterprises are associated with those biological characteristics, so that the customer's biological characteristics can serve as the unique identification for various social and economic activities.
The following are given as non-exhaustive examples.
1. Transfers. When a customer transfers money at a bank, the DTM machine recognizes the customer's biological 3D information, compares it with the many sets of biological 3D information stored in advance by the bank, identifies the customer, and associates the customer's bank-account information (such as asset information, customer grade, etc.) with the customer's 3D information, thereby allowing the customer to transfer money from the account.
2. Unmanned supermarket. The customer presents the card identifying the item numbers to the DTM machine, which reads the card, identifies the items the customer wants to purchase, and calculates the total amount to be paid. After performing biometric 3D identification of the customer, the DTM machine (or its associated server) associates the customer's account with the payment amount and deducts the corresponding amount directly from the account; or it sends a payment request to the bank together with the acquired 3D information of the client, and the bank pays the bill after identification.
3. Ordering goods. The customer logs into a factory's ordering page through the DTM machine and selects goods on the page. The DTM machine collects the customer's biometric 3D information and sends it to the factory, which generates an order from it and sends the calculated cost to the DTM machine (or its associated server), requesting payment from the customer. The customer invokes his or her bank account through his or her own biological 3D information to pay for the order, and the factory ships the goods according to the name, address, contact information and other details associated with the customer's 3D information.
4. A gate access control system. The gate is connected to a DTM machine (which may differ in external form from the DTM machine used in a bank, but the core acquisition and recognition functions are identical). After purchasing a ticket, the customer performs identification and authentication at the DTM machine: the DTM machine collects the customer's biological 3D information, identifies the customer, queries the ticket information under that identity, and, if the ticket matches the current train or bus number, sends an opening instruction to the gate. The customer's ticket may have been purchased on another DTM machine, as in the application scenarios above, or directly through another channel; in either case, the correspondence between the ticket and the customer's identity is uniquely determined. After the gate-connected DTM machine collects the customer's biometric 3D information, if the system does not store the customer's identity information it may request a query of the customer's identity from the server of a relevant government authority, such as the public security system.
The above applications are only a limited list and are not limiting as to their application and structure.
DTM machine structure
As shown in figs. 1-3, the DTM machine includes a cabinet 1000 that is divided into areas A, B and C from top to bottom. Area A is mainly used to acquire and identify the head/face and iris and to provide a display area/interactive interface for the customer. Area B is mainly used to acquire and identify the hands, and also provides a display area/interaction interface. Area C is mainly used to bear the user and to acquire and identify the feet.
Area A includes a head/face and iris acquisition device 1001, which includes an A image acquisition device 201 and an A light source 600, and may also include an A detection device 700.
The A image acquisition device 201 may be a fixed multi-camera matrix, each camera acquiring face/iris information from a different angle.
The A image acquisition device 201 may also be a single camera rotating around a single axis, capturing face/iris information from different angles through its rotation.
The A image acquisition device 201 may also be mounted on a rotating device and driven by it to rotate around the acquisition target (head, face, iris, etc.). The rotating device includes a track and a bearing structure; the A image acquisition device 201 is mounted on the bearing structure and moves along the track. The track may be an arc-shaped track, and there may be one or more tracks. When a plurality of tracks are provided, the A image acquisition device 201 may be a plurality of cameras located on different tracks so as to capture different areas of the target. The tracks can be arranged vertically, side by side, or in combination. Even one track may carry two or more cameras: for example, on a ±90° track, one camera rotates from 0° to 90° while the other rotates from -90° to 0° (the values are merely examples), i.e., a plurality of cameras may acquire different areas of the target on the same track. In addition, with a single track the bearing structure can carry a plurality of cameras whose optical axes form an included angle, i.e., whose acquisition ranges differ at the same position. In this way the acquisition range of the system at any position and any time is enlarged, and acquisition efficiency is improved. A controller is used to control the movement of the A image acquisition device 201.
The A image acquisition device 201 may also be used in conjunction with a spatial light modulator. The spatial light modulator includes a plurality of optical units, each of which is a reflective, transmissive, or microlens structure, so that light rays from different angles of the target are deflected through the optical units to different acquisition regions of the A image acquisition device 201. Therefore no rotating device is needed, and images of the target from multiple angles can be acquired without arranging multiple image acquisition devices; the structure is more stable and simpler, and the cost is lower.
The A image acquisition device 201 acquires images of the target from multiple angles and transmits them to the processor 400, which synthesizes (as described in detail below) a 3D model of all or part of the target from the multi-angle images using a 3D synthesis algorithm, for example a 3D model of the face. The 3D model can be formed from point-cloud data and can also include texture information after skinning.
Area A also includes a display area 1005 for displaying transaction content to the customer. For example, when the customer performs a transfer, the display area 1005 may show the operation steps, the progress of the transfer and its result. Preferably, the display area is a touch-operable screen.
Area B includes a hand acquisition device 1002, which includes a carrier board 1004, a B image acquisition device 301 and a B light source 603, and may further include a B detection device 703.
The carrier board 1004 is made of transparent material and can include a hand-outline indication line for instructing the customer to open the hand in a fixed pattern and place it in a fixed area, facilitating standardized acquisition of 3D hand information. However, since the transparent material itself is imaged together with the hand, which affects 3D synthesis, the carrier board 1004 can instead be woven from high-strength thin wires that carry the hand, with large voids between the wires so that the camera can capture more of the hand. In this case the wires of the carrier board 1004 may be given different colors to indicate the hand outline. This is also one of the points of the invention.
The B image acquisition device 301 may be mounted on a rotating device and driven by it to rotate around the acquisition target (the hand). The rotating device includes a track and a bearing structure; the B image acquisition device 301 is mounted on the bearing structure and moves along the track. The track may be an arc-shaped track, and there may be one or more tracks. When a plurality of tracks are provided, the B image acquisition device 301 may be a plurality of cameras located on different tracks so as to capture different areas of the target. The tracks can be arranged vertically, side by side, or in combination. Even one track may carry two or more cameras: for example, on a ±90° track, one camera rotates from 0° to 90° while the other rotates from -90° to 0° (the values are merely examples), i.e., a plurality of cameras may acquire different areas of the target on the same track. In addition, with a single track the bearing structure can carry a plurality of cameras whose optical axes form an included angle, i.e., whose acquisition ranges differ at the same position. In this way the acquisition range of the system at any position and any time is enlarged, and acquisition efficiency is improved. The controller 500 is used to control the movement of the B image acquisition device 301.
The B image acquisition device 301 may also be used in conjunction with a spatial light modulator. The spatial light modulator includes a plurality of optical units, each of which is a reflective, transmissive, or microlens structure, so that light rays from different angles of the target are deflected through the optical units to different acquisition regions of the B image acquisition device 301. Therefore no rotating device is needed, and images of the target from multiple angles can be acquired without arranging multiple image acquisition devices; the structure is more stable and simpler, and the cost is lower.
The B image acquisition device 301 acquires images of the target from multiple angles and transmits them to the processor 400, which synthesizes (as described in detail below) a 3D model of all or part of the target from the multi-angle images using a 3D synthesis algorithm, for example a 3D model of the hand. The 3D model can be formed from point-cloud data and can also include texture information after skinning.
The B area may also include an operation area 1006, where the operation area 1006 is used to provide an interface for a customer to operate the DTM machine, such as a physical keyboard or a virtual keyboard.
The processor 400 preprocesses the pictures and sends the synthesized head, face, iris and/or hand 3D models to the server 2000 through a network. The server 2000 compares the acquired 3D models with the standard models stored in its database; if they match, identity authentication is complete and the client is allowed to perform the next operation. The specific authentication procedure is described in detail later.
Of course, since the DTM machine has limited computing power, the processor 400 may send the photographed pictures to the server 2000 after only preprocessing them, and the server 2000 may then perform feature-point extraction, matching, and 3D synthesis.
The face model is a part of the head where the skin is exposed, and includes, for example, a region below the forehead line, inside the left and right ears, and above the chin. Of course, the ear may not be included.
In some cases, a transaction device 1003 may also be included in area A and/or B. The transaction device includes a transaction portal that can accept various value documents into the DTM machine and/or a transaction portal through which various value goods, securities, currencies, etc. can leave the DTM machine. Because the customer's biological 3D information is identity information and also identifies the user's accounts at banks, hospitals, government offices, schools, shops, factories, enterprises and the like, the DTM machine can have no transaction entrance and only a transaction exit. Of course, since the DTM machine is the more advanced device, in some cases it may retain downward compatibility with the ATM: for example, it may allow the user to conduct a transaction with a physical card such as a bank card, with 3D identification used only as the identity authentication mode, or even only as an auxiliary mode (the user still authenticates with a password). In that case a transaction portal, such as a card-insertion slot or other credential portal, needs to be provided.
The DTM machine may not include area C. If area C is included, it bears the customer, senses the customer's weight with a pressure sensor, and sends the weight information to the server 2000 as an additional means of identification/authentication. An image acquisition device may also be arranged in area C to acquire 3D information of the user's feet; the acquisition mode and structure are similar to those of areas A and B.
DTM machine authentication method
Step 1: the server 2000 receives the 3D model containing the client's biometric features transmitted by the DTM machine.
The model may be a model of the face, head, iris, or hand, or a model of partial areas of these or other body parts. For example, only finger information of the client's hand is collected at acquisition time; or complete head information is acquired, but the processor 400 transmits only the cheek-portion model data to the server. In this way the data transmission volume can be reduced and the recognition speed improved.
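As an illustration of this partial-data idea, the hypothetical sketch below crops a captured point cloud (stored as an N x 6 XYZRGB array) to one region of interest before upload; the array layout, region bounds and function name are assumptions, not interfaces from the patent.

```python
import numpy as np

def crop_region(cloud: np.ndarray, lo: np.ndarray, hi: np.ndarray) -> np.ndarray:
    """Keep only the points whose XYZ fall inside the axis-aligned box [lo, hi]."""
    keep = np.all((cloud[:, :3] >= lo) & (cloud[:, :3] <= hi), axis=1)
    return cloud[keep]

# Stand-in for a real head scan: 10,000 random XYZRGB points.
head_cloud = np.random.rand(10000, 6) * 100.0 - 50.0
# Transmit only the (hypothetical) cheek portion instead of the full head.
cheek = crop_region(head_cloud,
                    lo=np.array([-40.0, -30.0, 0.0]),
                    hi=np.array([40.0, 30.0, 50.0]))
```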
Step 2: the server 2000 compares the received 3D model (application 3D data) with the stored standard 3D model (standard 3D data) to recognize the client's identity.
The working principle of point-cloud comparison and identification is as follows. First, the point cloud is the basic element constituting the 3D model; it contains spatial coordinate information (XYZ) and color information (RGB). Attributes of the point cloud include spatial resolution, point-position accuracy, surface normal vectors, and so on. Its characteristics are not affected by external conditions and do not change under translation and rotation. Reverse-engineering software can edit and process point clouds, for example IMAGEWARE, GEOMAGIC, CATIA, COPYCAD and Rapidform. The Tianmu point-cloud comparison and identification method is based on direct matching in the spatial domain: the iterative closest point (ICP) method. ICP is typically split into two steps: first, feature-point fitting; second, overall best fitting of the surface. The purpose of first fitting and aligning the feature points is to find and align the two point clouds to be compared in the shortest time. But the method is not limited thereto. For example, it may be:
In the first step, three or more feature points are selected as fitting key points in corresponding rigid areas of the two point clouds, and the feature points are matched directly through coordinate transformation.
ICP is used to register curves or surface fragments and is a very effective tool in 3D data reconstruction. Given a rough initial alignment of two 3D models, ICP iteratively seeks the rigid transformation between them that minimizes the alignment error, realizing registration of their spatial geometric relationship.
Given point sets P1 and P2 whose elements represent coordinate points on the two model surfaces, the ICP registration technique iteratively finds the closest corresponding points on the model surfaces, establishes a transformation matrix, and applies the transformation to one of the sets, stopping the iteration when a convergence condition is reached:
1.1 ICP algorithm

Input: P1, P2
Output: transformed P2

P2(0) = P2, l = 0
Do
    For each point p_i in P2(l)
        find the nearest point y_i in P1
    End For
    compute the registration error E
    If E is greater than a given threshold
        compute the transformation matrix T(l) between P2(l) and Y(l)
        P2(l+1) = T(l) · P2(l), l = l + 1
    Else
        stop
    End If
While ||P2(l+1) - P2(l)|| > threshold

wherein the registration error E is the mean squared distance between corresponding points, E = (1/N) Σ_{i=1..N} ||y_i - p_i(l)||².
1.2 Matching based on local feature points:
Taking facial information recognition as an example, a facial model is divided mainly into a rigid part and a plastic (deformable) part; plastic deformation affects alignment accuracy and hence the similarity measure. Since the first and second acquisitions of the plastic part may differ locally, one solution is to select feature points only in rigid areas. Feature points are attributes extracted from the object that remain stable under certain conditions; a common method fits and aligns them using the iterative closest point (ICP) method.
First, regions of the face little affected by expression are extracted, such as the nasal tip in the nose region, the outer corners of the eye sockets, the forehead region, the cheekbone region, and the ear region. The finger joints of the human body are rigid areas while the palm is a plastic area, so feature points are best selected in the finger regions. The iris is a rigid model.
Requirements for feature points:
1) Completeness: the features should imply as much object information as possible, distinguishing the object from objects of other categories; 2) the amount of data required for their expression should be as small as possible; 3) the features should preferably remain unchanged under model rotation, translation, and mirror transformations.
In 3D biometric recognition, the similarity of the input models is calculated by aligning the two 3D biometric model point clouds, with the registration error used as the difference measure.
In the second step, after the feature points are best fitted, the point-cloud data are aligned by best fitting the whole surface.
The third step is calculating the similarity, using the least squares method.
The least squares method is a mathematical optimization technique: it finds the best functional match for the data by minimizing the sum of squared errors, so that the sum of squared errors between the fitted data and the actual data is minimized. Least squares can also be used for curve fitting, and other optimization problems can be expressed in least-squares form by minimizing energy or maximizing entropy. The method is commonly used to solve curve-fitting problems and, by extension, complete surface-fitting problems. An iterative algorithm can accelerate convergence and quickly obtain the optimal solution.
If the 3D data model is input in STL file format, the deviation is determined by calculating the distance from the point cloud to the triangular patches; this requires establishing a plane equation for each triangular patch, the deviation being the point-to-plane distance. For models whose free-form surfaces are expressed as NURBS surfaces, such as IGES or STEP models, the point-to-surface distance must be computed with numerical optimization: the deviation is expressed by iteratively computing the minimum distance from each point of the point cloud to the NURBS surface, or the NURBS surface is discretized at a specified scale and the deviation approximated by point-to-point distances, or the model is converted to STL format for deviation calculation. Different coordinate-alignment and deviation-calculation methods yield different detection results, and the magnitude of the alignment error directly affects the accuracy of detection and the reliability of the assessment report.
Best-fit alignment averages the deviation over the whole model and terminates the iterative alignment process on the condition that the overall deviation is minimal. A 3D analysis of the registration result generates a result object output as the root mean square (RMS) of the error between the two models: the larger the RMS, the larger the difference between the two models at that location, and vice versa. Whether the acquired model matches the comparison object is then judged according to the comparison weight ratio.
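The two-step scheme above (closest-point correspondence, then whole-surface best fit, with the RMS registration error as the difference measure) can be sketched compactly as follows. This is a minimal illustration, not the patent's implementation: the SVD-based rigid-transform step, the SciPy k-d tree and the convergence tolerance are standard choices assumed here.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(P: np.ndarray, Q: np.ndarray):
    """Least-squares rigid transform (R, t) mapping point set P onto Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(P2: np.ndarray, P1: np.ndarray, max_iter=50, tol=1e-6):
    """Register the acquired cloud P2 onto the standard cloud P1.

    Returns the transformed P2 and the final RMS registration error,
    which serves as the difference measure between the two models."""
    tree = cKDTree(P1)
    prev_err = err = np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(P2)                 # step 1: closest points
        R, t = best_fit_transform(P2, P1[idx])     # step 2: best-fit transform
        P2 = P2 @ R.T + t
        err = float(np.sqrt(np.mean(dist ** 2)))   # RMS registration error
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return P2, err

# Identity would be accepted when the residual RMS error stays below a
# calibrated threshold, e.g. one derived from enrollment statistics.
```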
Step 3: the server 2000 sends a corresponding instruction to the DTM machine according to the comparison and recognition result.
If the identification passes, an instruction is sent to the DTM machine allowing the client's next operation. If the identification does not pass, an instruction is sent to the DTM machine prompting the client to authenticate again; if only the face was authenticated before, the hand can be authenticated this time, i.e., re-acquired, compared and recognized.
The above is the basic principle of DTM machine authentication. To achieve both authentication efficiency and authentication accuracy, however, identity authentication may be performed as follows.
(1) The processor 400 transmits a plurality of pictures of a low authentication level area (client face part) to the server 2000.
(2) The server 2000 performs 3D synthesis on the facial part pictures to obtain a client facial part 3D model.
(3) The server 2000 compares the synthesized partial 3D model of the client's face with the pieces of client-face 3D information stored in advance. If the similarity is within the threshold range, the client identity information is sent to the DTM machine; if not, the next prestored 3D face record is compared, until the matching client is found.
Since 1:N comparison is time-consuming, the server 2000 can, in the above identification, compare the partial face 3D model with the pre-stored 3D face information of customers. However, partial comparison has a certain error rate, so after the identification passes, the authentication level should be marked one level lower to ensure safety. This is also one of the points of the invention.
(4) If the corresponding client is found, the server 2000 sends an instruction to the DTM machine allowing the client's next operation, together with the current authentication level and the client identity information;
(5) The DTM machine receives the user's operation instruction and judges whether the level of the next operation selected by the client is covered by the authentication level currently passed. If the operation level is lower than or equal to the authentication level, the user is allowed to operate; if it is higher, the DTM machine transmits a higher-level authentication request to the server 2000.
(6) The server 2000 synthesizes the 3D model (e.g., the 3D model of the whole face, the hands, and/or the iris) of the high authentication level area while the DTM interacts with the client, compares it with the 3D information of the client that has been previously obtained, and if the authentication is passed, transmits an authentication pass instruction and the current authentication level to the DTM. Since the customer has been previously found in the database, the 3D model of the synthesized face, hand, and/or iris need not be used again in this step to compare with the 3D information of all customers in the database, but only the 3D information of the corresponding face, hand, and/or iris of the customer found previously. This can greatly improve authentication efficiency and speed. This is also one of the points of the invention.
(7) The DTM machine allows further operation by the client based on the authentication-pass instruction and the currently passed authentication level.
The correspondence between human-body regions and authentication levels can be set according to the actual situation, for example:
Region               | Iris | Fingerprint | Palm | Face (complete) | Hand (complete) | Ear
Authentication level | 10   | 6           | 4    | 8               | 9               | 3
The table is only used for reference, different authentication levels can be set according to actual needs, for example, the authentication levels can be set according to acquisition and synthesis difficulty, and the authentication levels can also be set according to the uniqueness of the biological characteristics. It will be appreciated that a complete organ is not required for authentication, for example a certain area of the face is also possible, so that these local areas can all be provided with a corresponding authentication level. And, some other body parts may set the corresponding authentication level as well.
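To make the staged flow concrete, here is a hedged sketch of the level-gating logic using the example table above. The function names, the Session structure and the matcher callback are illustrative assumptions; only the gating rule (the operation level must not exceed the passed authentication level, with 1:1 escalation against the already-identified customer) comes from the text.

```python
from dataclasses import dataclass
from typing import Callable

# Example levels, mirroring the table above.
AUTH_LEVEL = {"ear": 3, "palm": 4, "fingerprint": 6,
              "face": 8, "hand": 9, "iris": 10}

@dataclass
class Session:
    customer_id: str        # customer found earlier by the 1:N low-level search
    passed_level: int = 0   # highest authentication level passed so far

def required_region(op_level: int) -> str:
    """Cheapest biometric region whose level still covers the operation."""
    candidates = [r for r, lv in AUTH_LEVEL.items() if lv >= op_level]
    return min(candidates, key=AUTH_LEVEL.get)

def authorize(op_level: int, session: Session,
              match_region: Callable[[str, str], bool]) -> bool:
    if op_level <= session.passed_level:
        return True                       # current level already suffices
    region = required_region(op_level)
    # 1:1 escalation: compare only against the standard 3D data of the
    # customer already identified, not against the whole database.
    if match_region(session.customer_id, region):
        session.passed_level = AUTH_LEVEL[region]
        return True
    return False

# Usage: face (level 8) already passed; a level-9 operation silently
# triggers hand verification while the customer keeps interacting.
s = Session(customer_id="C123", passed_level=8)
ok = authorize(9, s, match_region=lambda cid, region: True)  # stub matcher
```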
The DTM machine works with two types of data. One is standard data, which must be acquired strictly according to the standard acquisition equipment, method and procedure. Such data are typically used as the standard against which other acquired data are compared. For example, when a customer first opens an account, the bank collects the user's biological 3D standard data and stores it in a background database; when the user later transacts business, the data acquired at that time are compared with the stored data to judge whether the user's identity is legitimate. Because these are standard data, the requirements on acquisition conditions, procedure and equipment are stricter, ensuring the comprehensiveness and accuracy of the data. The other type is acquired application data. For example, each time a customer transacts business, biological 3D information is collected on the DTM machine, yielding the customer's application data for that transaction. Comparing the application data with the standard data to recognize whether they belong to the same customer provides identification and authentication of the customer's identity.
Since standard data acquisition is performed when time is relatively ample, for example when the user opens an account, the standard data includes complete biometric 3D information of all the user's individual areas. When business is being processed, however, the acquired application data need not be complete 3D information; requiring it would greatly increase acquisition and comparison time. It is therefore preferable that the application data acquired at each transaction be partial biometric 3D information of the user, for example one or more of iris data, hand data, fingerprint data or face data. This arrangement of standard data and application data is also one of the points of the invention.

Although the above embodiment only exemplifies a two-step authentication method with a low and a high authentication level, those skilled in the art will understand that, since human biological features have different levels (as described above), a plurality of authentication steps may be provided accordingly: the first operation may be performed after the user passes the first authentication level, the second operation after the second authentication level, and the nth operation after the nth authentication level. One reason for this design is that biological 3D synthesis and recognition, being essentially different from 2D, takes a long time; low-level operations can therefore be satisfied quickly through low-level synthesis and authentication, avoiding waiting, while high-level synthesis and authentication proceed silently while the user is operating, occupying none of the user's waiting time and improving user satisfaction. This is also one of the points of the invention.
The server 2000 may be a remote cloud platform, a server or workstation located near the DTM machine, or even a server platform inside the DTM machine. Although the above describes the 3D synthesis step as performed in the server, those skilled in the art will appreciate that it may also be performed in the DTM machine.
Standard acquisition method of DTM (digital television) machine
The so-called standardized acquisition method means that whenever and wherever acquisition is performed, a consistent acquisition procedure and consistent acquisition conditions are followed, using acquisition equipment of the same structure.
1. Standardized light source
The light source provides illumination for the target so that the area to be acquired is illuminated with approximately uniform illuminance. The light source may comprise a plurality of sub-light-sources, or may be an integral light source providing illumination to different areas of the target from different directions. Because of the concave-convex contours of the target, the light source must illuminate from different directions to achieve uniform illuminance across the target's areas.
The detection device 700 may also be used to detect the target 300's reflected-light illuminance, light intensity, reflected-light color temperature, reflected-light wavelength, reflected-light position, reflected-light uniformity, sharpness of the reflected image, contrast of the reflected image, and/or any combination thereof, thereby controlling the light source's intensity, illuminance, color temperature, wavelength, direction, position, and/or any combination thereof. The detection device can therefore be an instrument specially measuring these parameters, or an image acquisition device such as a CCD, CMOS, camera or video camera. Preferably, the detection device and the image acquisition device are the same component, i.e., the image acquisition device performs the detection function of measuring the target's optical characteristics. Before images of the target are acquired, the image acquisition device detects whether the target's illumination meets the requirement; an appropriate illumination condition is reached by controlling the light source, and the image acquisition device 201 then begins acquiring the plurality of pictures for 3D synthesis.
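A minimal sketch of this closed loop, assuming a dimmable light source exposing a power attribute and a set_power() method, and a camera exposing grab(), none of which are interfaces defined by the patent: the camera itself acts as the detection device, and the source is adjusted until the measured brightness enters a standardized band.

```python
import numpy as np

TARGET, BAND = 128.0, 10.0      # illustrative standardized illumination band

def measure_brightness(frame: np.ndarray) -> float:
    """Mean gray level of the frame, a proxy for reflected-light illuminance."""
    return float(frame.mean())

def regulate(camera, light, max_steps=20, gain=0.2) -> bool:
    """Adjust light-source power until the subject's brightness is in band."""
    for _ in range(max_steps):
        level = measure_brightness(camera.grab())
        if abs(level - TARGET) <= BAND:
            return True         # illumination condition met; begin acquisition
        light.set_power(light.power + gain * (TARGET - level))
    return False                # could not standardize the lighting
```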
2. Image acquisition device acquisition position standardization
When 3D acquisition is performed, the direction of the optical axis of the image acquisition device changes relative to the target at different acquisition positions. The positions of two adjacent image acquisition devices, or two adjacent acquisition positions of one image acquisition device, satisfy the following condition:

L < δ · (T · d) / f

wherein L is the straight-line distance between the optical centers of the image acquisition device 1 at two adjacent acquisition positions; f is the focal length of the image acquisition device 1; d is the length or width of the rectangular photosensitive element (CCD) of the image acquisition device 1; T is the distance from the photosensitive element of the image acquisition device 1 to the target surface along the optical axis; and δ is an adjustment coefficient, δ < 0.603.
d is taken as the rectangle's length when the two positions are arranged along the length direction of the photosensitive element of the image acquisition device 1, and as the rectangle's width when the two positions are arranged along the width direction.
T is the distance from the photosensitive element to the surface of the target along the optical axis when the image acquisition device 1 is at either of the two positions. Alternatively, L may be taken as the straight-line distance between the optical centers of the image acquisition devices 1 at positions A_n and A_(n+1); with the distances from the photosensitive elements to the target surface along the optical axis at the adjacent positions A_(n-1), A_n, A_(n+1) and A_(n+2) denoted T_(n-1), T_n, T_(n+1) and T_(n+2) respectively, T = (T_(n-1) + T_n + T_(n+1) + T_(n+2)) / 4. The average may, of course, be taken over more than these four adjacent positions.
In principle L should be the straight-line distance between the optical centers of the two image acquisition devices 1. Since the optical center position is not always easy to determine, however, the center of the photosensitive element of the image acquisition device 1, the geometric center of the device, the axis center of the connection between the device and the pan-tilt head (or platform or stand), or the center of the proximal or distal lens surface may be used instead in some cases; experiments have shown that the resulting error is within an acceptable range.
In the prior art, parameters such as object size and field angle are generally used to estimate camera positions, and the positional relationship between two cameras is likewise expressed as an angle. An angle is inconvenient in practice because it is hard to measure accurately. Moreover, the object size changes with the measured object: for example, after acquiring 3D information of an adult head, the head size must be measured again before acquiring a child's head. Such inconvenient and repeated measurement introduces errors into the estimated camera positions. The present scheme, based on a large amount of experimental data, instead gives an empirical condition that the camera positions must satisfy, which avoids both measuring hard-to-measure angles and directly measuring the object size. In the empirical condition, d and f are fixed camera parameters supplied by the manufacturer when the camera and lens are purchased, so no measurement is needed; T is a single straight-line distance that can be measured conveniently with conventional means such as a ruler or a laser rangefinder. The empirical formula of the invention therefore makes the preparation process convenient and fast and improves the accuracy of camera placement, so that the cameras can be arranged in optimized positions that take both 3D synthesis accuracy and speed into account; specific experimental data are described below.
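Given the condition δ = (L × f) / (T × d) above, the maximum permitted spacing of adjacent acquisition positions follows directly. The following non-limiting sketch computes it; the example values in the comment are assumptions chosen for illustration, not the patent's experimental parameters:

    def max_camera_spacing(f_mm, d_mm, t_mm, delta=0.410):
        # Largest allowed straight-line distance L (in mm) between the
        # optical centers of two adjacent acquisition positions, from
        # L < delta * T * d / f. delta defaults to 0.410, the value the
        # text identifies as balancing synthesis effect and time.
        return delta * t_mm * d_mm / f_mm

    # Assumed example: a 50 mm lens, a photosensitive element 23.5 mm
    # long, and a target 500 mm from the element along the optical axis:
    # max_camera_spacing(50, 23.5, 500) -> 96.35 mm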
Experiments were carried out using the device provided by the invention; the camera lens was then replaced and the experiment repeated, and the lens was replaced and the experiment repeated a second time. (The three sets of experimental results are tabulated as images in the original publication; the conclusions drawn from them follow.)
From the above experimental results and extensive experimental experience it follows that δ should satisfy δ < 0.603, at which point a partial 3D model can be synthesized; although some parts fail to synthesize automatically, this is acceptable where requirements are low, and the missing parts can be compensated manually or by an alternative algorithm. When δ < 0.410, the balance between synthesis effect and synthesis time is optimal; δ < 0.356 may be chosen for a better synthesis effect, at the cost of a longer synthesis time. To improve the effect further, δ < 0.311 may be selected. At δ = 0.681, synthesis fails altogether. It should be noted that the above ranges are merely preferred embodiments and do not limit the scope of protection.
As the above experiments show, to determine the photographing positions of the camera one needs only the camera parameters (focal length f and CCD size d) and the distance T from the camera CCD to the object surface, which makes the device easy to design and debug. Since the camera parameters are fixed at purchase and stated in the product description, they are readily available, and the camera position follows directly from the formula without cumbersome field-angle or object-size measurements. In particular, when the lens must be replaced in some situations, the new position is obtained simply by substituting the new lens's parameter f into the calculation; likewise, when different objects are acquired, no object-size measurement is needed, however much the sizes vary. The camera positions determined by the invention balance synthesis time against synthesis effect. The above empirical condition is thus one of the inventive points of the present invention.
The above data were obtained from experiments performed to verify the condition of the formula and do not limit the invention; even without these data, the objectivity of the formula is unaffected. Those skilled in the art can adjust the equipment parameters and step details as required to perform experiments and obtain other data that likewise conform to the formula.
The rotational motion of the invention means that, during acquisition, the acquisition plane at the previous position and that at the subsequent position intersect rather than lie parallel, or equivalently that the optical axis of the image acquisition device at the previous position intersects, rather than parallels, its optical axis at the subsequent position. That is, any movement of the acquisition region of the image acquisition device around, or partly around, the target object can be regarded as a relative rotation of the two. Although the embodiments of the invention mostly exemplify orbital rotational motion, it will be understood that the invention applies whenever the non-parallel motion between the acquisition region of the image acquisition device and the target object is rotational; the scope of the invention is not limited to the orbital rotation of the embodiments.
Adjacent acquisition positions are two consecutive positions on the movement track at which acquisition actions occur while the image acquisition device moves relative to the target object. This is straightforward when the image acquisition device itself moves. When it is the target object that moves and thereby produces the relative motion, the target's motion is converted, by the relativity of motion, into an equivalent motion of the image acquisition device, and the two adjacent positions at which acquisition occurs on the converted movement track are then used.
3. Target position standardization
The system further includes a display connected to the camera, which shows the object being photographed. Marks 800 are shown on the display; a mark 800 may be a cross-hair, a dot, a circle, a line, a rectangle, an irregular pattern, or a combination thereof. The camera image of the object and the marks are superimposed on the display, and the position of the object can be adjusted while watching the display so that a specific area of the object is aligned with the marks. For example, when the photographed target is a human head or face, the horizontal line of the cross mark is aligned with the corners of the eyes and the vertical line with the nose; when the target is a human eye, the horizontal line is aligned with the eye corners and the vertical line with the nose, or with the midpoint of the line connecting the inner corners of the two eyes; when the target is a human hand, the mark line is aligned with the finger midline or the finger edge. By adjusting the target's position against the marks each time the camera is at its initial position, the target position is kept consistent across acquisitions and the synthesis complexity is reduced.
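A non-limiting sketch of the mark overlay, assuming an OpenCV-based display (the patent does not specify an implementation; the window name and mark geometry here are illustrative):

    import cv2

    def show_with_cross_mark(frame, center=None, size=40):
        # Draw a cross mark 800 on the live camera frame so the operator
        # can align a specific area of the target (e.g. the eye corners
        # on the horizontal line and the nose on the vertical line).
        h, w = frame.shape[:2]
        cx, cy = center if center is not None else (w // 2, h // 2)
        green = (0, 255, 0)
        cv2.line(frame, (cx - size, cy), (cx + size, cy), green, 1)
        cv2.line(frame, (cx, cy - size), (cx, cy + size), green, 1)
        cv2.imshow("alignment", frame)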
4. Target background standardization
A background plate is arranged on the side opposite the image acquisition device and provides a plain background pattern for the target object. The background plate is entirely, or at least predominantly, a solid color, typically a white or black panel chosen according to the dominant color of the target. The plate is usually flat, but may preferably also be curved, e.g. concave, convex or spherical; in certain applications it may even have a wavy surface, or be spliced from several sections, for example three planar sections forming an overall concave shape, or a plane joined to a curved surface. Besides the surface shape, the edge shape can also be chosen as required: typically straight, giving a rectangular plate, though curved edges are possible in some applications.
When the camera rotates during capture, the background plate should rotate in synchronization with it; when multiple fixed cameras are used for capture, the background plate may remain stationary.
5. Image preprocessing standardization
The picture of the target object is subjected to standardized preprocessing: the useful information is extracted and the remainder is filled with blank data. That is, the object contour is first found; the object (the effective information area) is kept inside the contour while the rest of the image (the non-effective area) is removed, the removed part being filled with a solid color or, preferably, blank data, producing a rectangular picture of a predetermined size. Faces, hands, bodies, limbs, feet and other targets can all be preprocessed in this standardized way before 3D synthesis. For example, for 3D synthesis of a face, the hairline-auricle-chin line is taken as the edge: the facial information inside it is kept and the other parts of the picture are removed, yielding a standardized preprocessed picture.
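As a non-limiting sketch of this preprocessing, assuming OpenCV is used (the Otsu thresholding, largest-contour heuristic and 512x512 output size are assumptions; the patent requires only that non-effective areas be blanked and the result fit a predetermined rectangular size):

    import cv2
    import numpy as np

    def standardize_picture(img, out_size=(512, 512)):
        # Keep the largest contour (the effective information area),
        # blank everything outside it, and resize to a predetermined size.
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        keep = np.zeros_like(mask)
        if contours:
            largest = max(contours, key=cv2.contourArea)
            cv2.drawContours(keep, [largest], -1, 255, thickness=-1)
        kept = cv2.bitwise_and(img, img, mask=keep)  # non-target -> blank
        return cv2.resize(kept, out_size)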
The above describes how standardized acquisition is performed; in practical acquisition applications the conditions often do not permit full standardization (although standardized acquisition can certainly be implemented when the conditions match). Moreover, in practice it is not strictly necessary to standardize acquisition in order to obtain application data: some of the standard items above may be relaxed according to actual conditions. At least one of the items should remain standardized, however, preferably the acquisition position of the image acquisition device. For example, once the bank has stored a customer's standardized data, the customer may later operate a DTM machine unassisted, and the architecture of that DTM machine may differ slightly from the one that collected the standardized data; this is permitted.
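Which of the five items (sections 1-5 above) are enforced at a given acquisition station could be recorded as a simple configuration; in this non-limiting sketch the field names are illustrative, not terms from the patent:

    from dataclasses import dataclass

    @dataclass
    class AcquisitionStandardization:
        # True means the corresponding item is held to the standard.
        light_source: bool = False
        camera_positions: bool = True   # preferably always standardized
        target_position: bool = False
        background: bool = False
        preprocessing: bool = False

        def is_acceptable(self) -> bool:
            # at least one item must remain standardized
            return any(vars(self).values())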
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functionality of some or all of the components according to embodiments of the present invention may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present invention can also be implemented as an apparatus or device program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.
By now it should be appreciated by those skilled in the art that while a number of exemplary embodiments of the invention have been shown and described herein in detail, many other variations or modifications of the invention consistent with the principles of the invention may be directly ascertained or inferred from the present disclosure without departing from the spirit and scope of the invention. Accordingly, the scope of the present invention should be understood and deemed to cover all such other variations or modifications.

Claims (9)

1. An intelligent service processing device based on 3D information identification, characterized in that it comprises:
The biological information acquisition module is used for acquiring biological characteristic data of a user;
The client information authentication module is used for realizing authentication of the client identity according to the comparison result of the acquired application three-dimensional data and the standard three-dimensional data;
The client operation module is used for providing service interaction for the user;
wherein the standard three-dimensional data comprises complete biometric 3D information of all regions of the user, and the application three-dimensional data comprises biometric 3D information of part of the user's regions; the data range of the application three-dimensional data is smaller than the data range of the standard three-dimensional data;
wherein the comparison of the application three-dimensional data with the standard three-dimensional data comprises: comparing application three-dimensional data synthesized from images of a low-authentication-level region with the standard three-dimensional data, and/or comparing application three-dimensional data synthesized from images of a high-authentication-level region with the standard three-dimensional data;
wherein, when the biological information is acquired, the positions of the image acquisition device satisfy the following condition:

δ = (L × f) / (T × d), with δ < 0.603

where L is the straight-line distance between the optical centers of the image acquisition device at two adjacent acquisition positions; f is the focal length of the image acquisition device; d is the length or width of the rectangular photosensitive element of the image acquisition device; T is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis; and δ is the adjustment coefficient.
2. The intelligent service processing device according to claim 1, wherein: the application three-dimensional data includes 3D information of the biometric features of a plurality of authentication levels.
3. The intelligent service processing device according to claim 1, wherein: the authentication comprises: first comparing the 3D information of the user's low-authentication-level biometric features with the pre-stored standard 3D information of the user, and allowing the user to operate low-authentication-level services after the comparison passes.
4. The intelligent service processing device according to claim 1, wherein: the authentication comprises: comparing the 3D information of the high-authentication-level biometric features with the user's standard 3D information, and allowing the user to operate high-authentication-level services after the comparison passes.
5. The intelligent service processing device according to claim 4, wherein: before the 3D information of the high-authentication-level biometric features is compared with the user's standard 3D information, the 3D information of the high-authentication-level biometric features is generated; its generation and its comparison are carried out while the user performs service interaction.
6. The intelligent service processing device according to claim 4 or 5, wherein: when the 3D information of the high-authentication-level biometric features is compared, the comparison object is the standard 3D information of the user screened out by the low-authentication-level recognition.
7. The intelligent service processing device according to claim 1, wherein: delta <0.410, delta <0.356, delta <0.311, delta <0.284, delta <0.261, delta <0.241, or delta <0.107.
8. The intelligent service processing device according to claim 1, wherein: the collected user biometric data is sent to a local processor or a server for three-dimensional data synthesis, forming the 3D information of the user's biometric features.
9. The intelligent service processing device according to claim 1, wherein: the standard three-dimensional data is data of a predetermined size and dimension.
CN202110312252.8A 2019-12-12 2019-12-12 Intelligent service processing equipment based on 3D information identification Active CN113011348B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110312252.8A CN113011348B (en) 2019-12-12 2019-12-12 Intelligent service processing equipment based on 3D information identification

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911276153.8A CN111160137B (en) 2019-12-12 2019-12-12 Intelligent business processing equipment based on biological 3D information
CN202110312252.8A CN113011348B (en) 2019-12-12 2019-12-12 Intelligent service processing equipment based on 3D information identification

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201911276153.8A Division CN111160137B (en) 2019-12-12 2019-12-12 Intelligent business processing equipment based on biological 3D information

Publications (2)

Publication Number Publication Date
CN113011348A CN113011348A (en) 2021-06-22
CN113011348B true CN113011348B (en) 2024-05-14

Family

ID=70557021

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201911276153.8A Active CN111160137B (en) 2019-12-12 2019-12-12 Intelligent business processing equipment based on biological 3D information
CN202110312252.8A Active CN113011348B (en) 2019-12-12 2019-12-12 Intelligent service processing equipment based on 3D information identification

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201911276153.8A Active CN111160137B (en) 2019-12-12 2019-12-12 Intelligent business processing equipment based on biological 3D information

Country Status (1)

Country Link
CN (2) CN111160137B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101174949A (en) * 2006-10-30 2008-05-07 华为技术有限公司 Biological authentication method and system
CN103824068A (en) * 2014-03-19 2014-05-28 上海看看智能科技有限公司 Human face payment authentication system and method
CN104376249A (en) * 2014-11-28 2015-02-25 苏州福丰科技有限公司 Automatic teller system and processing method based on three-dimensional face recognition
CN104680375A (en) * 2015-02-28 2015-06-03 优化科技(苏州)有限公司 Identification verifying system for living human body for electronic payment
CN105453524A (en) * 2013-05-13 2016-03-30 霍约什实验室Ip有限公司 System and method for authorizing access to access-controlled environments
CN106790260A (en) * 2017-02-03 2017-05-31 国政通科技股份有限公司 A kind of multiple-factor identity identifying method
CN108269187A (en) * 2018-01-29 2018-07-10 深圳壹账通智能科技有限公司 Verification method, device, equipment and the computer storage media of financial business
CN108319930A (en) * 2018-03-09 2018-07-24 百度在线网络技术(北京)有限公司 Identity identifying method, system, terminal and computer readable storage medium
CN108416312A (en) * 2018-03-14 2018-08-17 天目爱视(北京)科技有限公司 A kind of biological characteristic 3D data identification methods and system taken pictures based on visible light
CN109035379A (en) * 2018-09-10 2018-12-18 天目爱视(北京)科技有限公司 A kind of 360 ° of 3D measurements of object and information acquisition device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100631763B1 (en) * 2004-07-26 2006-10-09 삼성전자주식회사 3D motion graphic user interface and method and apparatus for providing same
AT507759B1 (en) * 2008-12-02 2013-02-15 Human Bios Gmbh REQUEST-BASED PERSON IDENTIFICATION PROCEDURE
US9384486B2 (en) * 2014-07-15 2016-07-05 Verizon Patent And Licensing Inc. Secure financial payment
US9560345B2 (en) * 2014-12-19 2017-01-31 Disney Enterprises, Inc. Camera calibration
CN105391859A (en) * 2015-11-09 2016-03-09 小米科技有限责任公司 Switching method and apparatus of scene modes
CN105243740B (en) * 2015-11-25 2017-10-24 四川易辨信息技术有限公司 Card safety identification authentication system and implementation method based on biometrics identification technology
EP3729309A4 (en) * 2017-12-21 2021-08-25 Samsung Electronics Co., Ltd. Systems and methods for biometric user authentication
CN108334874A (en) * 2018-04-04 2018-07-27 北京天目智联科技有限公司 A kind of 3D four-dimension iris image identification equipment


Also Published As

Publication number Publication date
CN111160137A (en) 2020-05-15
CN111160137B (en) 2021-03-12
CN113011348A (en) 2021-06-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant