WO2020062848A1 - Human face identification method, photocenter calibration method and terminal - Google Patents


Publication number
WO2020062848A1
WO2020062848A1 · PCT/CN2019/084183
Authority
WO
WIPO (PCT)
Prior art keywords
terminal
image
points
light
dot matrix
Prior art date
Application number
PCT/CN2019/084183
Other languages
French (fr)
Chinese (zh)
Inventor
Zhu Hongbo (朱洪波)
Liu Kun (刘昆)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2020062848A1 publication Critical patent/WO2020062848A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Definitions

  • the present application relates to the technical field of terminals, and in particular, to a face recognition method, a light center calibration method, and a terminal.
  • Step 1: The dot-matrix projector on the mobile phone projects infrared light onto the human face, and the human face reflects the light projected by the dot-matrix projector.
  • Step 2: The infrared camera on the mobile phone collects the light reflected from the face to obtain a face image.
  • Step 3: The mobile phone compares the face image collected by the infrared camera with the face image stored in advance. If they are the same, the mobile phone is unlocked.
  • a hacker can input other face images (face images collected by an infrared camera on a different mobile phone, one that does not perform the verification) into the mobile phone. After the mobile phone obtains such a face image, it compares it with the pre-stored face image, and there is a possibility that face recognition will pass. It can be seen that, in the prior art, the face unlocking method of the terminal has low security.
  • Embodiments of the present application provide a face recognition method, a light center calibration method, and a terminal, so as to improve the security of face unlocking of the terminal.
  • an embodiment of the present application provides a face recognition method.
  • the method is applicable to a terminal (such as a mobile phone, an iPad, etc.).
  • the method includes: a dot matrix projector on the terminal emits light, and a camera on the terminal collects an image to be verified; the terminal determines N first image points on the image to be verified, where N is an integer greater than or equal to 1; the terminal determines, based on the N first image points and the parameters of the camera, N first object points corresponding to the N first image points; the terminal judges whether the N first object points are on the emission light of the dot matrix projector; if the N first object points are on the emission light of the dot matrix projector, the terminal judges whether the image to be verified is consistent with a pre-stored face image; if the image to be verified is consistent with the pre-stored face image, the terminal determines that the face recognition is successful.
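The judgment steps above can be sketched as follows. This is a minimal illustration, not the application's implementation: the helper name `point_on_ray`, the tolerance, and the representation of an emission ray as an (origin, direction) pair are assumptions.

```python
import numpy as np

def point_on_ray(point, origin, direction, tol=1e-3):
    """Check whether a 3-D object point lies on the ray starting at `origin`
    along `direction` (e.g. the emission light of one light emitting point)."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    v = np.asarray(point, dtype=float) - np.asarray(origin, dtype=float)
    t = v @ d                             # projection length along the ray
    if t < 0:                             # behind the light emitting point
        return False
    residual = np.linalg.norm(v - t * d)  # distance from the point to the ray
    return residual <= tol

def face_recognition(object_points, rays, image_matches_stored):
    """Succeed only if every first object point lies on some emission ray
    AND the image to be verified matches the pre-stored face image."""
    for p in object_points:
        if not any(point_on_ray(p, o, d) for o, d in rays):
            return False  # not produced by the local dot matrix projector
    return image_matches_stored
```

A forged image fed directly into the pipeline yields object points that do not lie on any emission ray, so verification fails before the face comparison is even reached.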
  • when the terminal performs face recognition, after collecting the image to be verified, it can determine whether the object points corresponding to the image points on the image to be verified are on the emission light of the dot matrix projector, that is, whether the image to be verified was obtained by collecting the light that the local dot matrix projector emitted and the object reflected. In this way, it is possible to prevent hackers from inputting other face images (images not obtained by collecting the light reflected from the emitted light on the object) into the mobile phone to pass the verification.
  • Such a face recognition method has high security.
  • the face recognition method provided in the embodiment of the present application is implemented based on the basic optical imaging principle. The solution is simple and the calculation amount is small, which helps reduce the calculation load and improve efficiency.
  • the Jth light emitting point among the K light emitting points of the dot matrix projector emits light, where K is an integer greater than or equal to 1, and J is an integer greater than or equal to 1 and less than or equal to K
  • the camera on the terminal collects the first reference picture and the second reference picture
  • the terminal selects M second image points on the first reference image, where M is an integer greater than or equal to 2
  • the terminal determines, according to the M second image points and a matching algorithm, M third image points on the second reference picture that match the M second image points one by one.
  • the terminal determines the corresponding M second object points according to the M second image points and the parameters of the camera, and determines the corresponding M third object points according to the M third image points and the parameters of the camera, where the Pth second object point matches the Pth third object point;
  • the terminal calculates the coordinates of a virtual light center of the Jth light emitting point of the dot matrix projector, where the virtual light center is obtained by the intersection of M first straight lines, and the Pth second object point is connected to the Pth third object point to obtain a first straight line.
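In practice the M first straight lines will not meet in a single exact point, so the intersection can be taken in the least-squares sense. The following sketch (function names and the use of NumPy are assumptions, not from the application) minimizes the summed squared distance from the candidate center to each line joining a second object point to its matching third object point:

```python
import numpy as np

def virtual_light_center(second_points, third_points):
    """Least-squares intersection of the M first straight lines, where the
    Pth line joins the Pth second object point to the Pth third object point.
    Requires at least two non-parallel lines, otherwise A is singular."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p2, p3 in zip(second_points, third_points):
        a = np.asarray(p2, dtype=float)
        d = np.asarray(p3, dtype=float) - a
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the line
        A += P                          # accumulate normal equations
        b += P @ a
    return np.linalg.solve(A, b)        # point minimizing total distance
```

With lines that do intersect exactly, the solver recovers the common point; with noisy matches it returns the closest point to all lines, which serves as the calibrated virtual light center.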
  • the terminal may calibrate the virtual light center of the light emitting point on the dot matrix projector.
  • from the virtual light center, the emission light of the light emitting point, that is, the M first straight lines, may be determined.
  • the terminal can determine the emission light of the light emitting points on the dot matrix projector, so that after the terminal obtains the image to be verified, it determines whether the object points corresponding to the image points on the image to be verified are on the emission light of the light emitting points on the dot matrix projector.
  • Such a face recognition method has high security.
  • before the terminal judges whether the N first object points are on the emission light of the dot matrix projector, the Jth light emitting point among the K light emitting points of the dot matrix projector emits light, where K is an integer greater than or equal to 1, and J is an integer greater than or equal to 1 and less than or equal to K;
  • the camera on the terminal collects a first reference picture and a second reference picture; the first reference picture is obtained by the terminal collecting an object at a first distance from the terminal, and the second reference picture is obtained by collecting the object at a second distance; the second distance is greater than the first distance;
  • the terminal selects M second image points on the first reference picture, where M is an integer greater than or equal to 2; and the terminal determines, according to the M second image points and the matching algorithm, M third image points on the second reference picture that match the M second image points one by one.
  • the terminal may perform temperature drift compensation on the coordinates of the determined object points. After the temperature drift compensation, the coordinates of the object points are more accurate, thereby yielding more accurate virtual optical center coordinates and improving the accuracy of the optical center calibration. Further, when the coordinates of the light center and the object points are more accurate, the emission light of the light emitting points on the dot matrix projector is more accurate, the judgment of whether the object points corresponding to the image points on the image to be verified are on the emission light is more accurate, and thus face recognition is more accurate.
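The application does not specify the form of the temperature drift compensation. As a purely illustrative sketch, a simple linear model scales object-point coordinates by a factor proportional to the temperature difference from calibration; the coefficient `k` and the function name are hypothetical and would come from per-device characterization:

```python
def compensate_temperature_drift(points, temp_now, temp_calib, k=1e-4):
    """Hypothetical linear drift model: scale object-point coordinates by
    (1 + k * dT), where dT is the deviation from the calibration
    temperature. The coefficient k here is purely illustrative."""
    dT = temp_now - temp_calib
    s = 1.0 + k * dT
    return [(x * s, y * s, z * s) for (x, y, z) in points]
```

At the calibration temperature the compensation is the identity, so it only perturbs coordinates when the device has warmed or cooled since calibration.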
  • the terminal determining whether the N first object points are on the emission light of the dot matrix projector includes: the terminal determines, according to the coordinates of the K virtual light centers of the dot matrix projector and the equations of the K * M emission rays, whether the N first object points are on the emission rays of the K light emitting points.
  • the terminal obtains the equations of the M emission rays corresponding to each of the K light emitting points, obtaining the equations of K * M emission rays in total. After the terminal obtains the image to be verified, it determines whether the object points corresponding to the image points on the image to be verified satisfy any of the equations of the K * M emission rays. In this way, it is possible to prevent, as much as possible, hackers from entering other face images into the mobile phone to pass the verification. Such a face recognition method has high security.
  • the terminal determining whether the N first object points are on the emission light of the dot matrix projector includes: the terminal determines, according to the coordinates of the K virtual light centers of the dot matrix projector and the coordinates of the calibrated object points, whether the N first object points are on at least one of the K lines connecting the virtual light centers and the calibrated object points. If a first object point is on at least one of the K lines, it is determined that the N first object points are on the emission light of the dot matrix projector; if it is not on at least one of the K lines, it is determined that the N first object points are not on the emission light of the dot matrix projector.
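The judgment against the K lines connecting each virtual light center to its calibrated object point can be sketched as a point-to-line distance test. This is a minimal illustration; the tolerance, coordinates, and helper names are assumptions, not from the application:

```python
import numpy as np

def on_center_to_point_line(q, center, calibrated, tol=1e-3):
    """True if candidate object point q lies (within tol) on the line
    through the virtual light center and the calibrated object point."""
    c = np.asarray(center, dtype=float)
    d = np.asarray(calibrated, dtype=float) - c
    d = d / np.linalg.norm(d)
    v = np.asarray(q, dtype=float) - c
    # distance from q to the line = length of v minus its projection on d
    return np.linalg.norm(v - (v @ d) * d) <= tol

def all_points_on_lines(first_object_points, lines, tol=1e-3):
    """lines: list of (virtual light center, calibrated object point) pairs.
    Every first object point must lie on at least one of the K lines."""
    return all(
        any(on_center_to_point_line(q, c, p, tol) for c, p in lines)
        for q in first_object_points
    )
```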
  • the terminal obtains the light center and the calibrated object point of each of the K light emitting points. After the terminal obtains the image to be verified, it determines whether the object points corresponding to the image points on the image to be verified are on at least one of the lines connecting the light center of a light emitting point and the corresponding calibrated object point. In this way, it is possible to prevent, as much as possible, hackers from entering other face images into the mobile phone to pass the verification. Such a face recognition method has high security.
  • before the dot-matrix projector on the terminal emits light and the camera on the terminal collects the image to be verified, the terminal is in a lock screen state; after the terminal determines that the face recognition is successful, the method further includes: unlocking the terminal.
  • the face recognition method provided in the embodiment of the present application can unlock the device, which helps improve the security of face unlocking.
  • before the dot matrix projector on the terminal emits light and the camera on the terminal collects the image to be verified, the terminal displays a payment verification interface; after the terminal determines that the face recognition is successful, the method further includes: the terminal executing a payment process.
  • the face recognition method provided in the embodiment of the present application can also perform online payment, which helps improve payment security.
  • before the dot-matrix projector on the terminal emits light and the camera on the terminal collects the image to be verified, the terminal displays a permission or password setting interface; after the terminal determines that face recognition is successful, the method further includes: the terminal performing a permission or password setting operation.
  • the face recognition method provided in the embodiment of the present application can also set permissions or passwords, which helps improve the security of the password or permission settings.
  • otherwise, the terminal determines that face recognition has failed.
  • an embodiment of the present application provides a method for calibrating an optical center.
  • the method is applicable to a terminal having a dot matrix projector or other terminals capable of projecting light.
  • the method includes: the Jth light emitting point among the K light emitting points of the dot matrix projector on the terminal emits light, K is an integer greater than or equal to 1, J is an integer less than or equal to K, and greater than or equal to 1;
  • the camera on the terminal collects a first reference picture and a second reference picture; the first reference picture is obtained by the terminal collecting an object at a first distance from the terminal, and the second reference picture is obtained by collecting the object at a second distance; the second distance is greater than the first distance;
  • the terminal selects M second image points on the first reference picture, where M is an integer greater than or equal to 2;
  • the terminal determines the corresponding M second object points according to the M second image points and the parameters of the camera, and determines the corresponding M third object points according to the M third image points and the parameters of the camera.
  • the terminal calculates the coordinates of a virtual light center of the Jth light emitting point of the dot matrix projector; the virtual light center is obtained by intersecting M first straight lines, and the Pth second object point is connected to the Pth third object point to obtain a first straight line.
  • an embodiment of the present application further provides a method for calibrating an optical center, which is applicable to a terminal having a dot matrix projector or other terminals capable of projecting light.
  • the method includes: the Jth light emitting point among the K light emitting points of the dot matrix projector on the terminal emits light, where K is an integer greater than or equal to 1, and J is an integer greater than or equal to 1 and less than or equal to K; the camera on the terminal captures a first reference picture and a second reference picture; the first reference picture is obtained by the terminal collecting an object at a first distance from the terminal, and the second reference picture is obtained by collecting the object at a second distance; the second distance is greater than the first distance; the terminal selects M second image points on the first reference picture, where M is an integer greater than or equal to 2; the terminal determines, according to the M second image points and the matching algorithm, M third image points on the second reference picture that match the M second image points one by one, where the Pth second image point matches the Pth third image point, and P is an integer greater than or equal to 1 and less than or equal to M.
  • the terminal calculates the coordinates of a virtual optical center of the Jth light emitting point of the dot matrix projector; the virtual optical center is obtained by the intersection of M first straight lines, and the Pth second object point after temperature drift compensation is connected to the Pth third object point after temperature drift compensation to obtain a first straight line.
  • an embodiment of the present application further provides a face recognition method, which is applicable to a terminal (such as a mobile phone, an iPad, etc.).
  • the method includes: a dot matrix projector on the terminal emits light, and a camera on the terminal collects the image to be verified; the terminal determines whether the image to be verified is consistent with a pre-stored face image; if the image to be verified is consistent with the pre-stored face image, the terminal determines N first image points on the image to be verified, where N is an integer greater than or equal to 1; the terminal determines, according to the N first image points and the parameters of the camera, N first object points corresponding to the N first image points; the terminal judges whether the N first object points are on the emission light of the dot matrix projector; if the N first object points are on the emission light of the dot matrix projector, the terminal determines that face recognition succeeded.
  • an embodiment of the present application provides a terminal, including a dot matrix projector, a camera, a processor, and a memory.
  • the dot matrix projector is used to emit light; the camera is used to collect images to be verified; the memory is used to store one or more computer programs; when the one or more computer programs stored in the memory are executed by the processor, the terminal is enabled to implement the first aspect or the fourth aspect, or any one of the possible design methods of the first aspect or the fourth aspect.
  • an embodiment of the present application provides a terminal, including a dot matrix projector, a camera, a processor, and a memory.
  • the Jth light emitting point among the K light emitting points of the dot matrix projector emits light
  • the camera is used to collect the first reference picture and the second reference picture
  • the memory is used to store one or more computer programs;
  • when the one or more computer programs stored in the memory are executed by the processor, the terminal can implement the second aspect or any one of the possible design methods of the second aspect; or, when the one or more computer programs stored in the memory are executed by the processor, the terminal can implement the third aspect or any one of the possible design methods of the third aspect.
  • an embodiment of the present application further provides a terminal, where the terminal includes a module/unit that executes the first aspect or any possible design method of the first aspect; or the terminal includes a module/unit that executes the second aspect or any possible design method of the second aspect; or the terminal includes a module/unit that executes the third aspect or any possible design method of the third aspect; or the terminal includes a module/unit that executes the fourth aspect or any possible design method of the fourth aspect.
  • These modules/units can be implemented by hardware, or by hardware executing corresponding software.
  • a computer-readable storage medium is further provided in an embodiment of the present application.
  • the computer-readable storage medium includes a computer program, and when the computer program is run on a terminal, the terminal is caused to execute the first aspect or any one of the possible design methods of the first aspect; or, when the computer program is run on the terminal, the terminal is caused to execute the second aspect or any one of the possible design methods of the second aspect; or, when the computer program is run on the terminal, the terminal is caused to execute the third aspect or any one of the possible design methods of the third aspect; or, when the computer program is run on the terminal, the terminal is caused to execute the fourth aspect or any one of the possible design methods of the fourth aspect.
  • an embodiment of the present application further provides a computer program product that, when the computer program product runs on a terminal, causes the terminal to execute the first aspect or any one of the foregoing possible designs of the first aspect.
  • FIG. 1 is a schematic diagram of an image plane coordinate system, a camera coordinate system, and a world coordinate system according to an embodiment of the present application;
  • FIG. 2 is a schematic structural diagram of a mobile phone according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an application scenario according to an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a mobile phone according to an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of calibrating a virtual optical center of a dot matrix projector provided by an embodiment of the present application
  • FIG. 6 is a schematic diagram of a virtual optical center calibration process of a dot matrix projector provided by an embodiment of the present application
  • FIG. 7 is a schematic diagram of a first reference diagram and a second reference diagram provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of light emitted from a light emitting point on a dot matrix projector provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a dot matrix projector provided by an embodiment of the present application.
  • FIG. 10 is a schematic flowchart of a face recognition method according to an embodiment of the present application.
  • FIG. 11 is a schematic flowchart of another face recognition method according to an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a mobile phone display interface according to an embodiment of the present application.
  • FIG. 13 is a schematic diagram of a mobile phone display interface according to an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of a terminal according to an embodiment of the present application.
  • the image plane coordinate system involved in the embodiment of the present application is a coordinate system established on the imaging plane of the camera.
  • the origin of the image plane coordinate system is the center of the imaging plane.
  • the camera collects the light reflected by the object and presents these light on the imaging plane to obtain an image of the object.
  • the image plane coordinate system is represented by o1-u-v. Please refer to FIG. 1.
  • o1, the center of the imaging plane, is the origin of the image plane coordinate system
  • the u axis and the v axis are the coordinate axes of the image plane coordinate system, respectively.
  • the u axis is the horizontal axis of the image plane coordinate system
  • the v axis is the vertical axis of the image plane coordinate system.
  • the camera coordinate system involved in the embodiment of the present application is the coordinate system whose origin is the center of the camera; hereinafter, the camera coordinate system is represented by o2-x-y-z. Please continue to refer to FIG. 1.
  • o2 is the camera center and the origin of the camera coordinate system.
  • the x-axis, y-axis, and z-axis are the coordinate axes of the camera coordinate system.
  • the world coordinate system involved in the embodiments of the present application, that is, the absolute coordinate system, can be used to calibrate the position of a camera or a dot matrix projector.
  • the world coordinate system is represented by o3-X-Y-Z. Please continue to refer to FIG. 1.
  • o3 is the origin of the world coordinate system
  • the X axis, Y axis, and Z axis are the coordinate axes of the world coordinate system.
  • the points in the image plane coordinate system can be converted into the camera coordinate system or the world coordinate system through a corresponding conversion formula (described later).
  • the points in the camera coordinate system or the world coordinate system can be converted into the image plane coordinate system through a corresponding conversion formula (described later).
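Under a standard pinhole camera model, the conversion between image-plane points and camera-coordinate points can be sketched as follows. The intrinsic parameters `fx`, `fy` (focal lengths in pixels) and `cx`, `cy` (principal point) and the function names are illustrative assumptions, not values from the application:

```python
import numpy as np

def image_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project an image-plane point (u, v) with known depth z into
    the camera coordinate system using the pinhole model:
        x = (u - cx) * z / fx,  y = (v - cy) * z / fy,  z = depth."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def camera_to_image(p, fx, fy, cx, cy):
    """Project a camera-coordinate point back onto the image plane."""
    x, y, z = p
    return fx * x / z + cx, fy * y / z + cy
```

The two functions are inverses of each other at a fixed depth, which is how an image point on the image to be verified is mapped to its object point and back.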
  • the image points involved in the embodiments of the present application are points on an image obtained by a camera, that is, points in an image plane coordinate system.
  • as for the object points involved in the embodiments of the present application, usually, there is a conversion relationship between the object points and the image points.
  • the camera can determine an object point corresponding to the image point according to an image point on the captured image and a conversion formula containing the optical parameters of the camera.
  • the corresponding image point of the object point W on the imaging plane is W ′.
  • the object point W can be represented in a camera coordinate system, and the image point W 'can be represented in an image plane coordinate system.
  • the images involved in this embodiment of the present application can be in the form of a picture or a collection of data, such as some parameters (such as pixels, color information, etc.).
  • the multiple involved in the embodiments of the present application refers to two or more.
  • the terminal may be a portable terminal including a dot matrix projector and an infrared camera, such as a mobile phone, a tablet computer, a wearable device (such as a smart watch) with a wireless communication function, and the like.
  • portable terminals include, but are not limited to, portable terminals with various operating systems.
  • the above portable terminal may also be another portable terminal, as long as it can realize the function of projecting light and image acquisition (function of acquiring light projected by the local machine to obtain an image).
  • the above-mentioned terminal may not be a portable terminal, but a desktop computer capable of implementing light projection and image acquisition functions (functions of acquiring light projected by the local machine to obtain images).
  • the terminal supports multiple applications.
  • For example, one or more of the following applications: a camera application, an instant messaging application, and the like.
  • Through instant messaging applications, users can send text, voice, pictures, video files, and other files to other contacts; or users can use instant messaging applications to implement voice and video calls with other contacts.
  • FIG. 2 shows a schematic structural diagram of the mobile phone 100.
  • the mobile phone 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 151, a wireless communication module 152, a dot matrix projector 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, and a subscriber identification module (SIM) card interface 195.
  • the sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the mobile phone 100.
  • the mobile phone 100 may include more or fewer parts than shown, or some parts may be combined, or some parts may be split, or different parts may be arranged.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a digital signal processor (DSP), a neural-network processing unit (NPU), and the like.
  • different processing units may be independent devices or integrated in one or more processors.
  • the controller may be a nerve center and a command center of the mobile phone 100.
  • the controller can generate operation control signals according to the instruction operation code and timing signals, and complete the control of fetching and executing instructions.
  • the processor 110 may further include a memory for storing instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory may store instructions or data that the processor 110 has just used or uses repeatedly. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving the efficiency of the system.
  • the mobile phone 100 implements a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing and is connected to the display 194 and an application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, and the like.
  • the display screen 194 includes a display panel.
  • the display panel can use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
  • the mobile phone 100 may include one or N display screens 194, where N is a positive integer
  • the camera 193 is used to capture still images or videos.
  • the camera 193 may include a lens group and a photosensitive element such as an image sensor, where the lens group includes a plurality of lenses (convex lenses or concave lenses) for collecting the light signals reflected by the object to be captured and transmitting the collected light signals to the image sensor.
  • the image sensor generates an image of an object to be captured according to the light signal. If the mobile phone 100 is currently in a lock screen state, the image sensor sends the generated image to the processor 110, and the processor 110 runs the face recognition algorithm provided in the embodiment of the present application to identify the image. If the mobile phone 100 currently displays a viewfinder interface of a camera application, the display screen 194 displays the image in the viewfinder interface.
  • the dot matrix projector 160 is used to project light.
  • the light projected by the dot matrix projector 160 may be visible light, or infrared light (for example, infrared laser light).
  • the camera 193 may be an infrared camera to collect infrared laser light emitted by a dot matrix projector.
  • the camera 193 shown in FIG. 1 may include 1 to N cameras. If only one camera is included, the visible-light camera used by the camera application for photos and videos and the camera used for face recognition are the same camera. If multiple cameras are included, the camera used by the camera application and the camera used for face recognition may be different cameras; for example, the camera used by the camera application is a visible-light camera, and the camera used for face recognition is an infrared camera.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes various functional applications and data processing of the mobile phone 100 by executing instructions stored in the internal memory 121.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area may store an operating system, at least one application required by a function (such as a sound playback function, an image playback function, etc.) and the like.
  • the storage data area can store data (such as audio data, phone book, etc.) created during the use of the mobile phone 100.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS) device.
  • the distance sensor 180F is used to measure distance.
  • the mobile phone 100 can measure the distance by infrared or laser.
  • the mobile phone 100 may use a distance sensor 180F to measure the distance to achieve fast focusing.
  • the mobile phone 100 may also use the distance sensor 180F to detect whether a person or an object is approaching.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the mobile phone 100 emits infrared light through a light emitting diode.
  • the mobile phone 100 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the mobile phone 100. When insufficient reflected light is detected, the mobile phone 100 may determine that there is no object near the mobile phone 100.
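The sufficient/insufficient reflected-light decision above amounts to a simple threshold test on the photodiode reading. A minimal sketch (the normalized reading and the threshold value are illustrative assumptions, not values from this document):

```python
# Hedged sketch: the text only states that "sufficient reflected light" implies
# a nearby object. The threshold and the normalized reading are hypothetical.

PROXIMITY_THRESHOLD = 0.6  # normalized photodiode reading; illustrative value

def object_nearby(photodiode_reading: float,
                  threshold: float = PROXIMITY_THRESHOLD) -> bool:
    """Return True when enough emitted IR light is reflected back to the photodiode."""
    return photodiode_reading >= threshold

# Strong reflection -> object near; weak reflection -> nothing near.
assert object_nearby(0.9)
assert not object_nearby(0.1)
```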
  • the mobile phone 100 can use the proximity light sensor 180G to detect that the user is holding the mobile phone 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in leather-case mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • the mobile phone 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • Ambient light sensor 180L can also be used to automatically adjust white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the mobile phone 100 is in a pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the mobile phone 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, access application lock, fingerprint photographing, fingerprint answering calls, etc.
  • the temperature sensor 180J is used to detect the temperature.
  • the mobile phone 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J.
  • the touch sensor 180K is also called “touch panel”.
  • the touch sensor 180K may be disposed on the display screen 194; the touch sensor 180K and the display screen 194 together form what is also referred to as a "touch screen".
  • the touch sensor 180K is used to detect a touch operation acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • a visual output related to the touch operation may be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the mobile phone 100 at a position different from that of the display screen 194.
  • the mobile phone 100 can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor.
  • the mobile phone 100 can receive the input of the key 190 and generate a key signal input related to the user settings and function control of the mobile phone 100.
  • the mobile phone 100 may use the motor 191 to generate a vibration prompt (such as an incoming call vibration prompt).
  • the indicator 192 in the mobile phone 100 can be an indicator light, which can be used to indicate the charging status, power change, and can also be used to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 195 in the mobile phone 100 is used to connect a SIM card. The SIM card can be inserted and removed from the SIM card interface 195 to achieve contact and separation with the mobile phone 100.
  • FIG. 3 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • the mobile phone in FIG. 3 uses the mobile phone 100 shown in FIG. 2 as an example.
  • When the distance sensor 180F (not shown in FIG. 3) on the mobile phone 100 detects that an object (i.e., a person) is approaching, the dot matrix projector 160 emits light.
  • the object reflects the emitted light of the dot matrix projector 160, and the reflected light is collected by the camera 193 to obtain an image to be verified (that is, a face image).
  • the processor 110 in the mobile phone 100 runs the code of the face recognition algorithm provided in the embodiment of the present application (the code may be stored in the internal memory 121 or in the storage space of the camera 193 itself) and judges whether the image to be verified is obtained by collecting the light reflected by the object from the light emitted by the local dot matrix projector 160; if it is, the image to be verified is further matched with the face image stored in advance; if not, a prompt message is output to remind the user that the image to be verified is not obtained by collecting the light reflected by the object from the light emitted by the local dot matrix projector 160, which poses a security risk.
  • the face recognition method provided in the embodiment of the present application has a new step compared with the aforementioned prior-art face recognition method.
  • This step is to determine whether an image to be verified (for example, a face image) is obtained by collecting the light reflected by the object from the light emitted by the local dot matrix projector.
  • the newly added step may be performed before comparing the image to be verified with the face image stored in advance, or after that comparison.
  • For example, the processor 110 in the mobile phone 100 executes the code of the face recognition algorithm provided in the embodiment of the present application and first compares whether the image to be verified (such as a face image) is consistent with the face image stored in advance; if it is, the processor further judges whether the image to be verified is obtained by collecting the light reflected by the object from the light emitted by the dot matrix projector 160.
  • When the mobile phone 100 provided in the embodiment of the present application performs face recognition, it can thus determine whether the image to be verified (that is, the face image) is obtained by collecting the light reflected by the object from the light emitted by the local dot matrix projector 160. In this way, a hacker can be prevented from passing verification by inputting another face image (one not obtained by collecting such reflected light) into the mobile phone.
  • Therefore, this face recognition method has high security.
  • the face recognition algorithm of the embodiment of the present application is described below with reference to the components related to it.
  • For the components in FIG. 1, refer to the related description of FIG. 1.
  • In FIG. 4, the processor 110 integrating the application processor 110-1 is taken as an example.
  • the distance sensor 180F in the mobile phone 100 can detect whether an object approaches.
  • When the distance sensor 180F detects that an object (such as a human face) is approaching, it sends an instruction to the application processor 110-1 indicating that an object is approaching.
  • the application processor 110-1 activates the dot matrix projector 160 to project light.
  • the application processor 110-1 triggers the activation of the camera 193 to collect the light reflected by the object to obtain an image to be verified (such as a face image).
  • Alternatively, when the distance sensor 180F detects an object approaching, it can also generate a command for triggering the dot matrix projector 160 and the camera 193, and send the command to the application processor 110-1 to notify the application processor 110-1 to activate the dot matrix projector 160 and the camera 193.
  • the dot matrix projector 160 and the camera 193 in the mobile phone 100 may also be turned on periodically or always on.
  • the dot matrix projector 160 may periodically project light, and the camera 193 periodically collects images to be verified.
  • After the camera 193 captures the image to be verified, it sends the image to be verified to the application processor 110-1.
  • the application processor 110-1 runs the code of the face recognition algorithm stored in the internal memory 121 (not shown in FIG. 4) to recognize the image to be verified. If the identification is passed, the display screen 194 is unlocked; for example, the display may switch from the lock screen to the main interface of the mobile phone 100.
  • the application processor 110-1 runs the code of the face recognition algorithm provided in the embodiment of the present application, and the identification of the image to be verified may include two processes.
  • The first process: the application processor 110-1 determines whether the image to be verified is obtained by collecting the light reflected by the object from the light emitted by the local dot matrix projector 160.
  • The second process: the application processor 110-1 determines whether the image to be verified is consistent with the face image stored in advance.
  • the camera 193 collects reflected light, which is the light emitted by the dot matrix projector 160 and reflected by an object (for example, a human face). Therefore, in the first process described above, the mobile phone 100 determines whether the object points on the object lie on the light emitted by the dot matrix projector 160. To do this, the mobile phone 100 may first determine the position of the light emitted by the dot matrix projector 160 in space.
  • the following describes the process by which the mobile phone 100 determines the position of the emitted light of the dot matrix projector 160 in space.
  • the process in which the mobile phone 100 determines the position of the emitted light of the dot matrix projector 160 in space can be implemented by virtual optical center calibration.
  • That is, the virtual optical center of the dot matrix projector can be calibrated: the mobile phone 100 determines the position coordinates of the virtual optical center of the dot matrix projector.
  • FIG. 5 is a schematic diagram of a virtual optical center calibration process of a dot matrix projector provided by an embodiment of the present application. As shown in FIG. 5, the process includes:
  • Step 1: The mobile phone 100 obtains a first reference picture and a second reference picture; the first reference picture and the second reference picture include the same photographic subject.
  • During calibration, the designer may set a calibration plate at Znear, a first distance (for example, 20 cm) from the dot matrix projector 160.
  • the dot matrix projector 160 on the mobile phone 100 projects light onto the calibration plate.
  • the light reflected by the calibration plate is collected by the camera 193 to obtain a first reference image.
  • the designer then moves the calibration plate to Zfar, a second distance (for example, 70 cm) from the dot matrix projector 160.
  • the light reflected by the calibration plate located at Zfar is collected again by the camera 193 to obtain a second reference picture. It can be seen that the first reference picture and the second reference picture are images obtained by photographing the same subject at different positions.
  • the first reference picture and the second reference picture are both presented on the imaging plane.
  • the first reference picture and the second reference picture are not distinguished on the imaging plane in the figure, but the first reference picture and the second reference picture are actually two different pictures formed on the imaging plane.
  • the calibration plate may be a white board, and the light projected by the dot matrix projector 160 is visible light or infrared light.
  • the dot matrix projector 160 can project infrared light in the form of structured light (that is, the infrared light is projected in a specific shape, so the light projected onto the surface of the object has a specific shape, such as a straight line or speckles); alternatively, the calibration plate may be a plate that has undergone special treatment.
  • In that case, the calibration plate is provided with special marks for identifying different positions; after the mobile phone 100 obtains the first reference picture and the second reference picture, the special marks on the first reference picture and the special marks on the second reference picture can be determined.
  • FIG. 7 shows schematic diagrams of a first reference picture and a second reference picture provided by an embodiment of the present application.
  • the first reference picture and the second reference picture are images obtained by the camera 193 on the calibration plates located at different positions. Therefore, the display position of each object (such as a special mark) on the first reference picture and the second reference picture is different. As shown in FIG. 7, the dark areas on the first reference picture and the second reference picture are special marks.
  • Step 2: The mobile phone 100 determines the position coordinates of two image points Q1 and P1 on the first reference picture.
  • the first reference picture and the second reference picture are pictures of the same photographic object at different positions (the photographic object may be the calibration plate, or the speckles projected onto it by the dot matrix projector).
  • the mobile phone 100 may take any two image points on the first reference picture as Q1 and P1. Alternatively, if the dot matrix projector projects speckles, the two image points Q1 and P1 may be the respective center points of two speckles on the first reference picture.
  • the coordinates of the image points Q1 and P1 on the first reference picture determined by the mobile phone 100 are coordinates in the image plane coordinate system o-uv.
  • Step 3: The mobile phone 100 determines Q2 and P2 on the second reference picture according to Q1 and P1 on the first reference picture, where Q2 corresponds to Q1 and P2 corresponds to P1.
  • the mobile phone 100 may determine a point corresponding to the one image point on the second reference image according to one image point on the first reference image.
  • the mobile phone 100 may determine Q2 and P2 on the second reference picture according to a prior-art matching algorithm (such as a similarity matching algorithm); the matching algorithm is not described in detail in this application.
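The text names only "a similarity matching algorithm" without specifying it. One common choice for locating the point Q2 that corresponds to Q1 is to slide a small patch taken around Q1 over the second picture and keep the position with the smallest sum of squared differences (SSD); the sketch below is an illustrative reconstruction, not the patent's stated algorithm:

```python
# Illustrative SSD patch matching: find the pixel in img2 whose neighborhood
# best matches the neighborhood of (cy, cx) in img1. Images are plain lists
# of lists of intensities; SSD is one of several possible similarity measures.

def extract_patch(img, cy, cx, r):
    """Cut the (2r+1)x(2r+1) patch centered at (cy, cx)."""
    return [row[cx - r:cx + r + 1] for row in img[cy - r:cy + r + 1]]

def ssd(a, b):
    """Sum of squared differences between two equally sized patches."""
    return sum((pa - pb) ** 2 for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def match_point(img1, img2, cy, cx, r=1):
    """Return the position in img2 best matching the patch around (cy, cx) in img1."""
    template = extract_patch(img1, cy, cx, r)
    best, best_pos = float("inf"), None
    for y in range(r, len(img2) - r):
        for x in range(r, len(img2[0]) - r):
            score = ssd(template, extract_patch(img2, y, x, r))
            if score < best:
                best, best_pos = score, (y, x)
    return best_pos

# Tiny synthetic example: a bright "speckle" shifted one pixel to the right.
img1 = [[0, 0, 0, 0, 0],
        [0, 9, 0, 0, 0],
        [0, 0, 0, 0, 0]]
img2 = [[0, 0, 0, 0, 0],
        [0, 0, 9, 0, 0],
        [0, 0, 0, 0, 0]]
assert match_point(img1, img2, 1, 1) == (1, 2)
```

In practice a normalized measure (e.g. normalized cross-correlation) is preferred over raw SSD because it tolerates brightness differences between the two reference pictures.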
  • After the mobile phone 100 determines Q1 and P1 on the first reference picture, it can determine Q2 corresponding to Q1 and P2 corresponding to P1 on the second reference picture.
  • the first reference picture and the second reference picture are not distinguished on the imaging plane.
  • the image points on the first reference picture and the second reference picture are in the image plane coordinate system. Therefore, after the mobile phone 100 determines Q2 and P2 on the second reference diagram according to Q1 and P1 on the first reference diagram, Q1, P1, Q2, and P2 can be marked in the image plane coordinate system at the same time.
  • Step 4: The mobile phone 100 determines the object point Q3 according to the image point Q1 on the first reference picture and the object point P3 according to the image point P1; the mobile phone 100 determines the object point Q4 according to the image point Q2 on the second reference picture and the object point P4 according to the image point P2.
  • One emitted ray of the dot matrix projector is projected to the object point Q3 on the calibration plate at Znear.
  • The object point Q3 reflects this light, and the reflected light forms an image point Q1 on the imaging plane.
  • The same ray is projected to the object point Q4 on the calibration plate at Zfar.
  • The object point Q4 reflects this light, and the reflected light forms an image point Q2 on the imaging plane.
  • Another emitted ray of the dot matrix projector is projected to the object point P3 on the calibration plate at Znear.
  • The object point P3 reflects this light, and the reflected light forms an image point P1 on the imaging plane.
  • The same other ray is projected onto the object point P4 on the calibration plate at Zfar.
  • The object point P4 reflects this light, and the reflected light forms an image point P2 on the imaging plane. Therefore, after the mobile phone 100 determines the coordinates of the image points Q1, P1, Q2, and P2 in the first to third steps, it can determine the corresponding object point Q3 according to Q1, the object point P3 according to P1, the object point Q4 according to Q2, and the object point P4 according to P2.
  • the mobile phone 100 may calculate the object point coordinates according to the image point coordinates and a preset algorithm, and there may be multiple preset algorithms, such as an inverse perspective imaging algorithm or other algorithms, which are not described in detail in the embodiments of the present application. The following takes the inverse perspective imaging algorithm as an example.
  • the mobile phone 100 may determine the object point Q3 corresponding to Q1 according to formula (1).
  • Q1 is an image point on the first reference picture, that is, a point in the image plane coordinate system, so the coordinates of Q1 are (u1, v1).
  • Z1 is the distance from the object point Q3 to the imaging plane, that is, 20 cm, and P is the conversion matrix from the image plane coordinate system to the camera coordinate system:
  • f is the focal length of the camera.
  • Since the coordinates (u1, v1) of Q1 and the focal length f are known, the coordinates (x1, y1, z1) of the object point Q3 can be determined by formula (2); these coordinates of Q3 are coordinates in the camera coordinate system.
  • the mobile phone 100 can determine the object point P3 corresponding to P1, the object point Q4 corresponding to Q2, and the object point P4 corresponding to P2 through the above formulas, which is not described in detail here.
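As an illustration of the inverse-perspective step above, here is a minimal sketch under a simple pinhole-camera assumption with the principal point at the image-plane origin. Formulas (1) and (2) are not reproduced in this text, so this is an illustrative reconstruction, not the patent's exact algorithm:

```python
# Hedged sketch of inverse perspective imaging: under a pinhole model with the
# principal point at the image-plane origin, an image point (u, v) at a known
# depth Z back-projects to camera coordinates (x, y, z) = (uZ/f, vZ/f, Z).

def back_project(u: float, v: float, depth: float, f: float):
    """Map an image-plane point (u, v) to a camera-frame object point at the given depth."""
    x = u * depth / f
    y = v * depth / f
    return (x, y, depth)

# Example: image point (2 mm, 1 mm), focal length 4 mm, object 200 mm away.
assert back_project(2.0, 1.0, 200.0, 4.0) == (100.0, 50.0, 200.0)
```

Applying this once with Z = Znear gives Q3 from Q1, and once with Z = Zfar gives Q4 from Q2 (likewise P3 from P1 and P4 from P2).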
  • the mobile phone 100 determines a straight line according to Q3 and Q4, determines a straight line according to P3 and P4, and determines the intersection Q of the two straight lines, that is, the virtual optical center of the dot matrix projector 160.
  • the virtual optical center of the calibrated dot matrix projector 160 is also a point in the camera coordinate system.
  • two image points on the first reference picture and the second reference picture are taken as examples.
  • the mobile phone 100 can also take more image points on the first reference picture.
  • the mobile phone 100 determines 1000 image points on the first reference picture.
  • the mobile phone 100 determines, on the second reference picture, the 1000 image points corresponding to the 1000 image points on the first reference picture.
  • the mobile phone 100 determines 1000 object points corresponding to the 1000 image points (that is, 1000 object points on the calibration plate at Znear) according to the 1000 image points on the first reference picture.
  • the mobile phone 100 determines 1000 object points corresponding to the 1000 image points (that is, 1000 object points on the calibration plate at Zfar) according to the 1000 image points on the second reference picture.
  • the mobile phone 100 connects the 1000 object points on the calibration plate at Znear obtained in the fourth step with the 1000 object points on the calibration plate at Zfar to obtain 1000 straight lines. The intersection point of the 1000 straight lines is the virtual optical center of the dot matrix projector.
  • FIG. 8 is a schematic diagram of 1000 rays projected by the dot matrix projector 160. The intersection of these 1000 rays is the virtual optical center of the dot matrix projector 160.
  • If the intersection point of the 1000 rays is a single point, that intersection point is the virtual optical center of the dot matrix projector 160. If the 1000 rays have multiple intersection points (for example, two rays intersect at one point, and another two rays intersect at another), the mobile phone 100 may determine one of the intersection points (for example, the point at the center of the multiple intersection points) as the virtual optical center of the dot matrix projector 160; alternatively, the mobile phone 100 may determine the virtual optical center from the multiple intersection points in other ways, which is not limited in the embodiments of the present application.
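When the rays do not meet in a single exact point, one standard way to pick a single center (one possible realization of "the point at the center of the multiple intersection points" above; the patent does not prescribe this method) is the least-squares point, i.e. the 3-D point minimizing its summed squared distance to all rays:

```python
# Illustrative least-squares intersection of 3-D lines. Each line is given as
# (point, direction). We accumulate M = sum(I - d d^T) and b = sum((I - d d^T) a)
# over all lines and solve M x = b; x is the point closest to all lines.

def lsq_intersection(lines):
    """lines: iterable of (point, direction) pairs; returns the least-squares intersection."""
    M = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for a, d in lines:
        n = sum(c * c for c in d) ** 0.5
        d = [c / n for c in d]                      # normalize the direction
        for i in range(3):
            for j in range(3):
                proj = (1.0 if i == j else 0.0) - d[i] * d[j]
                M[i][j] += proj
                b[i] += proj * a[j]

    def det(m):                                     # 3x3 determinant
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    D = det(M)
    x = []
    for k in range(3):                              # Cramer's rule, column k
        Mk = [row[:] for row in M]
        for i in range(3):
            Mk[i][k] = b[i]
        x.append(det(Mk) / D)
    return tuple(x)

# Two rays that both pass through the origin: one along x, one along y.
rays = [((1.0, 0.0, 0.0), (1.0, 0.0, 0.0)),
        ((0.0, 2.0, 0.0), (0.0, 1.0, 0.0))]
center = lsq_intersection(rays)
assert all(abs(c) < 1e-9 for c in center)
```

With the 1000 calibrated Znear-to-Zfar lines as input, the returned point would serve as the virtual optical center estimate.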
  • FIG. 9 is a schematic structural diagram of a dot matrix projector 160 according to an embodiment of the present application.
  • a plurality of light emitting points are provided on the dot matrix projector 160 (FIG. 9 takes 12 as an example).
  • the process of calibrating the virtual light center shown in FIG. 5 is a process of determining a virtual light center of one light emitting point (for example, light emitting point 1) among a plurality of light emitting points. For other light emitting points, a similar process can be used to calibrate the virtual light center.
  • a plurality of emitted light rays of the light emitting point will be obtained.
  • After the mobile phone 100 completes the virtual optical center calibration of a light emitting point of the dot matrix projector 160 and obtains the equations of the rays emitted by that light emitting point (such as the line through Q3 and Q4), the mobile phone 100 can store the virtual optical center coordinates of the light emitting point and the equations of the emitted rays for later use.
  • the determined virtual optical center coordinates and the equations of the emitted light are in the camera coordinate system.
  • the mobile phone 100 may not store the equation of the emitted light of the dot matrix projector 160, but store the virtual optical center coordinates of the light emitting points and the coordinates of the calibrated object points.
  • That is, the mobile phone 100 stores the virtual optical center of each light emitting point together with object points calibrated at a set distance. For example, as described in the previous embodiment, when the light from the dot matrix projector is received at a set distance (such as Znear), light spots or speckles can be collected, and the coordinates of these spots or speckles can be used as the object point coordinates in Table 1.
  • For example, the mobile phone 100 can store the virtual optical center coordinates of light emitting point 1 together with Q3 and P3 (or Q4 and P4).
  • Table 1 shows the mapping relationship between the virtual optical center coordinates of a light emitting point and the object point coordinates provided by the embodiment of the present application.
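The mapping of Table 1 can be sketched as a simple lookup structure: each emitter maps to its virtual optical center and calibrated object points, from which the emitted rays are reconstructed. All coordinates below are placeholder values, not calibration data from this document:

```python
# Illustrative in-memory version of the Table 1 mapping. The emitter ids,
# optical-center coordinates, and object points are hypothetical placeholders.

CALIBRATION_TABLE = {
    # emitter id: (virtual optical center, [calibrated object points at Znear])
    1: ((0.0, 0.0, 0.0), [(1.0, 1.0, 20.0), (-1.0, 2.0, 20.0)]),
    2: ((0.1, 0.0, 0.0), [(2.0, 1.0, 20.0), (-2.0, 1.0, 20.0)]),
}

def rays_for_emitter(emitter_id):
    """Reconstruct one emitter's rays as (origin, direction) pairs from the table."""
    center, points = CALIBRATION_TABLE[emitter_id]
    return [(center, tuple(p - c for p, c in zip(pt, center))) for pt in points]

# The first ray of emitter 1 starts at its optical center and points at its
# first calibrated object point.
assert rays_for_emitter(1)[0] == ((0.0, 0.0, 0.0), (1.0, 1.0, 20.0))
```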
  • In actual use, some or all of the multiple light emitting points on the dot matrix projector 160 may be lit. It should be noted that which light emitting points the mobile phone 100 lights may be set when the mobile phone 100 leaves the factory; different mobile phones then light different points, which helps improve the security of each mobile phone. Alternatively, at a certain time (such as the first time) when the distance sensor 180F detects that an object is approaching, the dot matrix projector 160 lights the first 4 light emitting points; the light emitted by these 4 light emitting points is projected onto the object, and the light reflected by the object is collected by the camera 193 to obtain an image to be verified.
  • The next time, the dot matrix projector 160 may light the last 4 light emitting points, and the light emitted by the last 4 light emitting points is projected onto the object to obtain the image to be verified. It can be seen that, when the mobile phone 100 is in use, the light emitting points that are lit each time may be different, which helps to improve security.
  • Suppose the mobile phone 100 lights the first 4 light emitting points this time; the mobile phone 100 stores the optical center and object point coordinates of each of the first 4 light emitting points. Therefore, after the mobile phone 100 obtains the image to be verified, it determines whether the image to be verified is obtained by collecting the light reflected by the object from the rays emitted by the first 4 light emitting points of the dot matrix projector 160.
  • FIG. 10 is a schematic flowchart of a face recognition method according to an embodiment of the present application. As shown in FIG. 10, the process of the method includes:
  • the camera 193 sends the image to be verified to the application processor 110-1 in the mobile phone 100.
  • the application processor 110-1 runs the code of the face recognition algorithm provided in the embodiment of the present application to execute the following process S1002-S1005.
  • S1002 Determine an image point E 'on the image to be verified.
  • the application processor 110-1 may determine any one image point on the image to be verified, that is, E '.
  • the image point E ' is a point of the image plane coordinate system.
  • S1003: Determine whether the object point E corresponding to the image point E' is on a ray emitted from the virtual optical center of the dot matrix projector. If yes, execute S1004; if not, execute S1005.
  • the application processor 110-1 may determine the object point E corresponding to the image point E 'according to the foregoing formula (1), and obtain that the object point E is a point in the camera coordinate system.
  • the application processor 110-1 then determines whether the object point E satisfies the equation of a ray emitted by a light emitting point.
  • Specifically, the application processor 110-1 may determine multiple straight lines according to the coordinates of the virtual optical center and the multiple object points corresponding to that virtual optical center (since the virtual optical center and the object points are all points in the camera coordinate system, the straight lines obtained are also straight lines in the camera coordinate system).
  • the application processor 110-1 determines whether the object point E is on one of the multiple straight lines. If yes, it means that the image to be verified is obtained by collecting the light reflected by the object from the light emitted by the dot matrix projector 160. If not, it means that the image to be verified is not obtained by collecting the light reflected by the object from the light emitted by the local dot matrix projector 160.
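The "on a straight line" test of S1003 can be sketched as a point-to-line distance check. Because a back-projected point will never lie exactly on a ray with real measurements, the sketch uses a small tolerance; the tolerance value is an assumption, not from this document:

```python
# Illustrative S1003 check: an object point E counts as "on" an emitted ray if
# its perpendicular distance to the ray's line is within a tolerance.

def dist_point_to_ray(p, origin, direction):
    """Perpendicular distance from point p to the line through origin along direction."""
    d = [b - a for a, b in zip(origin, p)]
    n2 = sum(c * c for c in direction)
    t = sum(a * b for a, b in zip(d, direction)) / n2   # projection parameter
    closest = [o + t * c for o, c in zip(origin, direction)]
    return sum((a - b) ** 2 for a, b in zip(p, closest)) ** 0.5

def on_emitted_ray(p, rays, tol=1e-3):
    """True if p lies (within tol) on any of the (origin, direction) rays."""
    return any(dist_point_to_ray(p, o, d) <= tol for o, d in rays)

rays = [((0.0, 0.0, 0.0), (0.0, 0.0, 1.0))]   # one ray along the z-axis
assert on_emitted_ray((0.0, 0.0, 5.0), rays)       # on the ray -> genuine
assert not on_emitted_ray((1.0, 0.0, 5.0), rays)   # off the ray -> suspect
```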
  • the mobile phone 100 can set an identifier for each light emitting point.
  • the application processor 110-1 determines the identifiers of the lit light emitting points (in mode one, which light emitting points are lit is determined by the application processor 110-1, so the application processor 110-1 knows the identifiers of the lit points; in mode two, which light emitting points are lit is determined by the dot matrix projector 160, which sends the identifiers of the lit points to the application processor 110-1).
  • the application processor 110-1 determines the virtual optical center coordinates of the lit light emitting point according to Table 1. If there are multiple lit points, the application processor 110-1 may determine the virtual optical center and the corresponding object points of each lit point. Taking light emitting point 1 as an example, the application processor 110-1 determines from Table 1 that the virtual optical center of light emitting point 1 is A, together with the multiple object points corresponding to the virtual optical center A. The application processor 110-1 may then determine multiple straight lines according to the coordinates of the virtual optical center A and the object points corresponding to it.
  • the virtual optical centers of the luminous points 1-4 in Table 1 and the object points corresponding to each virtual optical center are in the camera coordinate system.
  • If needed, the mobile phone 100 can convert the virtual optical centers and the object points corresponding to each virtual optical center into the world coordinate system, which is not limited in the embodiment of the present application. If the virtual optical centers in Table 1 and the object points corresponding to each virtual optical center are points in the world coordinate system, then in the subsequent use of the mobile phone 100, after the mobile phone 100 captures the image to be verified, the image point E' on the image to be verified is also converted into the world coordinate system.
  • In other words, the application processor 110-1 converts the image point E' on the image to be verified into the same coordinate system as the virtual optical centers and object points in Table 1; equivalently, the application processor 110-1 converts the image point E' into the same coordinate system as the rays emitted by the light emitting points.
  • During calibration, the image sensor in the camera 193 captures the first reference picture and the second reference picture. Because the image sensor is affected by temperature changes when acquiring images (the temperature drift phenomenon), the virtual optical center coordinates and object point coordinates obtained during the virtual optical center calibration of the mobile phone 100 contain errors. Therefore, before using a virtual optical center and the object points corresponding to it in Table 1, the mobile phone 100 may perform temperature drift compensation on them to obtain compensated virtual optical center and object point coordinates, and then execute the process shown in FIG. 10 or FIG. 12. The temperature drift compensation may use any of multiple methods, such as thermal compensation or athermal component compensation, which are not limited in the embodiments of the present application.
• S1004: Determine whether the image to be verified is consistent with a pre-stored face image; if they are consistent, unlock the device, and if not, do not respond to the image to be verified.
• Unlocking the device may mean that the mobile phone 100 lights up from a black screen and displays the main interface; if the mobile phone 100 is locked with the screen lit, unlocking means that the mobile phone 100 switches from the lock screen interface to the main interface.
• S1005: Output a prompt message, used to prompt the user that the image to be verified was not obtained by collecting the light that the dot matrix projector 160 emitted and the object reflected, which poses a security risk.
  • the prompt information may be text information or icon information displayed on the display screen 194, or voice information played by a speaker.
  • the mobile phone 100 may automatically delete the image to be verified.
• In the foregoing description, one image point E' on the image to be verified is taken as an example.
• In practice, the application processor 110-1 may select N image points on the image to be verified (for example, the N image points determined in S1002) and perform S1003 once for each of the N image points.
• When the application processor 110-1 determines that, among the N object points corresponding to the N image points, the number of object points lying on the emitted light of the light emitting points is greater than a preset number, the application processor 110-1 executes S1004.
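The gating logic described above can be sketched as follows. The distance tolerance `tol` and the point-to-line distance helper are assumptions for illustration; the embodiment does not prescribe them:

```python
import math

def dist_point_to_line(origin, direction, p):
    """Perpendicular distance from 3-D point p to the infinite line
    through `origin` along `direction` (plain tuples/lists)."""
    n = math.sqrt(sum(c * c for c in direction))
    u = [c / n for c in direction]
    v = [pi - oi for pi, oi in zip(p, origin)]
    t = sum(a * b for a, b in zip(v, u))
    closest = [oi + t * ui for oi, ui in zip(origin, u)]
    return math.sqrt(sum((pi - ci) ** 2 for pi, ci in zip(p, closest)))

def enough_points_on_rays(object_points, rays, preset_number, tol=1e-3):
    """Sketch of the S1003/S1004 gate: count how many of the N object
    points lie (within tol) on at least one emitted ray, and proceed
    only if that count exceeds the preset number."""
    hits = sum(
        1 for p in object_points
        if any(dist_point_to_line(o, d, p) <= tol for o, d in rays)
    )
    return hits > preset_number
```

The tolerance absorbs measurement noise in the recovered object point coordinates; a real device would choose it from the calibration residuals.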
• The emitted light of the dot matrix projector 160 may pass through optical elements such as a diffractive optical element (DOE), and the optical element diffracts the emitted light to form a diffracted light spot.
  • FIG. 11 is a diffracted light spot formed after being diffracted by an optical element.
  • the diffracted light spot is projected on the human face, and the camera 193 collects the reflected light of the diffracted light spot on the object to obtain an image to be verified.
• The 0th order is at the center of the diffracted light spot, with the 1st order, 2nd order, and so on from the center toward the edge. Accordingly, on the image to be verified captured by the mobile phone 100, the center position corresponds to level 0, and positions from the center toward the edge correspond to level 1, level 2, and so on. Therefore, when selecting an image point on the image to be verified, the mobile phone 100 may first take an image point at the center position, that is, level 0, and then execute S1003.
• After that, the mobile phone 100 may select other level-0 image points and execute S1003 again. If, among the multiple object points corresponding to the multiple level-0 image points, the number of object points on the emitted light of the dot matrix projector 160 is greater than a preset number, the mobile phone 100 determines that the image to be verified was obtained by collecting the light that the dot matrix projector 160 emitted and the object reflected. Alternatively, the mobile phone 100 may continue with image points on level 1 and then execute S1003.
• When the mobile phone 100 determines that, among the multiple object points corresponding to the multiple image points on levels 0-1, the number of object points on the light emitted by the dot matrix projector 160 is greater than a preset number, it determines that the image to be verified was obtained by collecting the light that the dot matrix projector 160 emitted and the object reflected.
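The level-by-level selection described above can be sketched by grouping image points by radial distance from the pattern center; `level_width` is a hypothetical per-level radius step, not a value given by the embodiment:

```python
import math

def group_points_by_level(points, center, level_width):
    """Group 2-D image points into diffraction levels 0, 1, 2, ... by
    their radial distance from the pattern centre, so that level-0
    points can be checked first and outer levels only as needed."""
    levels = {}
    for p in points:
        r = math.dist(p, center)
        levels.setdefault(int(r // level_width), []).append(p)
    return [levels[k] for k in sorted(levels)]
```

Checking the brighter, better-localized central orders first lets the verifier stop early once the preset count of on-ray points is reached.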
• In the foregoing, the application processor 110-1 first determines whether the image to be verified was obtained by collecting the light reflected by the object from the emitted light of the dot matrix projector 160, and if so, then determines whether the image to be verified matches the pre-stored face image.
• The execution order of these processes can be adjusted: the application processor 110-1 may first determine whether the image to be verified matches the pre-stored face image, and only when it matches, determine whether the image was obtained by collecting the light reflected by the object from the light emitted by the dot matrix projector 160.
  • the process is as follows:
  • the camera sends the image to be verified to the application processor 110-1 in the mobile phone 100.
• The application processor 110-1 runs the code of the image correction algorithm provided in the embodiments of the present application to execute the following S1202-S1205 process.
  • S1202 Determine whether the image to be verified is consistent with the pre-stored face image. If they are consistent, execute S1203. If they are not consistent, do not respond to the image to be verified.
• S1203: Determine an image point E' on the image to be verified.
• S1204: Determine whether the object point E corresponding to the image point E' is on the light emitted from the virtual optical center of the dot matrix projector. If yes, unlock the device; if not, execute S1205.
• S1205: Output a prompt message, used to prompt the user that the image to be verified was not obtained by collecting the light that the dot matrix projector 160 emitted and the object reflected, which poses a security risk.
  • the scenario of unlocking the device through face recognition is taken as an example.
• The face recognition method provided in the embodiments of the present application can also be applied to other scenarios, such as face-scan payment (for example, the mobile phone 100 displays a payment interface and executes the payment process when the image to be verified passes verification, and does not execute it when verification fails), face-based clock-in, and permission or password settings (for example, the mobile phone 100 displays a permission or password setting interface and performs the permission or password setting operation when the image to be verified passes verification, and does not perform it when verification fails), and other similar scenarios.
• For the trigger condition for triggering face recognition (such as a proximity sensor detecting an approaching object, or other conditions), the process of acquiring the image to be verified (that is, the face image), the process of matching the image with the pre-stored face image, and the processing after verification of the image succeeds or fails, reference may be made to related implementations in the prior art; the specific implementation is not limited here.
• The face recognition method provided in the embodiments of the present application can determine whether the image to be verified (that is, the face image) was obtained by collecting the light that the local dot matrix projector 160 emitted and the object reflected. In this way, a hacker can be prevented from passing verification by inputting another face image (an image not obtained by collecting the light that this device emitted and the object reflected) into the mobile phone. Such a face recognition method offers high security. Further, the method is implemented based on basic optical imaging principles; the scheme is simple and the amount of computation is small, which helps reduce computation and improve efficiency.
  • the face recognition method provided in the embodiment of the present application may be a default function of the mobile phone 100, that is, after the user activates the mobile phone 100, the mobile phone 100 automatically performs face recognition using the face recognition method provided in the embodiment of the present application.
• Alternatively, the face recognition mode may be set by the user; that is, when using the mobile phone 100, the user sets the face recognition mode of the mobile phone 100 to the mode provided in the embodiments of the present application.
• Alternatively, some steps of the face recognition method are set by the user and others are performed by the mobile phone 100 by default. For example, taking FIG. 10 as an example, the mobile phone 100 executes only S1004 by default and does not execute S1001-S1003; after the user activates the function of detecting whether a face image was generated by light projected by this device, the mobile phone 100 performs S1001-S1005 when recognizing a face.
  • FIG. 13 is a schematic diagram of an interface of a mobile phone setting application according to an embodiment of the present application.
  • the mobile phone 100 displays a setting interface 1301 of a setting application, and the setting interface 1301 includes a password setting option 1302.
  • the display interface of the mobile phone 100 is as shown in FIG. 13 (b).
  • the mobile phone 100 displays an interface 1303 of the password setting option 1302, and the interface 1303 includes a graphic password, a digital password, a fingerprint password, and a face password.
  • the face password option 1304 is activated, the interface of the mobile phone 100 is as shown in (c) of FIG. 13.
• The mobile phone 100 displays a face password setting interface 1305, and the interface 1305 includes two options: "face matching only" and "local light emission determination".
• When the user activates the control 1306 corresponding to "face matching only", the mobile phone 100 only compares the face image with the stored face image: if they match, the verification passes, and if not, it fails. When the user activates the control 1307 corresponding to "local light emission determination", the mobile phone 100 executes the process shown in FIG. 10 or FIG. 12.
  • the method provided by the embodiment of the present application is described from the perspective of the terminal (the mobile phone 100) as the execution subject.
• The terminal may include a hardware structure and/or a software module, and implement the foregoing functions in the form of a hardware structure, a software module, or a combination of both. Whether a given function is executed by a hardware structure, a software module, or both depends on the specific application and design constraints of the technical solution.
  • FIG. 14 illustrates a terminal 1400 provided by the present application.
  • the terminal 1400 may include a dot matrix projector 1401, a camera 1402, one or more processors 1403, a memory 1404, and one or more computer programs 1405.
• These components are connected through the communication bus 1406.
  • the dot matrix projector 1401 is used to project light (all or part of the light emitting points on the dot matrix projector 1401 are used to project light), and the camera 1402 is used to collect images (the image to be verified, the first reference picture or the second reference picture);
  • the one or more computer programs 1405 are stored in the memory 1404 and configured to be executed by the one or more processors 1403.
• The one or more computer programs 1405 include instructions, and the instructions may be used to execute all or part of the steps shown in FIG. 5 and each step in the corresponding embodiment; or to execute S1002-S1005 shown in FIG. 10 and each step in the corresponding embodiment; or to execute S1202-S1205 shown in FIG. 12 and each step in the corresponding embodiment.
  • An embodiment of the present invention further provides a computer storage medium.
  • the storage medium may include a memory, and the memory may store a program.
• When the program is executed, the terminal is caused to perform all or part of the steps performed by the terminal as described in the method embodiments shown in FIG. 5, FIG. 10, and FIG. 12.
• An embodiment of the present invention further provides a computer program product that, when run on a terminal, causes the terminal to execute all or part of the steps performed by the terminal as described in the method embodiments shown in FIG. 5, FIG. 10, and FIG. 12.
  • Computer-readable media includes computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
• A storage medium may be any available medium that can be accessed by a computer.
• By way of example and not limitation, computer-readable media may include RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disk storage, magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium.
• Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital video disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the protection scope of computer-readable media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Telephone Function (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Projection Apparatus (AREA)

Abstract

Disclosed are a face recognition method, an optical center calibration method and a terminal. The method is applicable to the terminal and comprises: a dot matrix projector on the terminal emits light, and a camera on the terminal collects an image to be verified; the terminal determines N first image points on the image to be verified, N being an integer greater than or equal to 1; the terminal determines, according to the N first image points and the parameters of the camera, N first object points corresponding to the N first image points; the terminal judges whether the N first object points lie on the emitted light of the dot matrix projector; if so, the terminal judges whether the image to be verified is consistent with the pre-stored face image; and if it is consistent, the terminal determines that face recognition is successful. In this way, a hacker is prevented from passing verification by inputting another face image into the mobile phone (an image other than one obtained by collecting the light that the current mobile phone emitted and the object reflected), thereby ensuring high security.

Description

Face recognition method, optical center calibration method and terminal
This application claims priority to Chinese Patent Application No. 201811162463.2, filed with the China National Intellectual Property Administration on September 30, 2018 and entitled "Face recognition method, optical center calibration method and terminal", which is incorporated herein by reference in its entirety.
Technical field
The present application relates to the field of terminal technologies, and in particular, to a face recognition method, an optical center calibration method, and a terminal.
Background
In the prior art, there are multiple ways to unlock a terminal, such as password unlock, fingerprint unlock, and face unlock. Taking a mobile phone and face unlock as an example, the process by which a user unlocks the phone with a face is roughly as follows:
Step 1: The dot matrix projector on the mobile phone projects infrared light onto the human face, and the face reflects the light projected by the dot matrix projector.
Step 2: The infrared camera on the mobile phone collects the light reflected from the face to obtain a face image.
Step 3: The mobile phone compares the face image collected by the infrared camera with a face image stored in advance; if they are consistent, the phone is unlocked.
In the prior art, a hacker can input another face image (a face image not collected by the infrared camera of the phone performing the verification) into the mobile phone. After obtaining the face image, the phone compares it with the pre-stored face image, and there is a possibility that face recognition passes. It can be seen that, in the prior art, the face unlocking method of the terminal has low security.
Summary
Embodiments of the present application provide a face recognition method, an optical center calibration method, and a terminal, so as to improve the security of face unlocking on the terminal.
In a first aspect, an embodiment of the present application provides a face recognition method. The method is applicable to a terminal (such as a mobile phone or an iPad) and includes: a dot matrix projector on the terminal emits light, and a camera on the terminal collects an image to be verified; the terminal determines N first image points on the image to be verified, where N is an integer greater than or equal to 1; the terminal determines, according to the N first image points and the parameters of the camera, N first object points corresponding to the N first image points; the terminal judges whether the N first object points are on the emitted light of the dot matrix projector; if the N first object points are on the emitted light of the dot matrix projector, the terminal judges whether the image to be verified is consistent with a pre-stored face image; and if the image to be verified is consistent with the pre-stored face image, the terminal determines that face recognition is successful.
In the embodiments of the present application, when the terminal performs face recognition, after collecting the image to be verified, it can determine whether the object points corresponding to the image points on the image to be verified are on the emitted light of the dot matrix projector; if so, this indicates that the image to be verified was obtained by collecting the light that the local dot matrix projector emitted and the object reflected. In this way, a hacker can be prevented, as much as possible, from passing verification by inputting another face image (an image not obtained by collecting the light that this device emitted and the object reflected) into the mobile phone. Such a face recognition method offers high security. Moreover, the face recognition method provided in the embodiments of the present application is implemented based on basic optical imaging principles; the scheme is simple and the amount of computation is small, which helps reduce computation and improve efficiency.
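The core check, whether a recovered object point lies on an emitted ray of the dot matrix projector, can be sketched as a point-to-ray distance test. The tolerance `tol` is an assumed threshold for illustration; the embodiment does not fix one:

```python
import math

def on_emitted_ray(light_center, direction, obj_point, tol=1e-3):
    """Does the recovered 3-D object point lie on the ray from the
    (virtual) light centre along `direction`, within tolerance tol?
    Points behind the projector are rejected."""
    n = math.sqrt(sum(c * c for c in direction))
    u = [c / n for c in direction]
    v = [p - o for p, o in zip(obj_point, light_center)]
    t = sum(a * b for a, b in zip(v, u))
    if t < 0.0:
        return False  # behind the light source: not on the emitted ray
    closest = [o + t * c for o, c in zip(light_center, u)]
    err = math.sqrt(sum((p - c) ** 2 for p, c in zip(obj_point, closest)))
    return err <= tol
```

A replayed face image fails this test because its image points back-project to object points that do not fall on any calibrated ray of the local projector.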
In a possible design, before the terminal judges whether the N first object points are on the emitted light of the dot matrix projector: the Jth light emitting point among the K light emitting points of the dot matrix projector emits light, where K is an integer greater than or equal to 1 and J is an integer greater than or equal to 1 and less than or equal to K; the camera on the terminal collects a first reference picture and a second reference picture, where the first reference picture is obtained by the terminal collecting an object at a first distance from the terminal, the second reference picture is obtained by collecting the object at a second distance, and the second distance is greater than the first distance; the terminal selects M second image points on the first reference picture, where M is an integer greater than or equal to 2; the terminal determines, according to the M second image points and a matching algorithm, M third image points on the second reference picture that match the M second image points one to one, where the Pth second image point matches the Pth third image point and P is an integer greater than or equal to 1 and less than or equal to M; the terminal determines the corresponding M second object points according to the M second image points and the parameters of the camera, and determines the corresponding M third object points according to the M third image points and the parameters of the camera, where the Pth second object point matches the Pth third object point; and the terminal calculates the coordinates of the virtual optical center of the Jth light emitting point of the dot matrix projector, where the virtual optical center is obtained by the intersection of M first straight lines, and the Pth second object point is connected to the Pth third object point to obtain a first straight line.
In the embodiments of the present application, the terminal can calibrate the virtual optical center of a light emitting point on the dot matrix projector. In this process, the emitted rays of the virtual optical center (that is, the M first straight lines), which are the emitted rays of the light emitting point, can be determined. In this way, the terminal can determine the emitted rays of the light emitting points on the dot matrix projector, so that after obtaining the image to be verified, it can judge whether the object points corresponding to the image points on the image are on those emitted rays. This prevents, as much as possible, a hacker from passing verification by inputting another face image into the mobile phone; such a face recognition method offers high security.
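Because of measurement noise, the M first straight lines will generally not intersect in exactly one point. One common way to realize the "intersection" (an illustrative choice, not mandated by this design) is the least-squares point minimizing the summed squared distance to all M lines:

```python
import math

def virtual_light_center(points_a, points_b):
    """Least-squares intersection of M 3-D lines, where line P passes
    through points_a[P] (second object point) and points_b[P] (third
    object point). Solves sum(I - d d^T) x = sum((I - d d^T) a)."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for a, q in zip(points_a, points_b):
        d = [qi - ai for qi, ai in zip(q, a)]
        n = math.sqrt(sum(c * c for c in d))
        d = [c / n for c in d]  # unit direction of this line
        for i in range(3):
            for j in range(3):
                m = (1.0 if i == j else 0.0) - d[i] * d[j]
                A[i][j] += m
                b[i] += m * a[j]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    D = det3(A)
    center = []
    for k in range(3):  # Cramer's rule: replace column k with b
        Ak = [row[:] for row in A]
        for i in range(3):
            Ak[i][k] = b[i]
        center.append(det3(Ak) / D)
    return center
```

When the lines truly intersect (noise-free calibration), the least-squares point coincides with the geometric intersection, i.e. the virtual optical center of the light emitting point.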
In a possible design, before the terminal judges whether the N first object points are on the emitted light of the dot matrix projector: the Jth light emitting point among the K light emitting points of the dot matrix projector emits light, where K is an integer greater than or equal to 1 and J is an integer greater than or equal to 1 and less than or equal to K; the camera on the terminal collects a first reference picture and a second reference picture, where the first reference picture is obtained by the terminal collecting an object at a first distance from the terminal, the second reference picture is obtained by collecting the object at a second distance, and the second distance is greater than the first distance; the terminal selects M second image points on the first reference picture, where M is an integer greater than or equal to 2; the terminal determines, according to the M second image points and a matching algorithm, M third image points on the second reference picture that match the M second image points one to one, where the Pth second image point matches the Pth third image point and P is an integer greater than or equal to 1 and less than or equal to M; the terminal determines the corresponding M second object points according to the M second image points and the parameters of the camera, and determines the corresponding M third object points according to the M third image points and the parameters of the camera, where the Pth second object point matches the Pth third object point; and the terminal calculates the coordinates of the virtual optical center of the Jth light emitting point of the dot matrix projector, where the virtual optical center is obtained by the intersection of M first straight lines, and the Pth second object point after temperature drift compensation is connected to the Pth third object point after temperature drift compensation to obtain a first straight line.
In the embodiments of the present application, the terminal can apply temperature drift compensation to the coordinates of the determined object points. After compensation, the object point coordinates are more accurate, yielding more accurate virtual optical center coordinates and improving the accuracy of the optical center calibration. Further, when the optical center and object point coordinates are more accurate, the emitted rays of the light emitting points on the dot matrix projector are more accurate, so judging whether the object point corresponding to an image point on the image to be verified lies on an emitted ray is more accurate; that is, face recognition is more accurate.
In a possible design, the terminal judging whether the N first object points are on the emitted light of the dot matrix projector includes: the terminal judging, according to the coordinates of the K virtual optical centers of the dot matrix projector and the equations of the K*M emitted rays, whether the N first object points are on the emitted rays of the K light emitting points.
In the embodiments of the present application, during calibration the terminal obtains the equations of the M emitted rays corresponding to each of the K light emitting points, K*M ray equations in total. After obtaining the image to be verified, the terminal determines whether an object point corresponding to an image point on the image satisfies any of the K*M ray equations. In this way, a hacker can be prevented, as much as possible, from passing verification by inputting another face image into the mobile phone; such a face recognition method offers high security.
In a possible design, the terminal judging whether the N first object points are on the emitted light of the dot matrix projector includes: the terminal judging, according to the coordinates of the K virtual optical centers of the dot matrix projector and the coordinates of the calibrated object points, whether the N first object points are on at least one of the K lines connecting the virtual optical centers and the calibrated object points; if they are on at least one of the K lines, determining that the N first object points are on the emitted light of the dot matrix projector, and if not, determining that the N first object points are not on the emitted light of the dot matrix projector.
In the embodiments of the present application, during calibration the terminal obtains the optical center of each of the K light emitting points and the calibrated object points. After obtaining the image to be verified, the terminal determines whether an object point corresponding to an image point on the image is on at least one of the lines connecting the optical center of a light emitting point and a calibrated object point. In this way, a hacker can be prevented, as much as possible, from passing verification by inputting another face image into the mobile phone; such a face recognition method offers high security.
In a possible design, before the dot matrix projector on the terminal emits light and the camera on the terminal collects the image to be verified, the terminal is in a lock screen state; after the terminal determines that face recognition is successful, the method further includes: the terminal unlocking.
The face recognition method provided in the embodiments of the present application can unlock the device, which helps improve the security of face unlocking.
In a possible design, before the dot matrix projector on the terminal emits light and the camera on the terminal collects the image to be verified, the terminal displays a payment verification interface; after the terminal determines that face recognition is successful, the method further includes: the terminal executing the payment process.
The face recognition method provided in the embodiments of the present application can also be used for online payment, which helps improve payment security.
在一种可能的设计中,所述终端上的点阵投射器发射光线,所述终端上的摄像头采集到待验证图像之前,所述终端显示权限或密码设置界面;所述终端确定人脸识别成功之后,所述方法还包括:所述终端执行权限或密码设置操作。In a possible design, the dot-matrix projector on the terminal emits light, and before the camera on the terminal collects the image to be verified, the terminal displays a permission or password setting interface; the terminal determines face recognition After success, the method further includes: the terminal performing a permission or password setting operation.
The face recognition method provided in embodiments of this application can also be used to set permissions or passwords, helping to improve the security of those settings.
In one possible design, if the image to be verified is not consistent with the pre-stored face image, the terminal determines that face recognition has failed.
In a second aspect, an embodiment of this application provides an optical center calibration method, applicable to a terminal having a dot-matrix projector or another terminal capable of projecting light. The method includes: the J-th of the K light-emitting points of the dot-matrix projector on the terminal emits light, where K is an integer greater than or equal to 1 and J is an integer greater than or equal to 1 and less than or equal to K;
the camera on the terminal captures a first reference image and a second reference image, where the first reference image is obtained by the terminal capturing an object at a first distance from the terminal, the second reference image is obtained by capturing the same object at a second distance, and the second distance is greater than the first distance;
the terminal selects M second image points on the first reference image, where M is an integer greater than or equal to 2;
based on the M second image points and a matching algorithm, the terminal determines M third image points on the second reference image that are matched one-to-one with the M second image points, where the P-th second image point matches the P-th third image point and P is an integer greater than or equal to 1 and less than or equal to M;
based on the M second image points and the camera parameters, the terminal determines the corresponding M second object points, and based on the M third image points and the camera parameters it determines the corresponding M third object points, where the P-th second object point matches the P-th third object point;
the terminal computes the coordinates of the virtual optical center of the J-th light-emitting point of the dot-matrix projector, where the virtual optical center is obtained as the intersection of M first straight lines, each first straight line being obtained by connecting the P-th second object point with the P-th third object point.
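The final step above intersects M lines in 3-D space. With measurement noise the M lines only approximately intersect, so a natural reading is a least-squares intersection. The sketch below (illustrative only, not the patent's implementation) computes the point minimizing the summed squared distance to all M lines formed by the matched second/third object-point pairs:

```python
import numpy as np

def virtual_optical_center(second_points, third_points):
    """Least-squares intersection of the M lines through matched object-point
    pairs (P-th second object point, P-th third object point).  With noise-free
    input the lines meet exactly at the light-emitting point's virtual optical
    center; with noise this returns the closest point to all M lines."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, q in zip(second_points, third_points):
        p, q = np.asarray(p, float), np.asarray(q, float)
        d = (q - p) / np.linalg.norm(q - p)   # unit direction of one line
        proj = np.eye(3) - np.outer(d, d)     # projector onto the line's normal plane
        A += proj
        b += proj @ p
    # Normal equations of: minimize sum of squared point-to-line distances.
    return np.linalg.solve(A, b)
```

At least two non-parallel lines are needed for the system to be solvable, which matches the requirement M >= 2 in the claim.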
In a third aspect, an embodiment of this application further provides an optical center calibration method, applicable to a terminal having a dot-matrix projector or another terminal capable of projecting light. The method includes: the J-th of the K light-emitting points of the dot-matrix projector on the terminal emits light, where K is an integer greater than or equal to 1 and J is an integer greater than or equal to 1 and less than or equal to K; the camera on the terminal captures a first reference image and a second reference image, where the first reference image is obtained by the terminal capturing an object at a first distance from the terminal, the second reference image is obtained by capturing the same object at a second distance, and the second distance is greater than the first distance; the terminal selects M second image points on the first reference image, where M is an integer greater than or equal to 2; based on the M second image points and a matching algorithm, the terminal determines M third image points on the second reference image that are matched one-to-one with the M second image points, where the P-th second image point matches the P-th third image point and P is an integer greater than or equal to 1 and less than or equal to M; based on the M second image points and the camera parameters, the terminal determines the corresponding M second object points, and based on the M third image points and the camera parameters it determines the corresponding M third object points, where the P-th second object point matches the P-th third object point; the terminal computes the coordinates of the virtual optical center of the J-th light-emitting point of the dot-matrix projector, where the virtual optical center is obtained as the intersection of M first straight lines, each first straight line being obtained by connecting the temperature-drift-compensated P-th second object point with the temperature-drift-compensated P-th third object point.
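The third aspect differs from the second only in that each object point is temperature-drift compensated before the connecting lines are formed. The compensation model is not specified at this point in the text; purely for illustration, a hypothetical linear model (both `alpha` and `ref_temp` are invented values, not taken from the patent) could look like:

```python
def compensate_temp_drift(point, temp, ref_temp=25.0, alpha=1e-5):
    """Hypothetical linear temperature-drift model: scale the object point's
    coordinates by a factor proportional to the temperature offset.
    alpha (per degree Celsius) and ref_temp are illustrative values only."""
    scale = 1.0 + alpha * (temp - ref_temp)
    return tuple(scale * c for c in point)
```

In this reading, both matched object points of each pair would be passed through such a function before the M first straight lines are constructed.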
In a fourth aspect, an embodiment of this application further provides a face recognition method applicable to a terminal (such as a mobile phone or a tablet computer). The method includes: the dot-matrix projector on the terminal emits light, and the camera on the terminal captures an image to be verified; the terminal determines whether the image to be verified is consistent with a pre-stored face image; if it is, the terminal determines N first image points on the image to be verified, where N is an integer greater than or equal to 1; based on the N first image points and the camera parameters, the terminal determines the N first object points corresponding to the N first image points; the terminal determines whether the N first object points lie on the rays emitted by the dot-matrix projector; if the N first object points lie on the emitted rays, face recognition is determined to be successful.
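The step of determining the N first object points "according to the N first image points and the camera parameters" corresponds to back-projection under the pinhole camera model. A simplified sketch follows (an assumption-laden illustration: it presumes known intrinsics `fx, fy, cx, cy` and a per-pixel depth, and ignores lens distortion):

```python
def back_project(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection: map image point (u, v) with known depth z
    into camera coordinates (x, y, z).  fx and fy are focal lengths in
    pixels, and (cx, cy) is the principal point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

Each of the N first object points obtained this way can then be tested against the calibrated projector rays, as described for the first aspect.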
In a fifth aspect, an embodiment of this application provides a terminal including a dot-matrix projector, a camera, a processor, and a memory. The dot-matrix projector is configured to emit light; the camera is configured to capture the image to be verified; the memory is configured to store one or more computer programs; when the one or more computer programs stored in the memory are executed by the processor, the terminal is enabled to implement the method of the first aspect or of any possible design of the first aspect, or the method of the fourth aspect or of any possible design of the fourth aspect.
In a sixth aspect, an embodiment of this application provides a terminal including a dot-matrix projector, a camera, a processor, and a memory. The J-th of the K light-emitting points of the dot-matrix projector emits light; the camera is configured to capture the first reference image and the second reference image; the memory is configured to store one or more computer programs; when the one or more computer programs stored in the memory are executed by the processor, the terminal is enabled to implement the method of the second aspect or of any possible design of the second aspect, or the method of the third aspect or of any possible design of the third aspect.
In a seventh aspect, an embodiment of this application further provides a terminal that includes modules/units for performing the method of the first aspect or of any possible design of the first aspect; or modules/units for performing the method of the second aspect or of any possible design of the second aspect; or modules/units for performing the method of the third aspect or of any possible design of the third aspect; or modules/units for performing the method of the fourth aspect or of any possible design of the fourth aspect.
These modules/units may be implemented in hardware, or in hardware executing corresponding software.
In an eighth aspect, an embodiment of this application further provides a computer-readable storage medium that includes a computer program. When the computer program runs on a terminal, the terminal is caused to perform the method of the first aspect or of any possible design of the first aspect; or the method of the second aspect or of any possible design of the second aspect; or the method of the third aspect or of any possible design of the third aspect; or the method of the fourth aspect or of any possible design of the fourth aspect.
In a ninth aspect, an embodiment of this application further provides a computer program product. When the computer program product runs on a terminal, the terminal is caused to perform the method of the first aspect or of any possible design of the first aspect; or the method of the second aspect or of any possible design of the second aspect; or the method of the third aspect or of any possible design of the third aspect; or the method of the fourth aspect or of any possible design of the fourth aspect.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic diagram of an image plane coordinate system, a camera coordinate system, and a world coordinate system according to an embodiment of this application;
FIG. 2 is a schematic structural diagram of a mobile phone according to an embodiment of this application;
FIG. 3 is a schematic diagram of an application scenario according to an embodiment of this application;
FIG. 4 is a schematic structural diagram of a mobile phone according to an embodiment of this application;
FIG. 5 is a schematic flowchart of calibrating the virtual optical center of a dot-matrix projector according to an embodiment of this application;
FIG. 6 is a schematic diagram of a virtual optical center calibration process of a dot-matrix projector according to an embodiment of this application;
FIG. 7 is a schematic diagram of a first reference image and a second reference image according to an embodiment of this application;
FIG. 8 is a schematic diagram of rays emitted by one light-emitting point of a dot-matrix projector according to an embodiment of this application;
FIG. 9 is a schematic structural diagram of a dot-matrix projector according to an embodiment of this application;
FIG. 10 is a schematic flowchart of a face recognition method according to an embodiment of this application;
FIG. 11 is a schematic flowchart of another face recognition method according to an embodiment of this application;
FIG. 12 is a schematic diagram of a display interface of a mobile phone according to an embodiment of this application;
FIG. 13 is a schematic diagram of a display interface of a mobile phone according to an embodiment of this application;
FIG. 14 is a schematic structural diagram of a terminal according to an embodiment of this application.
DETAILED DESCRIPTION
The technical solutions in the embodiments of this application are described below with reference to the accompanying drawings.
Some terms used in this application are first explained to facilitate understanding by a person skilled in the art.
The image plane coordinate system in embodiments of this application is the coordinate system established on the imaging plane of the camera; usually its origin is the center of the imaging plane. The camera collects the light reflected by an object and projects that light onto the imaging plane to obtain an image of the object. In the following, the image plane coordinate system is denoted o1-u-v. Referring to FIG. 1, o1 is the center of the imaging plane and the origin of the image plane coordinate system, and the u axis and the v axis are its coordinate axes; for example, the u axis is the horizontal axis and the v axis is the vertical axis of the image plane coordinate system.
The camera coordinate system in embodiments of this application is the coordinate system whose origin is the camera center; in the following it is denoted o2-x-y-z. Still referring to FIG. 1, o2 is the camera center and the origin of the camera coordinate system, and the x, y, and z axes are its coordinate axes.
The world coordinate system in embodiments of this application, that is, the absolute coordinate system, can be used to calibrate the position of the camera or of the dot-matrix projector. In the following it is denoted o3-X-Y-Z. Still referring to FIG. 1, o3 is the origin of the world coordinate system, and the X, Y, and Z axes are its coordinate axes.
Usually, a point in the image plane coordinate system can be converted into the camera coordinate system or the world coordinate system through a corresponding conversion formula (described later). Likewise, a point in the camera coordinate system or the world coordinate system can be converted into the image plane coordinate system through a corresponding conversion formula (described later).
An image point in embodiments of this application is a point on the image captured by the camera, that is, a point in the image plane coordinate system.
An object point in embodiments of this application is a point on the object to be photographed. Usually there is a conversion relationship between an object point and its image point; for example, given an image point on a captured image and a conversion formula containing the camera's optical parameters, the camera can determine the object point corresponding to that image point.
Still referring to FIG. 1, the image point of object point W on the imaging plane is W'. The object point W can be expressed in the camera coordinate system, and the image point W' can be expressed in the image plane coordinate system.
The images in embodiments of this application, such as the first reference image, the second reference image, the image to be verified, or the pre-stored face image, may be in the form of pictures, or may be collections of data, for example sets of parameters (such as pixels and color information).
In embodiments of this application, "multiple" means two or more.
In addition, it should be understood that in the description of this application, words such as "first" and "second" are used only to distinguish between descriptions, and cannot be understood as indicating or implying relative importance or order.
The following describes a terminal, a graphical user interface (GUI) for such a terminal, and embodiments of using such a terminal. In some embodiments of this application, the terminal may be a portable terminal containing a dot-matrix projector and an infrared camera, such as a mobile phone, a tablet computer, or a wearable device with a wireless communication function (such as a smart watch). Exemplary embodiments of the portable terminal include, but are not limited to, portable terminals running
Figure PCTCN2019084183-appb-000001
or other operating systems. The portable terminal may also be any other portable terminal, as long as it can project light and capture images (that is, capture the light projected by the device itself to obtain an image). It should also be understood that, in some other embodiments of this application, the terminal may not be a portable terminal but a desktop computer capable of projecting light and capturing images.
Generally, the terminal supports multiple applications, for example one or more of the following: a camera application, instant messaging applications, and so on. There may be many instant messaging applications, such as WeChat, Tencent QQ, WhatsApp Messenger, Line, Kakao Talk, and DingTalk. Through an instant messaging application, the user can send text, voice, pictures, video files, and various other files to other contacts, or make voice and video calls with other contacts.
Taking a mobile phone as an example of the terminal, FIG. 2 shows a schematic structural diagram of the mobile phone 100.
The mobile phone 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 151, a wireless communication module 152, a dot-matrix projector 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and so on. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and so on.
It can be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the mobile phone 100. In other embodiments of this application, the mobile phone 100 may include more or fewer components than shown, or combine some components, or split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). The different processing units may be independent devices or may be integrated in one or more processors.
The controller may be the nerve center and command center of the mobile phone 100. The controller can generate operation control signals according to instruction operation codes and timing signals, completing the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache, which may store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use those instructions or data again, it can call them directly from this memory. Repeated accesses are thereby avoided and the waiting time of the processor 110 is reduced, improving system efficiency.
The mobile phone 100 implements the display function through the GPU, the display screen 194, the application processor, and so on. The GPU is a microprocessor for image processing, connecting the display screen 194 and the application processor, and performs mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like, and includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), and so on. In some embodiments, the mobile phone 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The camera 193 is used to capture still images or videos. Generally, the camera 193 may include photosensitive elements such as a lens group and an image sensor. The lens group includes a plurality of lenses (convex or concave) for collecting the light signals reflected by the object to be photographed and passing the collected light signals to the image sensor, which generates an image of the object from the light signals. If the mobile phone 100 is currently in the locked-screen state, the image sensor sends the generated image to the processor 110, and the processor 110 runs the face recognition algorithm provided in embodiments of this application to recognize the image. If the mobile phone 100 is currently displaying the viewfinder interface of the camera application, the display screen 194 displays the image in that viewfinder interface.
The dot-matrix projector 160 is used to project light. The light projected by the dot-matrix projector 160 may be visible light or infrared light (for example, an infrared laser). Taking infrared light as an example, the camera 193 may be an infrared camera that captures the infrared laser light emitted by the dot-matrix projector. The camera 193 shown in FIG. 2 may include 1 to N cameras. If only one camera is included, the visible-light camera used by the camera application for photographing and video recording and the camera used for face recognition are the same camera. If multiple cameras are included, the camera used by the camera application and the camera used for face recognition may be different cameras; for example, the camera used by the camera application is a visible-light camera, and the camera used for face recognition is an infrared camera.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. By running the instructions stored in the internal memory 121, the processor 110 executes the various function applications and data processing of the mobile phone 100. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store the operating system and the applications required by at least one function (such as a sound playback function or an image playback function); the data storage area may store data created during use of the mobile phone 100 (such as audio data and the phone book). In addition, the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, for example at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
The distance sensor 180F is used to measure distance. The mobile phone 100 can measure distance by infrared or laser. In some embodiments, when shooting a scene, the mobile phone 100 may use the distance sensor 180F to measure distance for fast focusing. In other embodiments, the mobile phone 100 may also use the distance sensor 180F to detect whether a person or an object is approaching.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode. The light-emitting diode may be an infrared LED. The mobile phone 100 emits infrared light outward through the LED and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the mobile phone 100; when insufficient reflected light is detected, the mobile phone 100 can determine that there is no object nearby. The mobile phone 100 can use the proximity light sensor 180G to detect that the user is holding the phone close to the ear during a call, so as to automatically turn off the screen and save power. The proximity light sensor 180G can also be used for automatic unlocking and screen locking in holster mode and pocket mode.
The ambient light sensor 180L is used to sense ambient light brightness. The mobile phone 100 may adaptively adjust the brightness of the display screen 194 according to the sensed ambient light brightness. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking photos, and may cooperate with the proximity light sensor 180G to detect whether the mobile phone 100 is in a pocket, to prevent accidental touches.
The fingerprint sensor 180H is used to collect fingerprints. The mobile phone 100 may use the collected fingerprint characteristics to implement fingerprint unlocking, application-lock access, fingerprint photographing, fingerprint-based call answering, and the like.
The temperature sensor 180J is used to detect temperature. In some embodiments, the mobile phone 100 executes a temperature handling policy based on the temperature detected by the temperature sensor 180J.
The touch sensor 180K is also called a "touch panel". The touch sensor 180K may be disposed on the display screen 194; together, the touch sensor 180K and the display screen 194 form a touchscreen, also called a "touch screen". The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the mobile phone 100 at a position different from that of the display screen 194.
In addition, the mobile phone 100 may implement audio functions (for example, music playback and recording) through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset interface 170D, and the application processor. The mobile phone 100 may receive input from the keys 190 and generate key signal input related to user settings and function control of the mobile phone 100. The mobile phone 100 may use the motor 191 to generate vibration prompts (for example, an incoming-call vibration prompt). The indicator 192 in the mobile phone 100 may be an indicator light, and may be used to indicate the charging status and battery changes, and also to indicate messages, missed calls, notifications, and the like. The SIM card interface 195 in the mobile phone 100 is used to connect a SIM card. The SIM card may be inserted into or removed from the SIM card interface 195 to make contact with or separate from the mobile phone 100.
Refer to FIG. 3, which is a schematic diagram of an application scenario provided by an embodiment of this application. The mobile phone in FIG. 3 uses the mobile phone 100 shown in FIG. 2 as an example. As shown in FIG. 3, the user points the mobile phone 100 toward himself or herself; when the distance sensor 180F (not shown in FIG. 3) on the mobile phone 100 detects that an object (i.e. a person) is approaching, the dot matrix projector 160 emits light. The object reflects the light emitted by the dot matrix projector 160, and the reflected light is collected by the camera 193 to obtain an image to be verified (i.e. a face image).
The processor 110 in the mobile phone 100 runs the code of the face recognition algorithm provided in the embodiments of this application (the code of the face recognition algorithm may be stored in the internal memory 121, or in the storage space of the camera 193 itself), and determines whether the image to be verified was obtained by collecting light that was emitted by the dot matrix projector 160 and reflected by the object. If so, the image to be verified is further matched against a pre-stored face image; if not, prompt information is output to remind the user that the image to be verified was not obtained by collecting light emitted by the local dot matrix projector 160 and reflected by the object, and that a security risk exists.
It can be seen that, compared with the aforementioned prior-art face recognition methods, the face recognition method provided in the embodiments of this application adds one step: determining whether the image to be verified (for example, a face image) was obtained by collecting light emitted by the local dot matrix projector and reflected by the object. In practical applications, this added step may be performed before comparing the image to be verified with the pre-stored face image, or after that comparison. For example, the processor 110 in the mobile phone 100 runs the code of the face recognition algorithm provided in the embodiments of this application and compares the image to be verified (for example, a face image) with the pre-stored face image; if they are consistent, it further determines whether the image to be verified was obtained by collecting light emitted by the dot matrix projector 160 and reflected by the object.
It can be learned from the foregoing description that, when performing face recognition, the mobile phone 100 provided in the embodiments of this application can determine whether the image to be verified (i.e. the face image) was obtained by collecting light emitted by the local dot matrix projector 160 and reflected by the object. In this way, a hacker can be prevented, as far as possible, from passing verification by feeding some other face image (an image not obtained by collecting light emitted by the local device and reflected by an object) into the mobile phone. Such a face recognition method provides higher security.
To facilitate the description of the face recognition algorithm provided in the embodiments of this application, the algorithm is introduced below through the components related to it; see FIG. 4 for details. For the components in FIG. 4, refer to the related description of FIG. 1. It should be noted that FIG. 4 takes the case where the processor 110 integrates the application processor 110-1 as an example.
As shown in FIG. 4, the distance sensor 180F in the mobile phone 100 can detect whether an object approaches. When the distance sensor 180F detects that an object (for example, a face) is approaching, it sends an instruction to the application processor 110-1 indicating that an object is approaching. After receiving the instruction, the application processor 110-1 starts the dot matrix projector 160 to project light, and triggers the camera 193 to start collecting the light reflected by the object, obtaining an image to be verified (for example, a face image).
In practical applications, when the distance sensor 180F detects that an object is approaching, it may instead generate an instruction that triggers the starting of the dot matrix projector 160 and the camera 193, and send that instruction to the application processor 110-1 to notify it to start the dot matrix projector 160 and the camera 193. Of course, the dot matrix projector 160 and the camera 193 in the mobile phone 100 may also be turned on periodically or kept on all the time. For example, the dot matrix projector 160 may project light periodically, and the camera 193 may periodically collect images to be verified.
After the camera 193 captures the image to be verified, it sends the image to the application processor 110-1. The application processor 110-1 runs the code of the face recognition algorithm stored in the internal memory 121 (not shown in FIG. 4) to recognize the image to be verified. If recognition succeeds, the display screen 194 is unlocked; for example, the display may switch from the lock screen to the home screen of the mobile phone 100.
The application processor 110-1 runs the code of the face recognition algorithm provided in the embodiments of this application to recognize the image to be verified, which may include two processes. In the first process, the application processor 110-1 determines whether the image to be verified was obtained by collecting light emitted by the local dot matrix projector 160 and reflected by the object. In the second process, the application processor 110-1 determines whether the image to be verified is consistent with the pre-stored face image.
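The two processes above can be sketched as a short control flow. This is a minimal illustration only, not the patent's implementation: the helper functions and the dictionary-based "image" are hypothetical placeholders standing in for the real projector check and face matching.

```python
def is_from_local_projector(image, rays):
    # Placeholder for the first process: in the real system this checks
    # that reconstructed object points lie on the stored emitted rays.
    return image.get("projector_ok", False)

def matches_enrolled_face(image, template, threshold):
    # Placeholder for the second process: a similarity comparison against
    # the pre-stored face image.
    return image.get("similarity", 0.0) >= threshold

def recognize_face(image, template, rays, threshold=0.9):
    """Pass verification only if both processes succeed."""
    if not is_from_local_projector(image, rays):
        return False  # possible injected image: reject (security risk)
    return matches_enrolled_face(image, template, threshold)
```

The two checks may also run in the opposite order, as the text notes; only the conjunction of both results matters.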
Continuing with FIG. 3, the camera 193 collects reflected light, and the reflected light is the light emitted by the dot matrix projector 160 as reflected by an object (for example, a face). Therefore, in the first process above, the mobile phone 100 only needs to determine whether the object points on the object lie on the rays emitted by the dot matrix projector 160. To do so, the mobile phone 100 may first determine the positions in space of the rays emitted by the dot matrix projector 160.
The following describes the process by which the mobile phone 100 determines the positions in space of the rays emitted by the dot matrix projector 160.
In the embodiments of this application, the process by which the mobile phone 100 determines the positions in space of the rays emitted by the dot matrix projector 160 may be implemented through virtual optical center calibration. Before the mobile phone 100 leaves the factory, the virtual optical center of the dot matrix projector can be calibrated; that is, the mobile phone 100 determines the position coordinates of the virtual optical center of the dot matrix projector. Refer to FIG. 5, a schematic diagram of the virtual optical center calibration process of the dot matrix projector provided by an embodiment of this application. As shown in FIG. 5, the process includes:
Step 1: The mobile phone 100 obtains a first reference image and a second reference image, where the first reference image and the second reference image contain the same photographed object.
Referring to FIG. 6, before the mobile phone 100 leaves the factory, a designer may place a calibration board at Znear, a first distance (for example, 20 cm) from the dot matrix projector 160. The dot matrix projector 160 on the mobile phone 100 projects light onto the calibration board. The light reflected by the calibration board is collected by the camera 193 to obtain the first reference image. The designer then moves the calibration board to Zfar, a second distance (for example, 70 cm) from the dot matrix projector 160. The light reflected by the calibration board at Zfar is collected again by the camera 193 to obtain the second reference image. It can be seen that the first reference image and the second reference image are images of the same photographed object captured at different positions.
It should be noted that, in FIG. 6, the first reference image and the second reference image are both presented on the imaging plane; FIG. 6 does not distinguish them on the imaging plane, but in fact they are two different images formed on the imaging plane.
It should be noted that the calibration board may be a whiteboard, and the light projected by the dot matrix projector 160 may be visible light or infrared light. Taking infrared light as an example, in practice the dot matrix projector 160 may project infrared light in the form of structured light (that is, the infrared light is projected in a specific shape, so the light falling on the object surface has a specific shape, such as straight lines or speckles). Alternatively, the calibration board may be a specially processed board; for example, the calibration board may carry special marks identifying different positions, so that when the mobile phone 100 captures the first and second reference images, it can identify the special marks in the first reference image and in the second reference image.
Refer to FIG. 7, a schematic diagram of the first reference image and the second reference image provided by an embodiment of this application. Because the first and second reference images were captured by the camera 193 with the calibration board at different positions, each object (for example, a special mark) appears at a different position in the two images. As shown in FIG. 7, the dark areas in the first and second reference images are the special marks.
Step 2: The mobile phone 100 determines the position coordinates of two image points, Q1 and P1, in the first reference image.
Continuing with FIG. 7, the first and second reference images are images of the same photographed object at different positions (the calibration board, or, when the dot matrix projector projects speckles, the speckles projected onto the calibration board). The mobile phone 100 may take any two image points in the first reference image as Q1 and P1. Alternatively, if the dot matrix projector projects speckles, the two image points Q1 and P1 may be the respective center points of two speckles in the first reference image.
With reference to FIG. 6, because the first and second reference images are presented on the imaging plane, the image points Q1 and P1 that the mobile phone 100 determines in the first reference image are coordinates in the image plane coordinate system o-u-v.
Step 3: The mobile phone 100 determines Q2 and P2 in the second reference image according to Q1 and P1 in the first reference image, where Q2 corresponds to Q1 and P2 corresponds to P1.
Because the first and second reference images were captured with the calibration board at different positions, the same object on the calibration board (for example, a special mark: a dark area) appears at different positions in the two reference images. From an image point in the first reference image, the mobile phone 100 can determine the point corresponding to that image point in the second reference image. The mobile phone 100 may determine Q2 and P2 in the second reference image using an existing matching algorithm (for example, a similarity matching algorithm); the matching algorithm itself is not described in detail in this application.
Continuing with FIG. 7, after the mobile phone 100 determines Q1 and P1 in the first reference image, it can determine Q2 corresponding to Q1 and P2 corresponding to P1 in the second reference image.
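The text defers to existing similarity matching algorithms for finding the corresponding points. As one possible illustration (an assumption, not the algorithm actually used in the patent), a brute-force normalized cross-correlation search can locate the patch around Q1 inside the second reference image; this sketch assumes grayscale images stored as lists of rows.

```python
import math

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equal-size patches."""
    a = [p for row in patch_a for p in row]
    b = [p for row in patch_b for p in row]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def extract(img, u, v, r):
    """Square patch of radius r centered at image point (u, v)."""
    return [row[u - r:u + r + 1] for row in img[v - r:v + r + 1]]

def find_match(img2, patch, radius=1):
    """Slide `patch` over img2; return the (u, v) with the highest NCC."""
    h, w = len(img2), len(img2[0])
    best, best_uv = -2.0, None
    for v in range(radius, h - radius):
        for u in range(radius, w - radius):
            score = ncc(patch, extract(img2, u, v, radius))
            if score > best:
                best, best_uv = score, (u, v)
    return best_uv
```

Given the patch extracted around Q1 in the first reference image, `find_match` returns the coordinates of the corresponding point Q2 in the second reference image.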
It should be noted, with reference to FIG. 6, that although FIG. 6 does not distinguish the first and second reference images on the imaging plane, the image points of both reference images lie in the image plane coordinate system. Therefore, after the mobile phone 100 determines Q2 and P2 in the second reference image from Q1 and P1 in the first reference image, Q1, P1, Q2 and P2 can all be marked in the image plane coordinate system at the same time.
Step 4: The mobile phone 100 determines object point Q3 from image point Q1 in the first reference image, and object point P3 from image point P1; the mobile phone 100 determines object point Q4 from image point Q2 in the second reference image, and object point P4 from image point P2.
Continuing with FIG. 6, one ray emitted by the dot matrix projector is projected onto object point Q3 on the calibration board at Znear. Object point Q3 reflects the light, and the reflected light is imaged at image point Q1 on the imaging plane. The same ray is projected onto object point Q4 on the calibration board at Zfar; Q4 reflects the light, which is imaged at image point Q2 on the imaging plane. Another ray emitted by the dot matrix projector is projected onto object point P3 on the calibration board at Znear; P3 reflects the light, which is imaged at image point P1 on the imaging plane. That ray is projected onto object point P4 on the calibration board at Zfar; P4 reflects the light, which is imaged at image point P2 on the imaging plane. Therefore, after the mobile phone 100 determines the coordinates of the image points Q1 and P1 in Steps 1-3, it can determine the corresponding object point Q3 from Q1, the corresponding object point P3 from P1, the corresponding object point Q4 from Q2, and the corresponding object point P4 from P2. For example, the mobile phone 100 may calculate the object point coordinates from the image point coordinates using a preset algorithm. There may be multiple preset algorithms, for example an inverse perspective imaging algorithm or other algorithms, which are not described in detail in the embodiments of this application. The inverse perspective imaging algorithm is taken as an example below.
For example, taking Q1 as an example, the mobile phone 100 may determine the object point Q3 corresponding to Q1 according to formula (1):
Z1 * Q1 = P * Q3          Formula (1)

Here, Q1 is an image point in the first reference image, i.e. in the image plane coordinate system, so the coordinates of Q1 are (u1, v1). Z1 is the distance from object point Q3 to the imaging plane, i.e. 20 cm. P is the transformation from the image plane coordinate system to the camera coordinate system; with Q1 written in homogeneous coordinates (u1, v1, 1), P is the pinhole projection matrix

    P = | f  0  0 |
        | 0  f  0 |
        | 0  0  1 |

where f is the focal length of the camera. Substituting the values of the above parameters into formula (1) yields formula (2):

    Z1 * | u1 |   | f  0  0 |   | x1 |
         | v1 | = | 0  f  0 | * | y1 |
         | 1  |   | 0  0  1 |   | z1 |

Because the coordinates (u1, v1) of Q1 and the focal length f are known, the coordinates (x1, y1, z1) of object point Q3 can be determined through formula (2) (in particular z1 = Z1, x1 = u1·Z1/f, y1 = v1·Z1/f), and the coordinates of Q3 are coordinates in the camera coordinate system.
Similarly, the mobile phone 100 can determine, through the above formula, object point P3 corresponding to P1, object point Q4 corresponding to Q2, and object point P4 corresponding to P2; details are not repeated here.
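Under the pinhole model of formula (2) (focal length f, principal point at the origin), recovering an object point from an image point and a known depth is a one-line computation. The function below is an illustrative sketch of that back-projection, not code from the patent.

```python
def back_project(u, v, f, z):
    """Recover the camera-frame object point (x, y, z) from image point
    (u, v), focal length f, and known depth z, per formula (2):
    z * [u, v, 1]^T = P * [x, y, z]^T with P = diag(f, f, 1)."""
    return (u * z / f, v * z / f, z)

# Example: image point Q1 = (u1, v1) seen at Znear (depth 20) gives Q3;
# the matched point Q2 at Zfar (depth 70) gives Q4 the same way.
```

Connecting the point returned for depth Znear with the point returned for depth Zfar yields one calibrated ray, as described in Step 5.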
Step 5: The mobile phone 100 determines one straight line from Q3 and Q4 and another straight line from P3 and P4, and determines the intersection point Q of these two lines, which is the virtual optical center of the dot matrix projector 160.
Because Q3, Q4, P3 and P4 are all points obtained in the camera coordinate system, the virtual optical center of the dot matrix projector 160 calibrated in Step 5 is also a point in the camera coordinate system.
The above five steps take two image points in the first and second reference images as an example. In practice, the mobile phone 100 may take more points in the first reference image.
For example, in Step 2 above, the mobile phone 100 determines 1000 image points in the first reference image. In Step 3, the mobile phone 100 determines the 1000 image points in the second reference image that correspond to those 1000 image points. In Step 4, the mobile phone 100 determines, from the 1000 image points in the first reference image, the 1000 corresponding object points (i.e. 1000 object points on the calibration board at Znear), and, from the 1000 image points in the second reference image, the 1000 corresponding object points (i.e. 1000 object points on the calibration board at Zfar). In Step 5, the mobile phone 100 connects the 1000 object points on the calibration board at Znear obtained in Step 4 with the corresponding 1000 object points on the calibration board at Zfar to obtain 1000 straight lines; the intersection point of these 1000 straight lines is the virtual optical center of the dot matrix projector.
Refer to FIG. 8, a schematic diagram of 1000 rays projected by the dot matrix projector 160; the intersection point of these 1000 rays is the virtual optical center of the dot matrix projector 160.
It should be noted that, if the 1000 rays have a single intersection point, that point is the virtual optical center of the dot matrix projector 160. If the 1000 rays have multiple intersection points (for example, two of the rays intersect at one point while another two rays intersect at another), the mobile phone 100 may determine one of these intersection points (for example, the point at the center of the multiple intersection points) as the virtual optical center of the dot matrix projector 160; alternatively, the mobile phone 100 may determine the point (i.e. the virtual optical center of the dot matrix projector 160) from the multiple intersection points in other ways, which is not limited in the embodiments of this application.
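When the calibrated rays do not meet exactly at one point, one common choice (an assumption consistent with "the point at the center of the multiple intersection points", not necessarily the patent's method) is the least-squares point minimizing the total squared distance to all lines. A self-contained sketch:

```python
def unit(v):
    n = sum(c * c for c in v) ** 0.5
    return tuple(c / n for c in v)

def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                fac = M[r][col] / M[col][col]
                M[r] = [a - fac * c for a, c in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def nearest_point_to_lines(lines):
    """lines: iterable of (point, direction) pairs in camera coordinates.
    Returns the point minimizing the total squared distance to all lines:
    it solves sum(I - d d^T) x = sum(I - d d^T) p over all lines."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for p, d in lines:
        d = unit(d)
        for i in range(3):
            for j in range(3):
                m = (1.0 if i == j else 0.0) - d[i] * d[j]
                A[i][j] += m
                b[i] += m * p[j]
    return solve3(A, b)
```

Each (point, direction) pair would come from one Znear/Zfar object-point pair of Step 5; the returned point serves as the virtual optical center.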
Refer to FIG. 9, a schematic structural diagram of the dot matrix projector 160 provided by an embodiment of this application. As shown in FIG. 9, multiple light-emitting points are provided on the dot matrix projector 160 (FIG. 9 takes 12 as an example). The virtual optical center calibration process shown in FIG. 5 determines the virtual optical center of one of these light-emitting points (for example, light-emitting point 1). For the other light-emitting points, a similar process may be used to calibrate their virtual optical centers. Each time the mobile phone 100 calibrates the virtual optical center of one light-emitting point, it obtains multiple emitted rays of that light-emitting point.
As one example, through the process shown in FIG. 5, the mobile phone 100 completes the virtual optical center calibration of one light-emitting point of the dot matrix projector 160 and obtains the equations of the rays emitted by that light-emitting point (for example, the equation of the line through Q3 and Q4 and the equation of the line through P3 and P4). The mobile phone 100 may store the virtual optical center coordinates of the light-emitting point and the ray equations for later use. It should be noted that the virtual optical center coordinates and ray equations determined through the process shown in FIG. 5 are all in the camera coordinate system.
As another example, instead of storing the equations of the rays emitted by the dot matrix projector 160, the mobile phone 100 may store the virtual optical center coordinates of the light-emitting points together with the coordinates of calibrated object points; these object point coordinates are calibrated before the phone leaves the factory, based on the virtual optical center of the light-emitting point and a set distance. For example, as described in the foregoing embodiment, when the light emitted by the dot matrix projector is received at the set distance (for example, Znear), some light points or light spots can be collected, and the coordinates of these light points or spots can serve as the object point coordinates in Table 1. Taking light-emitting point 1 as an example, the mobile phone 100 may store the virtual optical center coordinates of light-emitting point 1 in correspondence with Q3 and P3 (or Q4 and P4). For an example, see Table 1, which shows the mapping between the virtual optical center coordinates of the light-emitting points and the object point coordinates provided by an embodiment of this application.
Table 1: mapping between the virtual optical center coordinates of light-emitting points and their calibrated object point coordinates (the table content is rendered as an image in the published document).
It should be noted that Table 1 shows only the virtual optical center coordinates of the first four light-emitting points (light-emitting points 1-4) on the dot matrix projector 160 shown in FIG. 9, together with the corresponding object point coordinates. If the mobile phone 100 determines 1000 image points in the first reference image, with 1000 corresponding object points, then Table 1 contains 1000 object points corresponding to light-emitting point 1 (Table 1 takes only the two object points Q3 and P3 as an example).
Continuing with FIG. 9, during use, the mobile phone 100 may light up some or all of the multiple light-emitting points on the dot matrix projector 160. It should be noted that which light-emitting points the mobile phone 100 lights up may be set when the mobile phone 100 leaves the factory; different phones then light up different light-emitting points, which helps improve the security of each phone. Alternatively, at one moment (for example, a first moment) when the distance sensor 180F detects an approaching object, the dot matrix projector 160 lights up the first four light-emitting points; the light emitted by these four points is projected onto the object, and the light reflected by the object is collected by the camera 193 to obtain an image to be verified. At a second moment, when the distance sensor 180F detects an approaching object, the dot matrix projector 160 lights up the last four light-emitting points; the light emitted by these four points is projected onto the object, and the light reflected by the object is collected by the camera 193 to obtain an image to be verified. It can be seen that, during use of the mobile phone 100, the light-emitting points lit each time may differ, which helps improve security.
Assume that the mobile phone 100 lights up the first four light emitting points this time. Because the mobile phone 100 stores the coordinates of the virtual optical center and the object points for each of the first four light emitting points, after obtaining the image to be verified the mobile phone 100 determines whether the image to be verified was obtained by collecting the light that the first four light emitting points of the dot matrix projector 160 emitted and the object reflected.
Taking one light emitting point as an example, the following describes how, after the mobile phone 100 obtains the image to be verified, the application processor 110-1 runs the code of the face recognition algorithm provided in the embodiments of this application to verify the image to be verified. FIG. 10 is a schematic flowchart of a face recognition method according to an embodiment of this application. As shown in FIG. 10, the method includes the following steps:
S1001: The mobile phone 100 is in the lock screen state. When the distance sensor on the mobile phone 100 detects that an object is approaching, the dot matrix projector 160 projects light, and the camera 193 collects the reflected light to obtain an image to be verified.
In S1001, after obtaining the image to be verified, the camera 193 sends the image to be verified to the application processor 110-1 in the mobile phone 100, and the application processor 110-1 runs the code of the face recognition algorithm provided in the embodiments of this application to perform the following S1002-S1005.
S1002: Determine an image point E' on the image to be verified.
The application processor 110-1 may determine any image point E' on the image to be verified. The image point E' is a point in the image plane coordinate system.
S1003: Determine whether the object point E corresponding to the image point E' is on a light ray emitted from the virtual optical center of the dot matrix projector. If yes, execute S1004; if not, execute S1005.
The application processor 110-1 may determine the object point E corresponding to the image point E' according to the foregoing formula (1); the resulting object point E is a point in the camera coordinate system.
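Formula (1) is not reproduced in this passage; as a minimal sketch only, the following assumes the standard pinhole camera model, with illustrative intrinsic parameters (focal lengths fx, fy and principal point cx, cy in pixels) and a known depth z for the image point:

```python
def back_project(u, v, z, fx, fy, cx, cy):
    """Back-project image point (u, v) at depth z into the camera
    coordinate system using the standard pinhole model.
    Illustrative only: formula (1) of the application is not shown
    here, so this is the usual pinhole relation, not necessarily
    the exact formula used by the terminal."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# Toy values: principal point (320, 240), focal length 500 px,
# image point (420, 240) observed at depth 0.5 m
print(back_project(420, 240, 0.5, 500, 500, 320, 240))  # (0.1, 0.0, 0.5)
```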
If the mobile phone 100 stores the virtual optical center of each light emitting point and the equation of the light ray emitted from that light emitting point, the application processor 110-1 simply determines whether the object point E satisfies the equation of the emitted light ray of the light emitting point.
If the mobile phone 100 stores the virtual optical center of each light emitting point and the coordinates of the corresponding object points (for example, Table 1), the application processor 110-1 may determine multiple straight lines according to the virtual optical center and the coordinates of the multiple object points corresponding to that virtual optical center (because the virtual optical center and the object points are all points in the camera coordinate system, the resulting straight lines are also lines in the camera coordinate system). The application processor 110-1 then determines whether the object point E is on one of the multiple straight lines. If it is, the image to be verified was obtained by collecting the light emitted by the dot matrix projector 160 and reflected by the object. If it is not, the image to be verified was not obtained by collecting the light emitted by the dot matrix projector 160 and reflected by the object.
It should be noted that, as described above, all or some of the light emitting points in the dot matrix projector 160 may emit light. Therefore, in practical applications, the mobile phone 100 may set an identifier for each light emitting point. When a light emitting point is lit, the application processor 110-1 determines the identifier of the lit light emitting point (manner 1: which light emitting point is lit is decided by the application processor 110-1, so the application processor 110-1 knows the identifier of the lit light emitting point; manner 2: which light emitting point is lit is decided by the dot matrix projector 160 itself, and whichever light emitting point the dot matrix projector 160 lights, it sends the identifier of that light emitting point to the application processor 110-1). The application processor 110-1 determines the virtual optical center coordinates of the lit light emitting point according to Table 2. If multiple light emitting points are lit, the application processor 110-1 may determine the virtual optical center and the corresponding object points of each lit light emitting point. Taking light emitting point 1 as an example, the application processor 110-1 determines, according to Table 1, that the virtual optical center of light emitting point 1 is A, and determines the multiple object points corresponding to the virtual optical center A. The application processor 110-1 may then determine multiple straight lines according to the coordinates of the virtual optical center A and the multiple object points corresponding to it.
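The line construction and membership test described above can be sketched as follows. The coordinates and tolerance are illustrative placeholders, not values from Table 1; the test checks whether the point-to-line distance is (near) zero via a cross product:

```python
import math

def point_on_ray(center, obj_pt, e, tol=1e-6):
    """Return True if point e lies on the straight line through the
    virtual optical center and one calibrated object point.
    All points are 3-tuples in the camera coordinate system."""
    d = tuple(o - c for o, c in zip(obj_pt, center))   # direction of the line
    w = tuple(p - c for p, c in zip(e, center))
    # cross product w x d; e is on the line iff it is (near) zero
    cross = (w[1] * d[2] - w[2] * d[1],
             w[2] * d[0] - w[0] * d[2],
             w[0] * d[1] - w[1] * d[0])
    dist = math.sqrt(sum(c * c for c in cross)) / math.sqrt(sum(x * x for x in d))
    return dist < tol

# Virtual optical center A and one calibrated object point Q define a line;
# test whether an object point E of the image under verification lies on it.
A = (0.0, 0.0, 0.0)
Q = (0.1, 0.2, 1.0)          # calibrated object point (toy value)
E_on = (0.05, 0.1, 0.5)      # halfway along the same line
E_off = (0.3, 0.1, 0.5)
print(point_on_ray(A, Q, E_on))   # True
print(point_on_ray(A, Q, E_off))  # False
```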
It should be noted that the virtual optical centers of light emitting points 1-4 in Table 1, and the object points corresponding to each virtual optical center, are in the camera coordinate system. In practical applications, the mobile phone 100 may convert the virtual optical centers, and the object points corresponding to each virtual optical center, into the world coordinate system, which is not limited in the embodiments of this application. If the virtual optical centers in Table 1, and the object points corresponding to each virtual optical center, are points in the world coordinate system, then when the mobile phone 100 subsequently captures the image to be verified, it converts the image point E' on the image to be verified into the world coordinate system. In short, it suffices that the application processor 110-1 converts the image point E' on the image to be verified into the same coordinate system as the virtual optical centers and object points in Table 1; or, the application processor 110-1 converts the image point E' on the image to be verified into the same coordinate system as the light rays emitted from the light emitting points.
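Such a conversion between the camera coordinate system and the world coordinate system is a rigid transform. A minimal sketch, assuming an extrinsic rotation R and translation t that are known from calibration (the values below are toy placeholders):

```python
import numpy as np

def camera_to_world(p_cam, R, t):
    """Convert a point from the camera coordinate system to the world
    coordinate system: p_world = R @ p_cam + t.
    R (3x3 rotation) and t (3-vector) are extrinsics assumed known
    from calibration; the values used below are illustrative."""
    return R @ np.asarray(p_cam, dtype=float) + t

# Identity rotation and a pure translation as a toy extrinsic
R = np.eye(3)
t = np.array([0.0, 0.0, 0.1])
print(camera_to_world([0.1, 0.0, 0.5], R, t))  # [0.1 0.  0.6]
```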
Optionally, as described above, the image sensor in the camera 193 collects the first reference picture and the second reference picture. Because the image sensor is affected by temperature changes when collecting images (the temperature drift phenomenon), the virtual optical center coordinates and object point coordinates obtained during the virtual optical center calibration of the mobile phone 100 contain errors. Therefore, before using a virtual optical center in Table 1 and the object points corresponding to that virtual optical center, the mobile phone 100 may first perform temperature drift compensation on them to obtain the temperature-drift-compensated coordinates of the virtual optical center and of its corresponding object points, and then execute the procedure shown in FIG. 10 or FIG. 12. The temperature drift compensation may use various methods, such as thermosensitive-element compensation or non-thermosensitive-element compensation, which is not limited in the embodiments of this application.
S1004: Determine whether the image to be verified is consistent with a pre-stored face image. If they are consistent, unlock the device; if they are not consistent, do not respond to the image to be verified.
Optionally, if the mobile phone 100 is locked with the screen off, unlocking the device may mean that the mobile phone 100 lights up the screen and displays the home screen. If the mobile phone 100 is locked with the screen on, unlocking the device means that the mobile phone 100 switches from the lock screen interface to the home screen.
S1005: Output prompt information, where the prompt information is used to prompt the user that the image to be verified was not obtained by collecting the light emitted by the dot matrix projector 160 and reflected by the object, and that there is a security risk.
For example, the prompt information may be text or icon information displayed on the display screen 194, or voice information played through a speaker. To improve security, the mobile phone 100 may automatically delete the image to be verified.
It should be noted that the procedure shown in FIG. 10 takes a single image point E' on the image to be verified as an example. In practical applications, the application processor 110-1 may select N image points on the image to be verified (for example, determine N image points on the image to be verified in S1002) and perform S1003 once for each of the N image points. Optionally, when the application processor 110-1 determines that, among the N object points corresponding to the N image points, the number of object points lying on the light rays emitted from the light emitting points is greater than a preset number, the application processor 110-1 executes S1004.
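The N-point check can be sketched as follows. The ray representation, per-point test, and preset number are placeholders for illustration, not values or APIs from the application:

```python
def image_from_own_projector(object_points, rays, preset_count, on_ray):
    """Decide whether the image under verification was produced by this
    device's dot matrix projector: count how many of the N object points
    lie on one of the projector's emission rays (S1003 repeated for each
    point) and compare the count with a preset threshold before S1004.
    `on_ray(p, ray)` is the per-point membership test."""
    hits = sum(1 for p in object_points if any(on_ray(p, r) for r in rays))
    return hits > preset_count

# Toy test: a single ray along the z-axis; a point is "on" the ray
# if its x and y coordinates are zero.
rays = [None]  # placeholder ray description
on_z_axis = lambda p, r: p[0] == 0 and p[1] == 0
pts = [(0, 0, 1), (0, 0, 2), (1, 0, 3)]
print(image_from_own_projector(pts, rays, 1, on_z_axis))  # True (2 hits > 1)
```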
It should be noted that, before the light emitted by the dot matrix projector 160 is projected onto the object, it may pass through an optical element, for example a diffractive optical element (DOE). The optical element diffracts the light emitted by the dot matrix projector 160 to form diffracted light spots; FIG. 11 shows diffracted light spots formed after diffraction by an optical element. The diffracted light spots are projected onto the human face, and the camera 193 collects the light that the diffracted light spots reflect from the object to obtain the image to be verified. Generally, among diffracted light spots formed by optical diffraction, order 0 is at the center, followed by order 1, order 2, and so on from the center toward the edge. Accordingly, on the image to be verified captured by the mobile phone 100, the center is order 0, followed by order 1, order 2, and so on from the center toward the edge. Therefore, when selecting image points on the image to be verified, the mobile phone 100 may first take an image point at the center, that is, at order 0, and then execute S1003.
When the mobile phone 100 executes S1003 and determines that the object point corresponding to that order-0 image point is not on a light ray emitted by the dot matrix projector 160, the mobile phone 100 may determine another order-0 image point and execute S1003 again. If, among the multiple object points corresponding to multiple order-0 image points, the number of object points on the light rays emitted by the dot matrix projector 160 is greater than a preset number, the mobile phone 100 determines that the image to be verified was obtained by collecting the light emitted by the dot matrix projector 160 and reflected by the object. Alternatively, the mobile phone 100 may continue to determine image points at order 1 and then execute S1003. When the mobile phone 100 determines that, among the multiple object points corresponding to the multiple image points at orders 0-1, the number of object points on the light rays emitted by the dot matrix projector 160 is greater than a preset number, the mobile phone 100 determines that the image to be verified was obtained by collecting the light emitted by the dot matrix projector 160 and reflected by the object.
It should be noted that, in the embodiment shown in FIG. 10, the application processor 110-1 first determines whether the image to be verified was obtained by collecting the light emitted by the dot matrix projector 160 and reflected by the object, and only then determines whether the image to be verified matches the pre-stored face image. In fact, the execution order of these steps may be adjusted: the application processor 110-1 first determines whether the image to be verified matches the pre-stored face image, and only when they match does it determine whether the image to be verified was obtained by collecting the light emitted by the dot matrix projector 160 and reflected by the object. Referring to FIG. 12, the procedure is as follows:
S1201: The mobile phone 100 is in the lock screen state. When the distance sensor on the mobile phone 100 detects that an object is approaching, the dot matrix projector projects light, and the camera collects the reflected light to obtain an image to be verified.
In S1201, after obtaining the image to be verified, the camera sends the image to be verified to the application processor 110-1 in the mobile phone 100, and the application processor 110-1 runs the code of the face recognition algorithm provided in the embodiments of this application to perform the following S1202-S1205.
S1202: Determine whether the image to be verified is consistent with the pre-stored face image. If they are consistent, execute S1203; if they are not consistent, do not respond to the image to be verified.
S1203: Determine an image point E' on the image to be verified.
S1204: Determine whether the object point E corresponding to the image point E' is on a light ray emitted from the virtual optical center of the dot matrix projector. If yes, unlock the device; if not, execute S1205.
S1205: Output prompt information, where the prompt information is used to prompt the user that the image to be verified was not obtained by collecting the light emitted by the dot matrix projector 160 and reflected by the object, and that there is a security risk.
The foregoing embodiments take the scenario of unlocking a device through face recognition as an example. In fact, the face recognition algorithm provided in the embodiments of this application may also be applied to other scenarios, such as face-scan payment (for example, the mobile phone 100 displays a payment interface, executes the payment process when the image to be verified passes verification, and does not execute the payment process when verification fails), face-scan clock-in, and password or permission setting (for example, the mobile phone 100 displays a permission or password setting interface, performs the permission or password setting operation when the image to be verified passes verification, and does not perform it when verification fails). It should be noted that, in the embodiments of this application, for the trigger condition that starts face recognition (for example, the distance sensor detecting an approaching object, or other conditions), the process of obtaining the image to be verified (that is, the face image), the process of matching the image to be verified with the pre-stored face image, and the processing after the image to be verified passes or fails verification, reference may be made to related implementations in the prior art; the embodiments of the present invention do not limit the specific implementation of these processes.
In summary, the face recognition method provided in the embodiments of this application can determine whether the image to be verified (that is, the face image) was obtained by collecting the light emitted by the local device's dot matrix projector 160 and reflected by the object. In this way, a hacker can be prevented, as far as possible, from passing verification by feeding the mobile phone another face image (one not obtained by collecting the light emitted by the local device and reflected by the object). Such a face recognition method provides high security. Further, the face recognition method provided in the embodiments of this application is implemented based on basic optical imaging principles; the solution is simple and computationally light, which helps reduce the amount of computation and improve efficiency.
It should be noted that the face recognition method provided in the embodiments of this application may be a default function of the mobile phone 100, that is, after the user activates the mobile phone 100, the mobile phone 100 automatically performs face recognition using the face recognition method provided in the embodiments of this application. Alternatively, the face recognition method may be set by the user, that is, while using the mobile phone 100 the user sets the face recognition mode of the mobile phone 100 to the face recognition mode provided in the embodiments of this application. Alternatively, some steps of the face recognition method are set by the user and others are defaults of the mobile phone 100. For example, taking FIG. 10 as an example, by default the mobile phone 100 executes only S1004 and does not execute S1001-S1003; when the user activates the function of detecting whether the face image was produced by light projected by the local device, the mobile phone 100 executes S1001-S1005 when performing face recognition.
For example, FIG. 13 is a schematic diagram of an interface of the settings application of the mobile phone according to an embodiment of this application.
As shown in (a) of FIG. 13, the mobile phone 100 displays a settings interface 1301 of the settings application, and the settings interface 1301 includes a password setting option 1302. After the user activates the password setting option 1302, the display interface of the mobile phone 100 is as shown in (b) of FIG. 13. The mobile phone 100 displays an interface 1303 of the password setting option 1302, and the interface 1303 includes a pattern password, a numeric password, a fingerprint password, and a face password. When the face password option 1304 is activated, the interface of the mobile phone 100 is as shown in (c) of FIG. 13. The mobile phone 100 displays a face password setting interface 1305, and the interface 1305 includes two options: face matching only, and local-device emitted light determination. When the user activates the control 1306 corresponding to face matching only, the mobile phone 100 only compares the face image with the stored face image; if they are consistent, authentication passes, and if they are not, authentication fails. When the user activates the control 1307 corresponding to local-device emitted light determination, the mobile phone 100 executes the procedure shown in FIG. 10 or FIG. 12.
The embodiments of this application may be combined arbitrarily to achieve different technical effects.
In the embodiments provided above, the methods provided in the embodiments of this application are described from the perspective of the terminal (the mobile phone 100) as the execution subject. To implement the functions in the methods provided in the embodiments of this application, the terminal may include a hardware structure and/or a software module, and implement the foregoing functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether one of the foregoing functions is executed by a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraints of the technical solution.
Based on the same concept, FIG. 14 shows a terminal 1400 provided by this application. As shown in FIG. 14, the terminal 1400 may include a dot matrix projector 1401, a camera 1402, one or more processors 1403, a memory 1404, and one or more computer programs 1405, where the foregoing components may be connected through one or more communication buses 1406.
The dot matrix projector 1401 is configured to project light (all or some of the light emitting points on the dot matrix projector 1401 project light), and the camera 1402 is configured to collect images (the image to be verified, the first reference picture, or the second reference picture). The one or more computer programs 1405 are stored in the memory 1404 and are configured to be executed by the one or more processors 1403. The one or more computer programs 1405 include instructions, and the instructions may be used to execute all or some of the steps in FIG. 5 and the steps in the corresponding embodiment; or to execute S1002-S1005 shown in FIG. 10 and the steps in the corresponding embodiment; or to execute S1202-S1205 shown in FIG. 12 and the steps in the corresponding embodiment.
An embodiment of the present invention further provides a computer storage medium. The storage medium may include a memory, and the memory may store a program. When the program is executed, the terminal is caused to execute all or some of the steps performed by the terminal as described in the method embodiments shown in FIG. 5, FIG. 10, and FIG. 12.
An embodiment of the present invention further provides a computer program product. When the computer program product runs on a terminal, the terminal is caused to execute all or some of the steps performed by the terminal as described in the method embodiments shown in FIG. 5, FIG. 10, and FIG. 12.
Through the description of the foregoing implementations, a person skilled in the art can clearly understand that the embodiments of this application may be implemented by hardware, by firmware, or by a combination thereof. When implemented in software, the foregoing functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates the transfer of a computer program from one place to another. A storage medium may be any available medium accessible to a computer. By way of example and not limitation, computer-readable media may include a RAM, a ROM, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. In addition, any connection may properly be termed a computer-readable medium.
For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, an optical fiber cable, a twisted pair, a digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, optical fiber cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of the medium. As used in the embodiments of this application, disks and discs include compact discs (CDs), laser discs, optical discs, digital versatile discs (DVDs), floppy disks, and Blu-ray discs, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of protection of computer-readable media.
In conclusion, the foregoing descriptions are merely embodiments of this application, and are not intended to limit the protection scope of this application. Any modification, equivalent replacement, improvement, or the like made according to the disclosure of this application shall fall within the protection scope of this application.

Claims (22)

  1. A face recognition method, comprising:
    emitting light by a dot matrix projector on the terminal, and collecting, by a camera on the terminal, an image to be verified;
    determining, by the terminal, N first image points on the image to be verified, wherein N is an integer greater than or equal to 1;
    determining, by the terminal, N first object points corresponding to the N first image points according to the N first image points and parameters of the camera;
    determining, by the terminal, whether the N first object points are on light rays emitted by the dot matrix projector;
    if the N first object points are on the light rays emitted by the dot matrix projector, determining, by the terminal, whether the image to be verified is consistent with a pre-stored face image; and
    if the image to be verified is consistent with the pre-stored face image, determining, by the terminal, that the face recognition succeeds.
  2. The method according to claim 1, wherein before the terminal determines whether the N first object points are on the light rays emitted by the dot matrix projector, the method further comprises:
    emitting light by a J-th light emitting point among K light emitting points of the dot matrix projector, wherein K is an integer greater than or equal to 1, and J is an integer greater than or equal to 1 and less than or equal to K;
    collecting, by the camera on the terminal, a first reference picture and a second reference picture, wherein the first reference picture is obtained by the terminal by capturing an object at a first distance from the terminal, the second reference picture is obtained by the terminal by capturing the object at a second distance, and the second distance is greater than the first distance;
    所述终端选定第一参考图上的M个第二像点,M为大于或等于2的整数;The terminal selects M second image points on the first reference picture, where M is an integer greater than or equal to 2;
    所述终端根据所述M个第二像点和匹配算法,确定第二参考图上与分别所述M个第二像点中每个第二像点一一匹配的M个第三像点,其中,第P个第二像点与所述第P个第三像点匹配,P为大于或等于1,且小于或等于M的整数;Determining, by the terminal according to the M second image points and the matching algorithm, M third image points on the second reference picture that are one-to-one matched with each of the M second image points, Wherein, the P-th second image point matches the P-th third image point, and P is an integer greater than or equal to 1 and less than or equal to M;
    所述终端根据所述M个第二像点和所述摄像头的参数确定对应的M个第二物点,根据所述M个第三像点和所述摄像头的参数确定对应的M个第三物点,其中,所述第P个第二物点与所述第P个第三物点匹配;The terminal determines corresponding M second object points according to the M second image points and parameters of the camera, and determines corresponding M third objects according to the M third image points and parameters of the camera. An object point, wherein the P-th second object point matches the P-th third object point;
    所述终端计算所述点阵发射器的第J个发光点的虚拟光心的坐标,所述虚拟光心为M条第一直线相交得到,所述第P个第二物点与所述第P个第三物点相连得到所述第一直线。The terminal calculates coordinates of a virtual light center of the Jth light emitting point of the dot matrix transmitter, the virtual light center is obtained by intersecting M first straight lines, and the Pth second object point and the The P-th third object point is connected to obtain the first straight line.
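The last step of claim 2 intersects M straight lines, each joining a near-distance object point with its matched far-distance object point. With measured points the M lines rarely meet in exactly one point, so a least-squares intersection is a natural reading of "the virtual optical center is the intersection of the M first straight lines"; that relaxation is an assumption of this sketch, not language from the claim:

```python
import numpy as np

def virtual_optical_center(second_pts, third_pts):
    """Least-squares intersection of the lines through matched object-point
    pairs: minimizes the summed squared distance from the unknown center
    to every line, via the normal equations (sum P_i) x = sum P_i p_i."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, q in zip(second_pts, third_pts):
        d = (q - p) / np.linalg.norm(q - p)  # unit direction of one line
        proj = np.eye(3) - np.outer(d, d)    # projector orthogonal to d
        A += proj
        b += proj @ p
    return np.linalg.solve(A, b)

# Synthetic check: three rays leaving a common emitter at (1, 2, 3);
# the "second" and "third" object points sit at different range along each ray.
center = np.array([1.0, 2.0, 3.0])
dirs = [np.array([1.0, 0.0, 0.0]),
        np.array([0.0, 1.0, 0.0]),
        np.array([0.3, 0.3, 0.9])]
near = [center + d for d in dirs]
far = [center + 3.0 * d for d in dirs]
```

Two non-parallel lines already make the 3x3 system solvable; additional matched pairs (larger M) average out matching and back-projection noise.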
  3. The method according to claim 1, wherein before the terminal determines whether the N first object points are on the light emitted by the dot matrix projector, the method further comprises:
    The J-th of the K light emitting points of the dot matrix projector emits light, where K is an integer greater than or equal to 1, and J is an integer greater than or equal to 1 and less than or equal to K;
    The camera on the terminal collects a first reference image and a second reference image, where the first reference image is obtained by the terminal capturing an object at a first distance from the terminal, the second reference image is obtained by capturing the object at a second distance, and the second distance is greater than the first distance;
    The terminal selects M second image points on the first reference image, where M is an integer greater than or equal to 2;
    The terminal determines, according to the M second image points and a matching algorithm, M third image points on the second reference image that match the M second image points one to one, where the P-th second image point matches the P-th third image point, and P is an integer greater than or equal to 1 and less than or equal to M;
    The terminal determines M corresponding second object points according to the M second image points and the parameters of the camera, and determines M corresponding third object points according to the M third image points and the parameters of the camera, where the P-th second object point matches the P-th third object point;
    The terminal calculates coordinates of a virtual optical center of the J-th light emitting point of the dot matrix projector, where the virtual optical center is the intersection of the M first straight lines, and the first straight line is obtained by connecting the temperature-drift-compensated P-th second object point with the temperature-drift-compensated P-th third object point.
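Claim 3 differs from claim 2 only in applying temperature-drift compensation to the object points before the lines are formed. The claims do not disclose the compensation model, so the sketch below is purely hypothetical, assuming a linear rescaling about a reference origin; both `scale` and `origin` are invented placeholders for whatever a real calibration would supply:

```python
import numpy as np

def compensate_drift(points, scale=1.0, origin=np.zeros(3)):
    """Hypothetical temperature-drift compensation: rescale object-point
    coordinates about a reference origin. The model, `scale`, and
    `origin` are illustrative assumptions, not disclosed by the patent."""
    pts = np.asarray(points, dtype=float)
    return origin + scale * (pts - origin)
```

In claim 3's flow, such a function would be applied to both the second and third object points before each pair is connected into a first straight line.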
  4. The method according to claim 2 or 3, wherein the terminal determining whether the N first object points are on the light emitted by the dot matrix projector comprises:
    The terminal determines, according to the coordinates of the K virtual optical centers of the dot matrix projector and the equations of the K*M emitted rays, whether the N first object points are on the rays emitted by the K light emitting points.
  5. The method according to claim 2 or 3, wherein the terminal determining whether the N first object points are on the light emitted by the dot matrix projector comprises:
    The terminal determines, according to the coordinates of the K virtual optical centers and the coordinates of the calibrated object points, whether the N first object points are on at least one of the lines connecting the K virtual optical centers with the calibrated object points; if they are on at least one of the K lines, the terminal determines that the N first object points are on the light emitted by the dot matrix projector; if they are not on at least one of the K lines, the terminal determines that the N first object points are not on the light emitted by the dot matrix projector.
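The checks in claims 4 and 5 both reduce to testing whether an object point lies on a line through a virtual optical center. A minimal sketch of the claim-5 formulation, where the line passes through a calibrated object point; the tolerance is an assumption, since a measured point never sits exactly on the ray:

```python
import numpy as np

def on_emitted_ray(obj_pt, center, calibrated_pt, tol=1e-3):
    """True if obj_pt lies within `tol` of the line through the virtual
    optical center and a calibrated object point (perpendicular distance)."""
    d = calibrated_pt - center
    d = d / np.linalg.norm(d)
    v = obj_pt - center
    perp = v - np.dot(v, d) * d  # component of v orthogonal to the ray
    return bool(np.linalg.norm(perp) <= tol)

# An object point on the ray passes; one displaced sideways fails.
c = np.array([0.0, 0.0, 0.0])
cal = np.array([0.0, 0.0, 2.0])
```

In the full check, a first object point would be tested against every (virtual optical center, calibrated point) line, and accepted if it lies on at least one of them.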
  6. The method according to any one of claims 1 to 5, wherein before the dot matrix projector on the terminal emits light and the camera on the terminal collects the image to be verified, the method further comprises:
    The terminal is in a lock-screen state;
    After the terminal determines that face recognition is successful, the method further comprises: unlocking the terminal.
  7. The method according to any one of claims 1 to 5, wherein before the dot matrix projector on the terminal emits light and the camera on the terminal collects the image to be verified, the method further comprises:
    The terminal displays a payment verification interface;
    After the terminal determines that face recognition is successful, the method further comprises: the terminal executing a payment process.
  8. The method according to any one of claims 1 to 5, wherein before the dot matrix projector on the terminal emits light and the camera on the terminal collects the image to be verified, the method further comprises:
    The terminal displays a permission or password setting interface;
    After the terminal determines that face recognition is successful, the method further comprises: the terminal performing a permission or password setting operation.
  9. The method according to any one of claims 1 to 8, wherein if the image to be verified does not match the pre-stored face image, the terminal determines that face recognition has failed.
  10. An optical center calibration method, wherein the method comprises:
    The J-th of the K light emitting points of a dot matrix projector on a terminal emits light, where K is an integer greater than or equal to 1, and J is an integer greater than or equal to 1 and less than or equal to K;
    The camera on the terminal collects a first reference image and a second reference image, where the first reference image is obtained by the terminal capturing an object at a first distance from the terminal, the second reference image is obtained by capturing the object at a second distance, and the second distance is greater than the first distance;
    The terminal selects M second image points on the first reference image, where M is an integer greater than or equal to 2;
    The terminal determines, according to the M second image points and a matching algorithm, M third image points on the second reference image that match the M second image points one to one, where the P-th second image point matches the P-th third image point, and P is an integer greater than or equal to 1 and less than or equal to M;
    The terminal determines M corresponding second object points according to the M second image points and the parameters of the camera, and determines M corresponding third object points according to the M third image points and the parameters of the camera, where the P-th second object point matches the P-th third object point;
    The terminal calculates coordinates of a virtual optical center of the J-th light emitting point of the dot matrix projector, where the virtual optical center is the intersection of the M first straight lines, and the first straight line is obtained by connecting the P-th second object point with the P-th third object point.
  11. A terminal, comprising a dot matrix projector, a camera, a processor, and a memory;
    the dot matrix projector is configured to emit light;
    the camera is configured to collect an image to be verified;
    the memory is configured to store one or more computer programs, and when the one or more computer programs stored in the memory are executed by the processor, the terminal is caused to:
    determine N first image points on the image to be verified, where N is an integer greater than or equal to 1;
    determine, according to the N first image points and parameters of the camera, N first object points corresponding to the N first image points;
    determine whether the N first object points are on the light emitted by the dot matrix projector;
    if the N first object points are on the light emitted by the dot matrix projector, determine whether the image to be verified matches a pre-stored face image;
    if the image to be verified matches the pre-stored face image, determine that face recognition is successful.
  12. The terminal according to claim 11, wherein the J-th of the K light emitting points of the dot matrix projector emits light, where K is an integer greater than or equal to 1, and J is an integer greater than or equal to 1 and less than or equal to K;
    the camera is further configured to collect a first reference image and a second reference image, where the first reference image is obtained by capturing an object at a first distance from the terminal, the second reference image is obtained by capturing the object at a second distance, and the second distance is greater than the first distance;
    when the one or more computer programs stored in the memory are executed by the processor, the terminal is further caused to:
    select M second image points on the first reference image, where M is an integer greater than or equal to 2;
    determine, according to the M second image points and a matching algorithm, M third image points on the second reference image that match the M second image points one to one, where the P-th second image point matches the P-th third image point, and P is an integer greater than or equal to 1 and less than or equal to M;
    determine M corresponding second object points according to the M second image points and the parameters of the camera, and determine M corresponding third object points according to the M third image points and the parameters of the camera, where the P-th second object point matches the P-th third object point;
    calculate coordinates of a virtual optical center of the J-th light emitting point of the dot matrix projector, where the virtual optical center is the intersection of the M first straight lines, and the first straight line is obtained by connecting the P-th second object point with the P-th third object point.
  13. The terminal according to claim 11, wherein the J-th of the K light emitting points of the dot matrix projector emits light, where K is an integer greater than or equal to 1, and J is an integer greater than or equal to 1 and less than or equal to K;
    the camera is further configured to collect a first reference image and a second reference image, where the first reference image is obtained by capturing an object at a first distance from the terminal, the second reference image is obtained by capturing the object at a second distance, and the second distance is greater than the first distance;
    when the one or more computer programs stored in the memory are executed by the processor, the terminal is further caused to:
    select M second image points on the first reference image, where M is an integer greater than or equal to 2;
    determine, according to the M second image points and a matching algorithm, M third image points on the second reference image that match the M second image points one to one, where the P-th second image point matches the P-th third image point, and P is an integer greater than or equal to 1 and less than or equal to M;
    determine M corresponding second object points according to the M second image points and the parameters of the camera, and determine M corresponding third object points according to the M third image points and the parameters of the camera, where the P-th second object point matches the P-th third object point;
    calculate coordinates of a virtual optical center of the J-th light emitting point of the dot matrix projector, where the virtual optical center is the intersection of the M first straight lines, and the first straight line is obtained by connecting the temperature-drift-compensated P-th second object point with the temperature-drift-compensated P-th third object point.
  14. The terminal according to claim 12 or 13, wherein when the one or more computer programs stored in the memory are executed by the processor, the terminal is further caused to:
    determine, according to the coordinates of the K virtual optical centers of the dot matrix projector and the equations of the K*M emitted rays, whether the N first object points are on the rays emitted by the K light emitting points.
  15. The terminal according to claim 12 or 13, wherein when the one or more computer programs stored in the memory are executed by the processor, the terminal is further caused to:
    determine, according to the coordinates of the K virtual optical centers and the coordinates of the calibrated object points, whether the N first object points are on at least one of the lines connecting the K virtual optical centers with the calibrated object points; if they are on at least one of the K lines, determine that the N first object points are on the light emitted by the dot matrix projector; if they are not on at least one of the K lines, determine that the N first object points are not on the light emitted by the dot matrix projector.
  16. The terminal according to any one of claims 11 to 15, wherein before the dot matrix projector emits light, the terminal is in a lock-screen state; and when the one or more computer programs stored in the memory are executed by the processor, the terminal is further caused to: unlock the terminal after determining that face recognition is successful.
  17. The terminal according to any one of claims 11 to 15, wherein the terminal further comprises a display screen, and before the dot matrix projector emits light, the display screen displays a payment verification interface; when the one or more computer programs stored in the memory are executed by the processor, the terminal is further caused to: execute a payment process after determining that face recognition is successful.
  18. The terminal according to any one of claims 11 to 15, wherein the terminal further comprises a display screen, and before the dot matrix projector emits light, the display screen displays a permission or password setting interface; when the one or more computer programs stored in the memory are executed by the processor, the terminal is further caused to: perform a permission or password setting operation after determining that face recognition is successful.
  19. The terminal according to any one of claims 11 to 18, wherein when the one or more computer programs stored in the memory are executed by the processor, the terminal is further caused to:
    determine that face recognition has failed if the image to be verified does not match the pre-stored face image.
  20. A terminal, comprising a dot matrix projector, a camera, a processor, and a memory;
    the J-th of the K light emitting points of the dot matrix projector emits light, where K is an integer greater than or equal to 1, and J is an integer greater than or equal to 1 and less than or equal to K;
    the camera is configured to collect a first reference image and a second reference image, where the first reference image is obtained by the terminal capturing an object at a first distance from the terminal, the second reference image is obtained by capturing the object at a second distance, and the second distance is greater than the first distance;
    the memory is configured to store one or more computer programs, and when the one or more computer programs stored in the memory are executed by the processor, the terminal is caused to:
    select M second image points on the first reference image, where M is an integer greater than or equal to 2;
    determine, according to the M second image points and a matching algorithm, M third image points on the second reference image that match the M second image points one to one, where the P-th second image point matches the P-th third image point, and P is an integer greater than or equal to 1 and less than or equal to M;
    determine M corresponding second object points according to the M second image points and the parameters of the camera, and determine M corresponding third object points according to the M third image points and the parameters of the camera, where the P-th second object point matches the P-th third object point;
    calculate coordinates of a virtual optical center of the J-th light emitting point of the dot matrix projector, where the virtual optical center is the intersection of the M first straight lines, and the first straight line is obtained by connecting the P-th second object point with the P-th third object point.
  21. A computer storage medium, wherein the computer-readable storage medium comprises a computer program, and when the computer program runs on a terminal, the terminal is caused to execute the method according to any one of claims 1 to 10.
  22. A computer program product containing instructions, wherein when the instructions run on a computer, the computer is caused to execute the method according to any one of claims 1 to 10.
PCT/CN2019/084183 2018-09-30 2019-04-25 Human face identification method, photocenter calibration method and terminal WO2020062848A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811162463.2 2018-09-30
CN201811162463.2A CN109325460B (en) 2018-09-30 2018-09-30 A kind of face identification method, optical center scaling method and terminal

Publications (1)

Publication Number Publication Date
WO2020062848A1 true WO2020062848A1 (en) 2020-04-02

Family

ID=65265491

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/084183 WO2020062848A1 (en) 2018-09-30 2019-04-25 Human face identification method, photocenter calibration method and terminal

Country Status (2)

Country Link
CN (1) CN109325460B (en)
WO (1) WO2020062848A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115200475B (en) * 2022-07-14 2024-06-07 河南埃尔森智能科技有限公司 Rapid correction method for arm-mounted multi-vision sensor

Families Citing this family (4)

Publication number Priority date Publication date Assignee Title
CN109325460B (en) * 2018-09-30 2019-10-22 华为技术有限公司 A kind of face identification method, optical center scaling method and terminal
CN111742542B (en) * 2019-08-30 2022-04-22 深圳市汇顶科技股份有限公司 Imaging device and non-mobile terminal
CN112861764B (en) * 2021-02-25 2023-12-08 广州图语信息科技有限公司 Face recognition living body judging method
CN113111762B (en) * 2021-04-07 2024-04-05 瑞芯微电子股份有限公司 Face recognition method, detection method, medium and electronic equipment

Citations (5)

Publication number Priority date Publication date Assignee Title
CN102831400A (en) * 2012-07-31 2012-12-19 西北工业大学 Multispectral face identification method, and system thereof
WO2015137645A1 (en) * 2014-03-13 2015-09-17 엘지전자 주식회사 Mobile terminal and method for controlling same
CN107657222A (en) * 2017-09-12 2018-02-02 广东欧珀移动通信有限公司 Face identification method and Related product
CN108594533A (en) * 2018-05-31 2018-09-28 南京禾蕴信息科技有限公司 A kind of liquid-crystal apparatus and method that infrared divergence light is become to homogenous diffusion structure light
CN109325460A (en) * 2018-09-30 2019-02-12 华为技术有限公司 A kind of face identification method, optical center scaling method and terminal

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN102592115B (en) * 2011-12-26 2014-04-30 Tcl集团股份有限公司 Hand positioning method and system
CN204791066U (en) * 2015-05-21 2015-11-18 北京中科虹霸科技有限公司 A mobile terminal that is used for mobile terminal's iris recognition device and contains it
CN107403146A (en) * 2017-07-14 2017-11-28 广东欧珀移动通信有限公司 detection method and related product
CN107590461B (en) * 2017-09-12 2021-04-02 Oppo广东移动通信有限公司 Face recognition method and related product
CN108090340B (en) * 2018-02-09 2020-01-10 Oppo广东移动通信有限公司 Face recognition processing method, face recognition processing device and intelligent terminal

Also Published As

Publication number Publication date
CN109325460A (en) 2019-02-12
CN109325460B (en) 2019-10-22


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19866105; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19866105; Country of ref document: EP; Kind code of ref document: A1)