WO2022262408A1 - Face image display method, readable storage medium, program product, and electronic device - Google Patents

Face image display method, readable storage medium, program product, and electronic device

Info

Publication number
WO2022262408A1
WO2022262408A1 PCT/CN2022/087992 CN2022087992W WO2022262408A1 WO 2022262408 A1 WO2022262408 A1 WO 2022262408A1 CN 2022087992 W CN2022087992 W CN 2022087992W WO 2022262408 A1 WO2022262408 A1 WO 2022262408A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
face
coordinate system
image
electronic device
Prior art date
Application number
PCT/CN2022/087992
Other languages
English (en)
French (fr)
Inventor
赵磊
郑佳龙
张强
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP22823895.2A (published as EP4336404A1)
Publication of WO2022262408A1

Classifications

    • G: PHYSICS
    • G07: CHECKING-DEVICES
    • G07C: TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 9/00: Individual registration on entry or exit
    • G07C 9/20: Individual registration on entry or exit involving the use of a pass
    • G07C 9/22: Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder
    • G07C 9/25: Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder using biometric data, e.g. fingerprints, iris scans or voice recognition
    • G07C 9/257: Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder using biometric data, e.g. fingerprints, iris scans or voice recognition, electronically
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G: PHYSICS
    • G07: CHECKING-DEVICES
    • G07C: TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 9/00: Individual registration on entry or exit
    • G07C 9/30: Individual registration on entry or exit not involving the use of a pass
    • G07C 9/32: Individual registration on entry or exit not involving the use of a pass in combination with an identity check
    • G07C 9/37: Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition

Definitions

  • the present application relates to the technical field of terminals, and in particular to a face image display method, a readable storage medium, a program product and an electronic device.
  • With the rapid development of artificial intelligence (AI) technology, more and more electronic devices support the face recognition function. Usually, to realize face recognition, the electronic device first needs to enroll the user's face. In some scenarios, for an electronic device without a display screen, such as a typical smart door lock, the user cannot intuitively perceive whether the face is currently within the viewing range of the device's camera, or where it is located within that range. In some solutions, the electronic device sends the image collected by its camera to another electronic device with a display screen for display. However, the face shown in the image displayed by that other device differs from the face in the image actually used to enroll the face information.
  • AI: artificial intelligence.
  • Embodiments of the present application provide a face image display method, a readable storage medium, a program product, and an electronic device.
  • The face image captured by the first camera as the preview image is transformed, so that the position of the face in the transformed preview image is the same as the position of the face in the face image captured for face recognition by the second camera at the same moment the first camera captured the preview image. In this way, the preview image displayed by the second electronic device externally connected to the first electronic device can accurately reflect the position, orientation, etc. of the face within the field of view of the second camera. The user can then adjust the face position and orientation in time based on the preview image displayed by the second electronic device, enabling the second camera of the first electronic device to quickly capture effective face images for face enrollment, which improves the efficiency and accuracy of face enrollment and improves the user experience.
  • An embodiment of the present application provides a face image display method, applied to a system including a first electronic device and a second electronic device. The first electronic device includes a first camera and a second camera, and there is an imaging mapping relationship between the first camera and the second camera. The method includes:
  • the first electronic device acquires the first face image collected by the first camera at the first moment, wherein the face in the first face image is located at the first position of the first face image;
  • the second electronic device displays the second face image, wherein the second face image is an image transformed from the first face image according to the imaging mapping relationship, and the position of the face in the second face image is the same as the position of the face in the image collected by the second camera at the first moment.
  • The first camera can be a wide-angle camera, such as a cat-eye (peephole) camera or a fish-eye camera.
  • The second camera can be any one of a TOF camera, a structured-light camera, a binocular stereo imaging camera, a depth camera, or an infrared camera; the second camera is used to collect the depth information of the image.
  • The second face image displayed by the second electronic device may be the image obtained after the first electronic device converts the first face image according to the imaging mapping relationship, or the image obtained after the second electronic device converts the first face image according to the imaging mapping relationship.
  • The face image captured by the first camera as the preview image is transformed, so that the position of the face in the transformed preview image is the same as the position of the face in the face image captured for face recognition by the second camera at the same moment. The preview image displayed by the second electronic device connected to the first electronic device can therefore accurately reflect the position, orientation, etc. of the face in the field of view of the second camera, so the user can adjust the face position and orientation in time through the preview image displayed by the second electronic device, enabling the second camera of the first electronic device to quickly capture an effective face image for face registration, which improves the efficiency and accuracy of face registration and improves the user experience.
  • the imaging mapping relationship is a mapping relationship between the first camera image coordinate system and the second camera image coordinate system
  • the first camera image coordinate system is a two-dimensional coordinate system associated with the first camera
  • the second camera image coordinate system is a two-dimensional coordinate system associated with the second camera
  • the above-mentioned imaging mapping relationship is a preset parameter, or,
  • the imaging mapping relationship is determined by the first electronic device or the second electronic device according to the first mapping relationship, the second mapping relationship and the third mapping relationship,
  • the first mapping relationship is the mapping relationship between the first camera space coordinate system and the second camera space coordinate system
  • the first camera space coordinate system is a three-dimensional coordinate system associated with the first camera
  • the second camera space coordinate system is the 3D coordinate system associated with the second camera
  • the second mapping relationship is the mapping relationship between the first camera space coordinate system and the first camera image coordinate system
  • the third mapping relationship is a mapping relationship between the second camera space coordinate system and the second camera image coordinate system.
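Once the imaging mapping relationship between the two image coordinate systems is fixed (whether preset or derived from the first, second, and third mappings), one plausible way to apply it is to precompute it as a per-pixel lookup table and use that table to warp the first camera's image into the second camera's image plane. The sketch below illustrates this idea; the `inverse_map` callable (which stands in for the composed mappings) and all sizes are hypothetical, as the patent does not specify an implementation.

```python
import numpy as np

def build_remap_table(h2, w2, inverse_map):
    """Precompute the imaging mapping as a lookup table: for each pixel of the
    target (second-camera) image plane, store the source pixel coordinates in
    the first-camera image.  `inverse_map(u2, v2) -> (u1, v1)` stands in for
    the composed first/second/third mapping relationships."""
    table = np.empty((h2, w2, 2), dtype=np.int64)
    for v2 in range(h2):
        for u2 in range(w2):
            table[v2, u2] = inverse_map(u2, v2)
    return table

def apply_remap(src, table):
    """Warp a first-camera image into the second camera's image plane by
    nearest-neighbor lookup through the precomputed table."""
    u = np.clip(table[..., 0], 0, src.shape[1] - 1)
    v = np.clip(table[..., 1], 0, src.shape[0] - 1)
    return src[v, u]
```

With an identity `inverse_map` the warp returns the source image unchanged; a real deployment would fill the table from the calibrated mappings once, then apply it to every preview frame.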
  • the first camera image coordinate system is a two-dimensional planar Cartesian coordinate system associated with the first camera.
  • the first camera is a cat-eye camera
  • the image coordinate system of the first camera is the cat-eye image coordinate system
  • the cat-eye image coordinate system is a plane Cartesian coordinate system with the center position of the photosensitive chip of the cat-eye camera as the coordinate origin.
  • the second camera image coordinate system is a two-dimensional planar Cartesian coordinate system associated with the second camera.
  • the second camera is a TOF camera
  • the image coordinate system of the second camera is the TOF image coordinate system
  • the TOF image coordinate system is a plane Cartesian coordinate system with the center position of the photosensitive chip of the TOF camera as the coordinate origin.
  • the first camera space coordinate system is a three-dimensional coordinate system associated with the first camera.
  • the first camera is a cat-eye camera
  • the space coordinate system of the first camera is the cat-eye space coordinate system
  • the cat-eye space coordinate system is a three-dimensional space coordinate system with the center position of the lens of the cat-eye camera as the coordinate origin.
  • the second camera space coordinate system is a three-dimensional coordinate system associated with the second camera.
  • the second camera is a TOF camera
  • the space coordinate system of the second camera is the TOF space coordinate system
  • the TOF space coordinate system is a three-dimensional space coordinate system with the center position of the lens of the TOF camera as the coordinate origin.
  • For example, suppose the three-dimensional coordinates of a point A in the cat-eye space coordinate system are (x1, y1, z1);
  • the two-dimensional coordinates of the pixel corresponding to point A in the image captured by the cat-eye camera are (x2, y2);
  • the three-dimensional coordinates of point A in the TOF space coordinate system are (x1', y1', z1');
  • the two-dimensional coordinates of the pixel corresponding to point A in the image captured by the TOF camera are (x2', y2').
  • The first mapping relationship described above represents the mapping between the three-dimensional coordinates (x1, y1, z1) of point A in the cat-eye space coordinate system and the three-dimensional coordinates (x1', y1', z1') of point A in the TOF space coordinate system.
  • The second mapping relationship represents the mapping between the three-dimensional coordinates (x1, y1, z1) of point A in the cat-eye space coordinate system and the two-dimensional coordinates (x2, y2) of the pixel corresponding to point A in the image captured by the cat-eye camera.
  • The third mapping relationship represents the mapping between the three-dimensional coordinates (x1', y1', z1') of point A in the TOF space coordinate system and the two-dimensional coordinates (x2', y2') of the pixel corresponding to point A in the image captured by the TOF camera.
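To make the three mapping relationships concrete, the sketch below runs point A through them using simple pinhole-style matrices. All numeric values (the intrinsic matrices, rotation, and translation) are illustrative assumptions; a real cat-eye lens is strongly distorted, so its calibrated mappings would be more complex than a single matrix.

```python
import numpy as np

# Illustrative calibration values -- assumptions, not taken from the patent.
K_cat = np.array([[400., 0., 320.], [0., 400., 240.], [0., 0., 1.]])  # cat-eye
K_tof = np.array([[600., 0., 160.], [0., 600., 120.], [0., 0., 1.]])  # TOF
R = np.eye(3)                    # first mapping: rotation between camera spaces
t = np.array([0.03, 0., 0.])     # ...and translation, e.g. a 3 cm baseline

A_cat = np.array([0.1, 0.05, 0.5])        # point A = (x1, y1, z1), in metres
xy_cat = (K_cat @ A_cat)[:2] / A_cat[2]   # second mapping -> (x2, y2)
A_tof = R @ A_cat + t                     # first mapping -> (x1', y1', z1')
xy_tof = (K_tof @ A_tof)[:2] / A_tof[2]   # third mapping -> (x2', y2')
```

Composing the second mapping's inverse with the first and third mappings in this way yields the pixel-to-pixel imaging mapping relationship used for the preview transformation.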
  • the above method further includes:
  • the second electronic device receives the first face image sent by the first electronic device, processes the first face image according to the imaging mapping relationship, and obtains the second face image,
  • or, the second electronic device receives the second face image sent by the first electronic device.
  • the foregoing first electronic device or the second electronic device obtains the imaging mapping relationship through an application program associated with the first electronic device.
  • When the first electronic device establishes a connection with the second electronic device and uses the second electronic device as an external device to display a preview image of the face, the first electronic device may send the stored imaging mapping relationship to the second electronic device, or the first electronic device may calculate the imaging mapping relationship based on the stored first, second, and third mapping relationships and then send the calculated imaging mapping relationship to the second electronic device; the second electronic device then processes the first face image based on the imaging mapping relationship to obtain the second face image.
  • Alternatively, the first electronic device may process the first face image according to the imaging mapping relationship to obtain the second face image, and then send it to the second electronic device.
  • Or, the first electronic device calculates the imaging mapping relationship based on the stored first, second, and third mapping relationships, processes the first face image according to the calculated imaging mapping relationship to obtain the second face image, and then sends it to the second electronic device.
  • the first electronic device can also obtain the above-mentioned imaging mapping relationship by downloading and installing an application program associated with the first electronic device, and then send the obtained mapping relationship to the second electronic device, and the second electronic device can use the above-mentioned imaging mapping The relationship processes the first face image to obtain the second face image.
  • the first electronic device directly processes the first face image according to the acquired imaging mapping relationship to obtain the second face image, and then sends the second face image to the second electronic device.
  • the second electronic device can also download and install the application program associated with the first electronic device to obtain the above imaging mapping relationship, and then process the first human face image according to the above imaging mapping relationship to obtain the second human face image.
  • the above method further includes:
  • the second electronic device displays first prompt information for prompting the user to adjust the position of the face.
  • For example, the second electronic device displays text prompts such as "please move the face to the left" or "please move the face to the right" to prompt the user to adjust the position of the face.
  • the above method further includes:
  • When the position of the face is close to the center of the second face image, the second electronic device no longer displays the first prompt information.
  • For example, when the face is at the center of the second face image, or within a preset distance of the center, the second electronic device no longer displays the first prompt information.
  • the above method further includes:
  • the second electronic device displays second prompt information for prompting the user to adjust the face orientation.
  • For example, when the second electronic device determines that the face is at or near the center of the preview image, it displays text prompts such as "Please look up", "Please lower your head", "Please turn your face to the right", or "Please turn your face to the left".
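The position and orientation prompts described above could be driven by simple geometric checks on the transformed preview image. The sketch below is one possible realization; the tolerance thresholds, the face-detector inputs, and the left/right and up/down direction conventions are all illustrative assumptions, not details fixed by the patent.

```python
def face_prompt(face_cx, face_cy, yaw, pitch, img_w, img_h,
                pos_tol=0.1, ang_tol=15.0):
    """Return the next text prompt, or None when no adjustment is needed.
    Inputs would come from a face detector run on the second face image;
    image coordinates assume y grows downward, angles are in degrees."""
    cx, cy = img_w / 2.0, img_h / 2.0
    # First prompt information: adjust position until the face is near center.
    if face_cx < cx - pos_tol * img_w:
        return "please move the face to the right"
    if face_cx > cx + pos_tol * img_w:
        return "please move the face to the left"
    if face_cy < cy - pos_tol * img_h:
        return "please move the face down"
    if face_cy > cy + pos_tol * img_h:
        return "please move the face up"
    # Near the center: the first prompt is no longer shown;
    # second prompt information checks the face orientation instead.
    if pitch < -ang_tol:
        return "Please look up"
    if pitch > ang_tol:
        return "Please lower your head"
    if yaw > ang_tol:
        return "Please turn your face to the left"
    if yaw < -ang_tol:
        return "Please turn your face to the right"
    return None
```

Because the prompts are computed from the second face image, which matches the second camera's view, following them moves the face toward the center of the camera actually used for face recognition.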
  • the foregoing first prompt information and/or the second prompt information are generated according to the second human face image.
  • the angle of view of the first camera is larger than the angle of view of the second camera.
  • The above-mentioned first camera is either a cat-eye camera or a fish-eye camera, and the second camera is any one of a TOF camera, a structured-light camera, a binocular stereo imaging camera, a depth camera, or an infrared camera; the second camera is used to collect the depth information of the image.
  • the foregoing first electronic device is a smart door lock
  • the second electronic device is a mobile phone.
  • the first electronic device may be any electronic device without a display screen that supports the face recognition function, including but not limited to smart door locks, robots, security devices, and the like.
  • the second electronic device may be various portable terminal devices with display and image processing functions.
  • The second electronic device may also be any of various portable terminal devices such as smart bands, watches, and tablet computers.
  • An embodiment of the present application provides a method for displaying a face image, applied to a second electronic device, and the method includes:
  • displaying a second face image, wherein the second face image is an image transformed from a first face image according to an imaging mapping relationship
  • the imaging mapping relationship is a mapping relationship between images collected by the first camera and the second camera
  • the first camera and the second camera are included in the first electronic device different from the second electronic device
  • the first face image is an image collected by the first camera at the first moment
  • the face in the first face image is located at the first position of the first face image
  • the position of the human face in the second human face image is the same as the position of the human face in the image collected by the second camera at the first moment.
  • the imaging mapping relationship is a mapping relationship between the first camera image coordinate system and the second camera image coordinate system
  • the first camera image coordinate system is a two-dimensional coordinate system associated with the first camera
  • the second camera image coordinate system is a two-dimensional coordinate system associated with the second camera
  • the above-mentioned imaging mapping relationship is a preset parameter, or,
  • the imaging mapping relationship is determined by the second electronic device according to the first mapping relationship, the second mapping relationship and the third mapping relationship,
  • the first mapping relationship is the mapping relationship between the first camera space coordinate system and the second camera space coordinate system
  • the first camera space coordinate system is a three-dimensional coordinate system associated with the first camera
  • the second camera space coordinate system is the 3D coordinate system associated with the second camera
  • the second mapping relationship is the mapping relationship between the first camera space coordinate system and the first camera image coordinate system
  • the third mapping relationship is a mapping relationship between the second camera space coordinate system and the second camera image coordinate system.
  • the above method further includes: receiving the first face image sent by the first electronic device, processing the first face image according to the imaging mapping relationship to obtain the second face image,
  • or, the second face image sent by the first electronic device is received.
  • the foregoing second electronic device obtains the imaging mapping relationship through an application program associated with the first electronic device.
  • the above method further includes: displaying first prompt information for prompting the user to adjust the position of the face.
  • the above method further includes:
  • when the position of the face is close to the center of the second face image, the second electronic device no longer displays the first prompt information.
  • the above method further includes: displaying second prompt information for prompting the user to adjust the face orientation.
  • the above-mentioned first prompt information and/or the second prompt information are generated according to the second human face image.
  • the field angle of the first camera is larger than the field angle of the second camera.
  • The above-mentioned first camera is either a cat-eye camera or a fish-eye camera, and the second camera is any one of a TOF camera, a structured-light camera, a binocular stereo imaging camera, a depth camera, or an infrared camera; the second camera is used to collect the depth information of the image.
  • the foregoing second electronic device is a mobile phone.
  • An embodiment of the present application provides a method for displaying a face image, applied to a first electronic device.
  • The first electronic device includes a first camera and a second camera, and there is an imaging mapping relationship between the first camera and the second camera. The method includes:
  • acquiring the first face image collected by the first camera at the first moment, wherein the face in the first face image is located at the first position of the first face image, and the first face image is used for transformation according to the imaging mapping relationship to obtain a second face image; the position of the face in the second face image is the same as the position of the face in the image collected by the second camera at the first moment, and the second face image is used to be displayed on a second electronic device external to the first electronic device.
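The first-device method above amounts to a three-step pipeline: capture, transform, send. A minimal sketch follows, with hypothetical stand-in callables for the camera, mapping, and transport interfaces (none of these names come from the patent):

```python
def first_device_flow(capture_first_camera, apply_imaging_map, send_to_peer):
    """Sketch of the first electronic device's role: the first camera captures
    the first face image at the first moment, the stored imaging mapping
    relationship transforms it into the second camera's image plane, and the
    result is sent to the second electronic device for display."""
    first_face_image = capture_first_camera()          # first camera, first moment
    second_face_image = apply_imaging_map(first_face_image)  # apply the mapping
    send_to_peer(second_face_image)                    # to the second device
    return second_face_image
```

In the alternative deployments described earlier, `apply_imaging_map` would instead run on the second electronic device, with the first device sending the untransformed preview frame plus the mapping.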
  • the imaging mapping relationship is a mapping relationship between the first camera image coordinate system and the second camera image coordinate system
  • the first camera image coordinate system is a two-dimensional coordinate system associated with the first camera
  • the second camera image coordinate system is a two-dimensional coordinate system associated with the second camera
  • the above imaging mapping relationship is a preset parameter, or,
  • the imaging mapping relationship is determined by the first electronic device according to the first mapping relationship, the second mapping relationship and the third mapping relationship,
  • the first mapping relationship is the mapping relationship between the first camera space coordinate system and the second camera space coordinate system
  • the first camera space coordinate system is a three-dimensional coordinate system associated with the first camera
  • the second camera space coordinate system is the 3D coordinate system associated with the second camera
  • the second mapping relationship is the mapping relationship between the first camera space coordinate system and the first camera image coordinate system
  • the third mapping relationship is a mapping relationship between the second camera space coordinate system and the second camera image coordinate system.
  • the foregoing first electronic device obtains the imaging mapping relationship through an application program associated with the first electronic device.
  • the angle of view of the first camera is larger than the angle of view of the second camera.
  • The above-mentioned first camera is either a cat-eye camera or a fish-eye camera, and the second camera is any one of a TOF camera, a structured-light camera, a binocular stereo imaging camera, a depth camera, or an infrared camera; the second camera is used to collect the depth information of the image.
  • the foregoing first electronic device is a smart door lock.
  • An embodiment of the present application provides a computer-readable storage medium on which instructions are stored; when the instructions are executed on electronic equipment, the electronic equipment executes any one of the methods in the second aspect and its possible implementations, or any one of the methods in the third aspect and its possible implementations.
  • An embodiment of the present application provides a computer program product that includes instructions used to implement any one of the methods in the second aspect and its possible implementations, or any one of the methods in the third aspect and its possible implementations.
  • the embodiment of the present application provides a chip device, and the chip device includes:
  • a processor configured to execute a computer-executable program, so that the device equipped with the chip device executes any one of the methods in the second aspect and its possible implementations, or any one of the methods in the third aspect and its possible implementations.
  • the embodiment of the present application provides an electronic device, including:
  • a memory for storing instructions to be executed by one or more processors of the electronic device; and
  • a processor; when the instructions are executed by the one or more processors, the processor is used to execute any one of the methods in the second aspect and its possible implementations, or any one of the methods in the third aspect and its possible implementations.
  • Fig. 1 shows a usage scene diagram of a face recognition smart door lock according to some embodiments of the present application
  • Fig. 2(a) to Fig. 2(f) show cases, in some embodiments, in which the position of the face in the preview interface displayed by the electronic device is inconsistent with the position of the face in the face image captured by the smart door lock for face recognition;
  • FIG. 3 shows a flowchart of an electronic device prompting the user to adjust the position of a face through voice, text, and a simulated picture of a face in some embodiments;
  • Figure 4(a) to Figure 4(i) show some user interface diagrams in which the electronic device prompts the user to adjust the position of the face in the technical solution shown in Figure 3 according to some embodiments of the present application;
  • Fig. 5 shows a flowchart of an electronic device prompting a user to adjust the position of a face by generating a schematic diagram that can simulate a face image in some embodiments;
  • Fig. 6 shows a structural block diagram of an intelligent door lock according to some embodiments of the present application
  • Fig. 7 shows a structural block diagram of a system composed of electronic devices and smart door locks according to some embodiments of the present application
  • Fig. 8 shows a flowchart of interaction between a mobile phone and a smart door lock according to some embodiments of the present application
  • Figure 9(a) to Figure 9(i) show user interface diagrams of some mobile phones involved in the flowchart shown in Figure 8 according to some embodiments of the present application;
  • Figure 10(a) to Figure 10(c) show that, after the mobile phone and the smart door lock implement the technical solution shown in Figure 8, the position of the face in the preview interface displayed on the mobile phone is consistent with the position of the face in the face image captured by the smart door lock for face recognition;
  • Fig. 11(a) shows a processing flow in which a mobile phone maps a face image, captured by the cat-eye camera and received from the smart door lock, into the TOF camera coordinate system;
  • Fig. 11(b) shows, according to some embodiments of the present application, another processing flow in which a mobile phone maps a face image, captured by the cat-eye camera and received from the smart door lock, into the TOF camera coordinate system;
  • Fig. 12 shows another flow chart of interaction between a mobile phone and a smart door lock according to some embodiments of the present application
  • Fig. 13 shows a structural block diagram of a mobile phone according to some embodiments of the present application.
  • the illustrative embodiments of the present application include but are not limited to a method for displaying a face image, a readable storage medium, a program product, and an electronic device.
  • TOF camera: a camera using Time-of-Flight (TOF) technology.
  • The TOF camera has a small field of view and can obtain depth information when shooting an image. Even though the photosensitive chip of the TOF camera ultimately images a two-dimensional picture, because the depth information of each image element can be obtained when the TOF camera shoots, the image taken by a TOF camera is generally called a three-dimensional (3D) image.
  • The images captured by the TOF camera are black-and-white images, and according to relevant regulations, the original black-and-white images captured by the TOF camera must be processed and stored locally and are not allowed to be sent over the network to devices other than the device where the TOF camera is located.
  • Cat-eye camera: the image captured by a cat-eye camera is a two-dimensional color image; the cat-eye camera has a large field of view, and the captured image is distorted.
  • With the development of smart terminal technology, smart terminal devices are being applied more and more widely. With a typical smart door lock, users do not need a traditional key; they can unlock directly via a mobile phone app, face recognition, fingerprint recognition, etc., which is simple and intelligent to operate. Moreover, because smart door locks have better anti-theft performance, they are increasingly used in homes and production settings. For smart door locks that support face recognition, limited by cost and the pursuit of a clean appearance, usually no display screen is installed on the lock; instead, voice, lights, etc. prompt the user to adjust the face position. Although voice and lighting can assist the user in adjusting the face position, the user cannot intuitively see the position of his or her face, and the user experience is poor.
  • In some solutions, the electronic device connected to the smart door lock usually displays a schematic diagram or a simulated face in real time instead of a real face image, which also affects the user experience.
  • Fig. 1 shows a usage scene diagram of a face recognition smart door lock according to some embodiments of the present application; the scene includes the electronic device 100 and the smart door lock 200.
  • the smart door lock 200 includes a first camera 201 and a second camera 202 .
• the first camera 201 is used to collect 2D images that do not contain depth information. Since the 2D images do not involve the user's biometric password, they can be sent by the smart door lock 200 to other external electronic devices and displayed by those electronic devices as preview images.
  • users can remotely monitor the situation outside their home through other electronic devices.
• the user can also check the situation outside the door through the first camera 201.
  • the user can check who is knocking outside the door through the first camera 201, or check whether there are strangers staying outside the door.
  • the images captured by the first camera 201 need to cover as large a range as possible. Therefore, the first camera 201 is usually a camera with a larger field of view, such as a cat-eye camera, a fish-eye camera, and the like.
• the second camera 202 is used to collect 3D images containing depth information, and the 3D images involve the user's biometric password. Therefore, the images captured by the second camera 202 are generally not allowed to be released and can only be processed and stored locally on the smart door lock 200, to prevent leakage of user privacy. In addition, when a 3D image containing depth information is used for face recognition, more accurate face features can be extracted from the 3D image, which helps to improve the accuracy of face recognition. Therefore, for the accuracy of face recognition, the smart door lock 200 generally needs to be equipped with a second camera 202.
  • the second camera 202 is a camera with a small field of view, such as a TOF camera, a structured light camera, a binocular stereo imaging camera, a depth camera, an infrared camera, and the like.
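The distortion contrast between the wide-FOV first camera and the narrow-FOV second camera can be illustrated with a simple polynomial radial distortion model; the coefficients below are invented for illustration and do not come from this application.

```python
# Sketch of a polynomial radial distortion model (illustrative coefficients):
# r_d = r * (1 + k1*r^2 + k2*r^4). A wide-FOV camera such as a cat's eye or
# fish-eye camera has a large |k1|, so points far from the image center are
# displaced noticeably; a narrow-FOV camera's distortion is much smaller.
def distort_radius(r: float, k1: float, k2: float = 0.0) -> float:
    return r * (1.0 + k1 * r**2 + k2 * r**4)

# A point at normalized radius 0.8 from the image center:
wide = distort_radius(0.8, k1=-0.30)    # strong barrel distortion
narrow = distort_radius(0.8, k1=-0.02)  # mild distortion
# The wide-angle camera displaces the point far more than the narrow one.
```

This is why the preview image from the first camera is distorted while the face recognition image from the second camera is only slightly deformed.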
  • a smart door lock application is installed on the electronic device 100 .
• the user aligns the face with the first camera 201 and the second camera 202 while holding the electronic device 100, and operates the smart door lock application installed on the electronic device 100 so that the electronic device 100 and the smart door lock 200 establish a pairing connection.
  • the electronic device 100 is paired and connected with the smart door lock 200 through communication methods such as Wireless-Fidelity (WIFI), Bluetooth (BT), and Near Field Communication (NFC).
• the smart door lock 200 and the electronic device 100 execute the face entry method provided in this application. The first camera 201 and the second camera 202 of the smart door lock 200 respectively take pictures of the user's face. Then the smart door lock 200 sends the face image taken by the first camera 201 to the electronic device 100 for face preview; the smart door lock 200 performs feature extraction on the face image taken by the second camera 202, evaluates the image quality, and recognizes faces in the images.
• the first camera 201 used to capture preview images is usually also used to capture monitoring images around the smart door lock 200. In order to monitor the surrounding environment of the smart door lock 200 over a wider range, the field of view FOV1 of the first camera 201 is usually relatively large, so the image captured by the first camera 201 has relatively large distortion.
• the face recognition image taken by the second camera 202 needs to have only slight distortion, and the proportion of the face in the image needs to be relatively high. Capturing as much information about the user's face as possible requires the second camera 202 to be able to obtain the depth information of the user's face. Therefore, the second camera 202 needs a smaller field of view FOV2, and the second camera 202 needs the function of obtaining the depth information of the user's face.
• since the second camera 202 can obtain the user's facial depth information, which serves as a biometric password of the human body, the image captured by the second camera 202 is generally not allowed to be sent over the network to devices other than the smart door lock 200; that is, it may only be processed and stored locally on the smart door lock 200.
• the smart door lock 200 can send the image captured by the first camera 201 to the electronic device 100 via the network, but cannot send the image captured by the second camera 202 to the electronic device 100 via the network. Therefore, the only image the electronic device 100 can acquire for display as a face registration preview image is the image captured by the first camera 201. As a result, the position of the face in the preview image displayed by the electronic device 100 differs from the position of the face in the image captured by the second camera 202 for face recognition.
  • the user's face appears within the field of view of the first camera 201 but not within the field of view of the second camera 202.
• the image 103a captured by the first camera 201 includes a human face 106a located at the lower left of the image 103a; similarly, the preview picture 104 displayed in the face preview interface of the electronic device 100 also includes a human face 107a, and since the preview picture 104 comes from the image captured by the first camera 201, the human face 107a is also located at the lower left of the preview picture 104. However, because the installation positions and fields of view of the first camera 201 and the second camera 202 differ, even if the image captured by the first camera 201 includes a human face, the image captured by the second camera 202 may not. For example, when the image 105a taken by the second camera 202 does not contain the user's facial features, feature extraction cannot extract the user's facial features, resulting in unsuccessful face entry.
  • the second camera 202 can fully capture the facial features of the user.
  • the face 106b in the image 103b captured by the first camera 201 is located at the center of the image 103b.
  • the face 107b in the preview picture 104 displayed on the face preview interface of the electronic device 100 shown in FIG. 2( e ) is also at the center of the preview picture 104 .
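The field-of-view mismatch described above — a face visible to the first camera 201 but missing from the second camera 202 — reduces to a simple angular check. A minimal sketch follows; the FOV values and face position are hypothetical, not values from this application.

```python
import math

# Sketch: whether a point at (x, z) in a camera's coordinate system
# (z forward, x sideways, in meters) lies inside the horizontal field of view.
def in_horizontal_fov(x: float, z: float, fov_deg: float) -> bool:
    angle = math.degrees(math.atan2(abs(x), z))
    return angle <= fov_deg / 2.0

# A face 0.5 m in front of the lock and 0.4 m to the side (hypothetical):
# visible to a wide-FOV first camera (FOV1 = 150 degrees) ...
assert in_horizontal_fov(0.4, 0.5, 150.0)
# ... but outside a narrow-FOV second camera (FOV2 = 60 degrees).
assert not in_horizontal_fov(0.4, 0.5, 60.0)
```

Only when the face moves toward the optical axis of the second camera does it enter both fields of view at once.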
• in order to remind the user in real time to adjust the face into the shooting range of the second camera 202, the smart door lock 200 causes the external electronic device 100, through steps 300 to 306 as shown in FIG. 3, to generate voice, text, and picture prompts to remind the user to adjust the face position. Specifically, as shown in Figure 3, the process includes:
  • Step 300 the electronic device 100 sends a face entry instruction to the smart door lock 200 .
• the electronic device 100 detects that the user clicks the face recognition icon 121 on the desktop of the electronic device 100, and then sends a face entry instruction to the smart door lock 200.
• Step 301 The smart door lock 200 responds to the face entry instruction by capturing face images. For example, the first camera 201 and the second camera 202 start their shooting functions at the same time.
  • Step 302 the smart door lock 200 sends the captured face image to the electronic device 100 .
  • the smart door lock 200 sends the image captured by the first camera 201 to the electronic device 100 .
  • Step 303 The electronic device 100 generates prompt information according to the received face image, prompting the user to adjust the face to an optimal position.
  • the electronic device 100 generates prompt information such as voice, text, and picture according to the received face image, and displays it on the user interface of the electronic device 100 to prompt the user to adjust the position of the face.
  • the picture is not a real picture of the face, but a picture that can indicate the current position of the face, for example, a stick figure, a cartoon image or a schematic diagram of a dynamic face.
• the interface of the electronic device 100 as shown in FIG. 4(b) displays a text prompt of "Please stand half a meter from the lock, face the lock camera, and perform face recording according to the voice prompts".
  • a user interface as shown in FIG. 4(c) or FIG. 4(d) appears.
  • Both the interfaces shown in FIG. 4(c) and FIG. 4(d) include a preview frame 123 with the above pictures and corresponding text prompts.
  • the above text prompts may be, for example, text prompts 124a of "please move your face to the left” shown in FIG. 4(c) and text prompts 124b of "please move your face to the right” shown in FIG. 4(d).
• the user moves the face according to the prompt information in FIG. 4(c) and FIG. 4(d), adjusting the face to the center of the field of view.
  • Step 304 The smart door lock 200 collects the face image at the best position, and generates a notification message that the face entry is successful.
  • the optimal position may be the position where the center of the field of view of the second camera 202 is located.
  • the face image at the best position collected by the smart door lock 200 may be the face image collected by the first camera 201 of the smart door lock 200 when the user's face is located in the center of the field of view of the second camera 202 .
• the user interface of the electronic device 100 can also generate prompts for collecting images of the user's face from different angles.
• for example, the text prompt 124e of "please turn your face to the right" shown in Figure 4(g) and the text prompt 124f of "please turn your face to the left" shown in Figure 4(h).
  • the above-mentioned preview frame 123 can also remind the user of the progress of face entry.
• when the user completes the action of raising the head, the upper arc among the four arc figures around the preview frame 123 can change color or flicker to remind the user that the face for the head-raising action has been entered; when the user completes the action of lowering the head according to the text prompt 124d, the arc at the bottom of the four arc figures around the preview frame 123 can change color or flicker to remind the user that the face for the head-lowering action has been entered, and so on.
• the second camera 202 of the smart door lock 200 respectively collects the corresponding face images of the user at the aforementioned optimal position and when the user raises the head, lowers the head, turns the face to the left, and turns the face to the right.
  • the collected face images are subjected to feature extraction and image quality evaluation.
• the smart door lock 200 determines that the feature information of each angle of the face in these face images collected by the second camera 202 meets the preset requirements, and generates a notification message that the face registration is successful.
  • Step 305 the smart door lock 200 sends a notification message to the electronic device 100 that the face registration is successful.
• Step 306 The electronic device 100 generates a prompt message of successful face registration according to the received notification message of successful face registration, for example, the "operation is successful" text and the corresponding prompt 125 displayed in the interface of the electronic device 100 shown in Figure 4(i). When the user clicks the "Complete" control 124e shown in Figure 4(i), the user has completed face entry.
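The step 300-306 exchange above can be sketched as a plain request/notify sequence. The sketch below is illustrative only — the function names, message strings, and the direction of each movement prompt are assumptions, not part of this application.

```python
# Illustrative sketch of the step 300-306 exchange (not the actual protocol).
def lock_handle_instruction(instruction: str) -> str:
    # Step 301: the lock starts capturing in response to the entry instruction.
    if instruction == "face_entry":
        return "face_image_from_first_camera"  # step 302: sent to the phone
    raise ValueError("unknown instruction")

def phone_prompt(face_position: str) -> str:
    # Step 303: the phone turns the detected face position into a text prompt.
    prompts = {
        "left": "please move your face to the right",
        "right": "please move your face to the left",
        "center": "hold still",
    }
    return prompts[face_position]

image = lock_handle_instruction("face_entry")  # steps 300-302
prompt = phone_prompt("left")                  # step 303
```

Steps 304-306 then run in the opposite direction: the lock notifies the phone of success and the phone displays the final prompt.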
• in the above solution, the external electronic device 100 of the smart door lock 200 generates prompt information such as voice, text, and pictures based on the face image captured by the first camera 201 of the smart door lock 200. Because the installation positions and fields of view of the first camera 201 and the second camera 202 of the smart door lock 200 differ, the prompt information generated by the electronic device 100 connected to the smart door lock 200 is inaccurate, which affects the efficiency of face registration.
• the picture displayed by the electronic device 100 connected to the smart door lock 200 is not a real picture of the face, but a picture that can indicate the current position of the face, such as a stick figure, cartoon image, or schematic diagram of a dynamic face. It therefore cannot provide users with the experience of seeing a real face during the face registration process.
• in some other implementations, in order to remind the user in real time to adjust the face into the shooting range of the second camera 202, the smart door lock 200 causes the external electronic device 100, through steps 500 to 508 as shown in FIG. 5, to remind the user to adjust the face position by means of a simulated schematic diagram of the human face. Specifically, as shown in Figure 5, the process includes:
  • Step 500 the electronic device 100 sends a face entry command to the smart door lock 200 to control the smart door lock 200 to start the face entry process.
  • Step 501 The smart door lock 200 responds to a face entry command to capture a face image, for example, the first camera 201 and the second camera 202 simultaneously enable the shooting function.
  • Step 502 The smart door lock 200 performs quality assessment and facial feature information extraction on the captured face image.
• the smart door lock 200 extracts the key feature point information of the face from the image captured by the first camera 201 using a feature extraction algorithm, for example, extracting feature information of the person's eyes, mouth, nose, eyebrows, etc.
  • Step 503 the smart door lock 200 sends the extracted facial feature information to the electronic device 100 .
  • Step 504 The electronic device 100 draws a schematic diagram of a human face image.
• the electronic device 100 performs three-dimensional image modeling according to the facial feature information in the image captured by the first camera 201, received from the smart door lock 200, to simulate a schematic diagram of the human face;
  • Step 505 The electronic device 100 prompts the user to adjust the face to the best position
  • Step 506 The smart door lock 200 collects the face image in the best position
  • Step 507 the smart door lock 200 sends a message that the face registration is successful to the electronic device 100;
  • Step 508 The electronic device 100 prompts that the entry is successful.
  • the electronic device 100 draws a schematic diagram of a face image based on the image captured by the first camera 201 of the smart door lock 200 , and then prompts the user to adjust the position of the face.
• the face preview image displayed by the external electronic device 100 connected to the smart door lock 200 still cannot accurately reflect the position of the human face in the field of view of the second camera 202, which affects the accuracy of face recognition.
  • the simulated schematic diagram of the human face displayed by the electronic device 100 externally connected to the smart door lock 200 is not a real picture of the human face, and it is still impossible to provide users with information that can be seen during the face entry process. User experience with real faces.
• this application provides a technical solution in which a preview image of a real human face can be directly displayed on the electronic device 100 connected to the smart door lock 200. A real face preview image is more intuitive than a stick figure or a simulated schematic diagram of a human face, and the position, angle, expression, etc. of the human face reflected in the face preview image displayed by the electronic device 100 can be consistent with the face in the image used for face recognition captured by the second camera 202 of the smart door lock 200.
• the electronic device 100 shown in FIG. 1 can convert the image captured by the first camera 201, received from the smart door lock 200, into the coordinate system of the second camera 202 through the preset image processing method provided by the embodiment of the present application. In this way, the face preview image displayed by the electronic device 100 can accurately reflect the position, angle, expression, etc. of the face in the field of view of the second camera 202, so that the user can adjust the face position, angle, expression, etc. in time, and the second camera 202 of the smart door lock 200 can quickly capture effective face images, for example, from multiple angles.
• the electronic device 100 should not block the first camera 201, the second camera 202, or the user's face.
  • the electronic device 100 applicable to the embodiment of the present application may be various portable terminal devices with display and image processing functions, such as mobile phones, wristbands, watches, tablet computers and the like.
  • the technical solution of the present application will be described in detail below by taking the first camera 201 as a cat-eye camera and the second camera 202 as a TOF camera as an example.
  • Fig. 6 shows a block diagram of a hardware structure of a smart door lock 200 according to some embodiments of the present application.
  • the smart door lock 200 includes a processor 204 , a memory 205 , a peephole camera 201 , a TOF camera 202 , a power supply 206 , a communication module 207 , a sensor module 209 and an audio module 210 .
• the processor 204 may include one or more processing units, for example, processing modules or processing circuits such as a central processing unit CPU (Central Processing Unit), an image processor GPU (Graphics Processing Unit), a digital signal processor DSP (Digital Signal Processor), a microcontroller MCU (Micro-programmed Control Unit), an AI (Artificial Intelligence) processor, or a programmable logic device FPGA (Field Programmable Gate Array). Different processing units may be independent devices or may be integrated in one or more processors.
  • the processor 204 is configured to evaluate the quality of the face image captured by the TOF camera 202, for example, evaluate the clarity of the face image and whether facial features in the face image are complete.
  • the processor 204 is configured to extract face feature information from the face image captured by the TOF camera 202, and perform face recognition based on the extracted face feature information, so that when the user's face is recognized In this case, unlock the lock for the user to allow the user to enter the room.
  • the processor 204 is further configured to generate a successful face registration message and send it to the external electronic device 100 through the communication module 207 when it is determined that the face registration is successful.
  • Memory 205 is used to store software program and data, can be volatile memory (Volatile Memory), such as random access memory (Random-Access Memory, RAM), double data rate synchronous dynamic random access memory (Double Data Rate Synchronous Dynamic Random Access Memory, DDR SDRAM), can also be non-volatile memory (Non-Volatile Memory), such as read-only memory (Read-Only Memory, ROM), electrically erasable programmable read-only memory (Electrically Erasable Programmable read only memory, EEPROM), flash memory (Flash Memory), hard disk (Hard Disk Drive, HDD) or solid-state drive (Solid-State Drive, SSD).
  • the processor 204 executes various functional applications and data processing of the smart door lock 200 by running software programs and data stored in the memory 205 .
  • the memory 205 may store photos taken by various cameras of the smart door lock 200 , extracted face feature information, unlock history records, and the like.
  • the cat's eye camera 201 is used to capture face images for preview.
  • the TOF camera 202 is used to capture face images for face recognition.
  • the sensor module 209 is used to obtain the status of the user or the smart door lock 200, and may include a pressure sensor, an infrared sensor, and the like.
  • the communication module 207 can be used for the smart door lock 200 to communicate with other electronic devices.
• the smart door lock 200 can establish a communication connection with the electronic device 100 through WiFi, Bluetooth, NFC, and other communication methods. Through the communication module 207, the smart door lock 200 can receive instructions sent by the electronic device 100, send the face image taken by the peephole camera 201 to the electronic device 100, and, when it is determined that face entry is successful, send a face entry success message to the electronic device 100.
  • the audio module 210 is used for converting digital audio information into analog audio signal output, or converting analog audio input into digital audio signal.
  • the audio module 210 may also be used to encode and decode audio signals.
  • the audio module 210 can be used to play voice prompts to the user.
  • the power supply 206 is used to supply power to various components of the smart door lock 200 .
  • power source 206 includes a battery.
  • the structure of the smart door lock 200 shown in FIG. 6 is only an example, and in other embodiments, the smart door lock 200 may also include more or fewer modules, and may also combine or split some modules, The embodiment of this application is not limited.
  • FIG. 7 shows a layered system architecture diagram of a smart door lock 200 and an electronic device 100 capable of implementing the technical solution of the present application.
  • the smart door lock 200 includes a processing module 212 , a transmission module 211 , a peephole camera 201 , a TOF camera 202 and the like.
  • the processing module 212 is used to generate control instructions, control the cat's eye camera 201 and the TOF camera 202 to collect face images, and perform quality assessment, face feature information extraction and face recognition on the face images collected by the TOF camera 202 . For example, evaluate the clarity of the face image and whether the facial features in the face image are complete.
• the processing module 212 can extract the feature information of the face from the face image captured by the TOF camera 202, and then compare the extracted feature information with the facial feature information stored in advance by the smart door lock 200, so that when the comparison succeeds, the smart door lock 200 is controlled to unlock and allow the user to enter the room.
  • the cat's eye camera 201 is used to capture a human face image for preview, and the human face image is a two-dimensional image, wherein the distortion of the human face is relatively large.
  • the TOF camera 202 is used to take a face image for face recognition, which is a black and white three-dimensional image including depth information of the face.
  • the transmission module 211 is configured to respond to the image sending instruction of the processing module 212 , and send the human face image for preview captured by the cat-eye camera 201 to the electronic device 100 .
  • the transmission module 211 may also be configured to send a message that the face registration is successful to the electronic device 100 in response to a message sending instruction of the processing module 212 when the processing module 212 determines that the face registration is successful.
  • the transmission module 211 may also be configured to receive an instruction to start a face entry process from the electronic device 100 .
  • the electronic device 100 includes a processing module 116 , a transmission module 115 , a display screen 102 , a microphone 132 , and a smart door lock application 128 , a camera application 129 , a video application 131 , and a call application 114 .
• the processing module 116 is used to process the face image for preview received from the smart door lock 200 through a preset image processing method, converting the preview face image into the coordinate system of the image taken by the TOF camera 202 of the smart door lock 200, so that the processed preview face image can accurately reflect the posture, position, and other information of the face in the field of view of the TOF camera 202 of the smart door lock 200.
  • the processing module 116 is also configured to generate an instruction to initiate a face entry process when detecting the user's operation on the smart door lock application 128 of the electronic device 100 , and send the instruction to the smart door lock 200 through the transmission module 115 .
  • the transmission module 115 is used for receiving from the smart door lock 200 a face image for preview, a message of successful face entry, and sending a face entry instruction to the smart door lock 200 .
  • the display screen 102 is used to display the face preview interface of the smart door lock application 128, prompt information, user interfaces of other application programs, and the like.
  • the microphone 132 is used to play voice prompt information to assist the user in adjusting the position, angle, expression, etc. of the face.
  • FIG. 7 is a schematic system structure and does not constitute a specific limitation on the smart door lock 200 and the electronic device 100 that can implement the technical solution of the present application.
• the smart door lock 200 and the electronic device 100 may include more or fewer components than shown in FIG. 7, or combine some components, or split some components, or use a different arrangement of components.
  • the components shown in FIG. 7 can be implemented in hardware, software, or a combination of software and hardware.
• the interaction process between the smart door lock 200 and the mobile phone 100 includes the following steps:
  • Step 800 the mobile phone 100 sends a face entry instruction to the smart door lock 200 .
  • the smart door lock application 128 of the mobile phone 100 generates an interruption message in response to a user operation. It can be understood that the interrupt message here is generated by the smart door lock application 128 of the mobile phone 100 to request the processing module 116 of the mobile phone 100 to generate a control command to enable the smart door lock 200 to enable the face entry function.
• the user clicks the icon of the smart door lock application 128, and the mobile phone 100, in response to the user's click operation, opens the smart door lock application 128 and displays the interface shown in FIG. 9(b).
  • the login interface of the smart door lock application 128 shown in FIG. 9( b ) includes an “account” input box 111 , a “password” input box 112 and a “login” button 113 .
  • the user enters the account number in the "account number” input box 111, the password in the "password” input box 112, and clicks the "login” button 113, and the mobile phone enters the smart door lock application 128 as shown in Figure 9 (c).
• in the user interface of the smart door lock application 128 shown in FIG. 9(c), the user clicks the face preview icon 1281 to trigger the smart door lock application 128 to generate an interrupt message, so that the mobile phone 100 generates, based on the interrupt message, an instruction for the smart door lock 200 to start face registration.
  • the smart door lock application 128 of the mobile phone 100 sends the generated interrupt message to the processing module 116 of the mobile phone 100, and the processing module 116 of the mobile phone 100 generates a face entry instruction based on the received interrupt message, and the instruction is used to make the smart door lock 200 Turn on the face recording function.
  • the processing module 116 of the mobile phone 100 sends the generated face entry instruction to the smart door lock 200 via the transmission module 115, wherein the transmission module 115 can be a WIFI module, a Bluetooth module, or the like.
  • the processing module 116 of the mobile phone 100 sends the generated face entry instruction to the transmission module 211 of the smart door lock 200 via the transmission module 115 of the mobile phone 100 .
  • Step 801 After receiving the face input instruction, the smart door lock 200 controls the cat's eye camera 201 to collect a face image.
  • the transmission module 211 of the smart door lock 200 sends the received face entry instruction to the processing module 212 of the smart door lock 200 .
• the processing module 212 of the smart door lock 200, after receiving the face entry instruction, generates in response a control instruction for controlling the cat's eye camera 201 to collect images.
  • the processing module 212 of the smart door lock 200 generates a control instruction for simultaneously controlling the cat's eye camera 201 and the TOF camera 202 to capture images in response to the received face entry instruction.
• the processing module 212 of the smart door lock 200 can also, in response to the received face entry instruction, generate control instructions that first control the cat's eye camera 201 to capture an image for preview and then, after the user has adjusted the face to the best position and posture, control the TOF camera 202 to collect images used for face recognition.
  • the processing module 212 of the smart door lock 200 sends the generated control instruction for capturing images by the peephole camera 201 to the peephole camera 201 of the smart door lock 200 .
  • the cat's eye camera 201 of the smart door lock 200 collects face images in response to the received control instructions. For example, the cat-eye camera 201 collects face images at a speed of 30 frames per second.
  • Step 802 the smart door lock 200 sends the face image collected by the cat's eye camera 201 to the mobile phone 100 .
  • the cat's eye camera 201 of the smart door lock 200 sends the collected face image to the transmission module 115 of the mobile phone 100 through the transmission module 211 of the smart door lock 200, and the transmission module 115 of the mobile phone 100 sends the received face image to The processing module 116 of the mobile phone 100 .
• Step 803 The mobile phone 100 transforms the received face image collected by the cat's eye camera 201 into the TOF image coordinate system according to a preset image processing method, and obtains a face image to be used as the preview image.
  • the face 118 a is at the edge of the lower left corner of the face image 117 a.
• the mobile phone converts the face image 117a taken by the cat's eye camera 201, received from the smart door lock 200, into the TOF image coordinate system according to the preset image processing method, and displays the result in the face preview interface of the mobile phone 100 as shown in Figure 10(b). The face image 118c in the face preview picture 117c is at the center of the face preview picture 117c, which is the same location as the face 118b in the image 117b captured by the TOF camera shown in Figure 10(c).
• the face image collected by the cat's eye camera 201 for preview can accurately reflect the position of the face in the image captured by the TOF camera for face recognition, so that the user can adjust the face position, angle, expression, etc. in time through the face preview interface of the mobile phone 100. The TOF camera 202 of the smart door lock 200 can then quickly capture effective face images (that is, face images including a complete face from multiple angles), improving the efficiency and accuracy of face entry and the user experience.
  • the preset image processing method will be described in detail below.
  • the above-mentioned TOF image coordinate system can be a plane Cartesian coordinate system with the center position of the photosensitive chip of the TOF camera as the coordinate origin.
• Step 804: The mobile phone 100 displays the face preview image and prompts the user to adjust the position of the face.
• The processing module 116 of the mobile phone 100 converts the received face image captured by the cat's eye camera 201 into the TOF image coordinate system according to the preset image processing method and, after obtaining the face image to be used as the preview image, generates an instruction to display the preview image and sends it to the smart door lock application 128 of the mobile phone 100.
• In response to the received instruction to display the preview image, the smart door lock application 128 of the mobile phone 100 displays the face preview image and prompts the user to adjust the position of the face.
  • the mobile phone 100 displays a preview interface as shown in FIG. 9( d ), which includes a face preview screen 104 , a face image 109 in the face preview screen 104 , and prompt information 119 below the face preview screen 104 .
• The face image 109 in the face preview screen 104 is the face image captured by the cat's eye camera 201 after transformation into the TOF image coordinate system.
• The prompt information 119 may be a text prompt for prompting the user to adjust the face to the optimal position. For example, the prompt information 119 may be a text prompt such as "Please move your face to the left" or "Please move your face to the right". After the user moves the face according to the prompt information in Figure 4(c) and Figure 4(d), the face is adjusted to the optimal position; for example, the user adjusts the face to the center of the face preview image displayed by the mobile phone 100, or close to the center of that preview image.
• Because the face preview image displayed by the mobile phone 100 is the face image captured by the cat's eye camera 201 converted into the TOF image coordinate system, when the face in the face preview image displayed by the mobile phone 100 is at the center of the preview image, the face is also located at the center of the field of view of the TOF camera 202 of the smart door lock 200.
• When the mobile phone 100 determines that the position of the face is close to the center of the displayed face preview image, the mobile phone 100 no longer displays the prompt information prompting the user to move the face.
• Step 805: The mobile phone 100 sends an instruction to the smart door lock 200 to capture a 3D face image.
• Specifically, the processing module 116 of the mobile phone 100 performs feature extraction on the image captured by the cat's eye camera 201 after it has been converted into the TOF image coordinate system.
• When the processing module 116 determines that the face is at the center of the converted image, it determines that the user has adjusted the face to the optimal position and generates an instruction for capturing a 3D face image.
• At this point the face is at the optimal position within the field of view of the TOF camera 202, and the 3D face image captured by the TOF camera 202 at the optimal position can provide effective facial feature information for effective face recognition.
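The center-position check described above could be implemented, for example, as a tolerance test on a detected face bounding box. The box format, the 10% tolerance, and the function name below are illustrative assumptions, not details from the patent:

```python
def face_near_center(face_box, image_size, tolerance=0.1):
    """Return True if the face bounding box is centered in the image.

    face_box: (x, y, w, h) in pixels; image_size: (width, height).
    tolerance: allowed offset of the face center from the image center,
    as a fraction of each image dimension.
    """
    x, y, w, h = face_box
    img_w, img_h = image_size
    face_cx, face_cy = x + w / 2, y + h / 2
    # Offset of the face center from the image center, normalized per axis.
    dx = abs(face_cx - img_w / 2) / img_w
    dy = abs(face_cy - img_h / 2) / img_h
    return dx <= tolerance and dy <= tolerance

# A box whose center coincides with the image center passes the check.
centered = face_near_center((270, 190, 100, 100), (640, 480))
# A box in the top-left corner does not.
off_center = face_near_center((0, 0, 100, 100), (640, 480))
```

When the check passes, the phone would stop showing movement prompts and issue the 3D-capture instruction.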
  • the processing module 116 of the mobile phone 100 sends the generated instruction of capturing a 3D face image to the smart door lock 200 .
• Specifically, the mobile phone 100 sends the instruction for capturing 3D face images generated by the processing module 116 to the transmission module 211 of the smart door lock 200 via the transmission module 115 of the mobile phone, and the transmission module 211 of the smart door lock 200 then forwards the received instruction for capturing 3D face images to the processing module 212 of the smart door lock 200.
• In response to the received instruction for capturing 3D face images, the processing module 212 of the smart door lock 200 generates a control instruction for controlling the TOF camera 202 to capture 3D face images, and sends it to the TOF camera 202 of the smart door lock 200.
• Step 806: After the smart door lock 200 receives the instruction to capture a 3D face image, it controls the TOF camera 202 to capture a 3D face image.
  • the TOF camera 202 of the smart door lock 200 collects a 3D face image in response to the received control instruction for collecting a 3D face image.
  • the preview interface of the mobile phone 100 displays the face information in real time. While displaying the face preview screen 104, the mobile phone 100 may also generate prompt information for prompting the user to adjust the orientation of the face or to adjust facial expressions and actions. For example, the mobile phone 100 generates prompt information for prompting to take face images from multiple angles of the user's front, left side, right side, looking up, looking down, opening mouth, closing mouth, and blinking.
• For example, the mobile phone 100 displays, below the face preview screen 104, the prompts "Please lower your head", "Please raise your head", "Please turn your face to the left", and "Please turn your face to the right", as shown in Figure 9(e) to Figure 9(h).
• Step 807: When the smart door lock 200 determines that the face entry is successful, it generates a notification message indicating that the face entry is successful.
• Specifically, the processing module 212 of the smart door lock 200 performs feature extraction on the 3D face images captured by the TOF camera 202 and evaluates the image quality.
• When the smart door lock 200 determines that the 3D face images captured by the TOF camera 202 contain complete facial features, such as the eyes, mouth, nose, and eyebrows, at different angles such as facing forward, looking up, and looking down, and that the ratios between these facial features meet the preset ratio thresholds, it determines that the face entry is successful and generates a notification message indicating that the face entry is successful.
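The completeness-and-ratio check could be sketched as below. The specific feature names, pose set, ratio metric, and thresholds are hypothetical placeholders; the patent only requires complete facial features at multiple angles and ratios within preset thresholds:

```python
# Hypothetical feature and pose vocabularies for illustration.
REQUIRED_FEATURES = {"eyes", "mouth", "nose", "eyebrows"}
REQUIRED_POSES = {"front", "up", "down", "left", "right"}

def entry_successful(captures, ratio_min=0.8, ratio_max=1.25):
    """Decide whether face entry succeeded.

    captures: {pose: {"features": set of detected feature names,
                      "eye_mouth_ratio": a measured inter-feature ratio}}.
    Entry succeeds only when every required pose has all required
    features and its measured ratio lies within the preset thresholds.
    """
    for pose in REQUIRED_POSES:
        cap = captures.get(pose)
        if cap is None:
            return False  # this angle was never captured
        if not REQUIRED_FEATURES <= cap["features"]:
            return False  # some facial feature is missing or occluded
        if not (ratio_min <= cap["eye_mouth_ratio"] <= ratio_max):
            return False  # feature ratio outside the preset threshold
    return True
```

On success, the lock would generate and send the "face entry successful" notification of step 808.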
• Step 808: The smart door lock 200 sends the notification message of successful face entry to the mobile phone 100.
• After generating the notification message of successful face entry, the processing module 212 of the smart door lock 200 sends it to the mobile phone 100 through the transmission module 211 of the smart door lock 200.
• Step 809: The mobile phone 100 prompts the user that the face entry is successful according to the received notification message of successful face entry.
• After the mobile phone 100 receives, via the transmission module 115, the notification message of successful face entry sent by the smart door lock 200, the processing module 116 of the mobile phone 100 generates a control instruction based on the received message and sends it to the smart door lock application 128 of the mobile phone 100, and the smart door lock application 128 of the mobile phone 100 prompts the user that the face entry is successful according to the received control instruction.
• For example, a prompt message 126 indicating that the face entry is successful is displayed, which includes a check-mark symbol and the text prompt "Face entry successful". After the user taps the "Complete" button 127, it is determined that the smart door lock 200 has completed the face entry.
• In this embodiment, the mobile phone 100 converts the face image captured by the cat's eye camera 201 and received from the smart door lock 200 into the TOF image coordinate system through the preset image processing method, so that the face preview image displayed by the mobile phone 100 accurately reflects the position, angle, expression, and so on of the face within the field of view of the TOF camera 202. The user can therefore promptly adjust the position, angle, and expression of the face through the face preview interface and prompt information of the mobile phone 100, so that the TOF camera 202 of the smart door lock 200 can quickly capture an effective face image (that is, a face image including a complete face and facial features from multiple angles), improving the efficiency and accuracy of face entry and the user experience.
• Fig. 11(a) shows the process by which the mobile phone 100 converts the image for face preview captured by the cat's eye camera 201 and received from the smart door lock 200 into the TOF image coordinate system, according to some embodiments of the present application.
  • the aforementioned preset image processing method specifically includes the following steps:
• Step 1101: The mobile phone 100 obtains the mapping relationship between the cat's eye space coordinate system and the TOF space coordinate system (denoted as the "first mapping relationship"), the mapping relationship between the cat's eye space coordinate system and the cat's eye image coordinate system (denoted as the "second mapping relationship"), and the mapping relationship between the TOF space coordinate system and the TOF image coordinate system (denoted as the "third mapping relationship").
  • the cat's eye space coordinate system refers to the three-dimensional space coordinate system with the lens center of the cat's eye camera 201 as the coordinate origin;
  • the TOF space coordinate system refers to the three-dimensional space coordinate system with the lens center of the TOF camera 202 as the coordinate origin;
• the cat's eye image coordinate system refers to the two-dimensional plane Cartesian coordinate system with the center of the photosensitive chip of the cat's eye camera 201 as the coordinate origin;
• the TOF image coordinate system refers to the two-dimensional plane Cartesian coordinate system with the center of the photosensitive chip of the TOF camera 202 as the coordinate origin.
• the three-dimensional coordinates of point A in the cat's eye space coordinate system are (x1, y1, z1);
• the two-dimensional coordinates of the corresponding pixel in the image captured by the cat's eye camera 201 are (x2, y2);
• the three-dimensional coordinates of point A in the TOF space coordinate system are (x1', y1', z1');
• the two-dimensional coordinates of the corresponding pixel in the image captured by the TOF camera 202 are (x2', y2').
• The above first mapping relationship represents the mapping relationship between the three-dimensional coordinates (x1, y1, z1) of point A in the cat's eye space coordinate system and the three-dimensional coordinates (x1', y1', z1') of point A in the TOF space coordinate system.
• The above second mapping relationship represents the mapping relationship between the three-dimensional coordinates (x1, y1, z1) of point A in the cat's eye space coordinate system and the two-dimensional coordinates (x2, y2) of the pixel corresponding to point A in the image captured by the cat's eye camera 201.
• The above third mapping relationship represents the mapping relationship between the three-dimensional coordinates (x1', y1', z1') of point A in the TOF space coordinate system and the two-dimensional coordinates (x2', y2') of the pixel corresponding to point A in the image captured by the TOF camera 202.
• The above first to third mapping relationships can be calibrated by developers using a camera calibration algorithm before the smart door lock 200 leaves the factory, and then stored in the memory of the smart door lock 200.
• When the smart door lock 200 establishes a connection with the mobile phone 100 and uses the mobile phone 100 as an external device to display the face preview image, it can send the stored first to third mapping relationships to the mobile phone 100.
• Alternatively, the mobile phone 100 obtains the first to third mapping relationships by downloading and installing the smart door lock application 128.
• For example, the developers use Zhang Zhengyou's calibration algorithm and a calibration board, such as a single-plane checkerboard, to calibrate the above first mapping relationship between the cat's eye camera 201 and the TOF camera 202, the second mapping relationship of the cat's eye camera 201 itself, and the third mapping relationship of the TOF camera 202 itself.
• For example, the first mapping relationship between the cat's eye space coordinate system and the TOF space coordinate system is shown in the following formula (1), the second mapping relationship between the cat's eye space coordinate system and the cat's eye image coordinate system is shown in the following formula (2), and the third mapping relationship between the TOF space coordinate system and the TOF image coordinate system is shown in the following formula (3):
• X_M: the three-dimensional coordinates, in the calibration board coordinate system, of a point in the cat's eye space coordinate system.
• X_T: the three-dimensional coordinates, in the calibration board coordinate system, of the point corresponding to the above X_M in the TOF space coordinate system.
• R: the rotation matrix of the TOF space coordinate system relative to the cat's eye space coordinate system, which can be obtained through a binocular calibration algorithm.
• T: the translation matrix of the TOF space coordinate system relative to the cat's eye space coordinate system, which can be obtained through a binocular calibration algorithm.
• The matrix in formula (1) is the extrinsic parameter matrix between the cat's eye camera 201 and the TOF camera 202.
• u_m, v_m: the pixel coordinates of a point in the cat's eye image coordinate system.
• R_m, T_m: the camera extrinsics of the cat's eye camera 201, namely the rotation matrix and translation matrix of the calibration board coordinate system relative to the cat's eye space coordinate system, which can be obtained through a binocular calibration algorithm.
• X_m, Y_m, Z_m: the specific coordinate values of X_M in formula (1); that is, the coordinates of X_M can be expressed as (X_m, Y_m, Z_m), representing the three-dimensional coordinates, in the calibration board coordinate system, of a point in the cat's eye space coordinate system.
• u_t, v_t: the pixel coordinates of a point in the TOF image coordinate system.
• f_tx, f_ty: the focal length parameters of the TOF camera 202; u_t0, v_t0: the coordinates of the origin of the two-dimensional pixel coordinate system of the image captured by the TOF camera 202, which can be obtained through a binocular calibration algorithm.
• R_t, T_t: the camera extrinsics of the TOF camera 202, namely the rotation matrix and translation matrix of the calibration board coordinate system relative to the TOF space coordinate system, which can be obtained through a binocular calibration algorithm.
• X_t, Y_t, Z_t: the specific coordinate values of X_T in formula (1); that is, the coordinates of X_T can be expressed as (X_t, Y_t, Z_t), representing a coordinate value in the TOF space coordinate system.
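The formula images themselves are not reproduced in this text. Based on the symbol definitions above, formula (1) plausibly has the standard rigid-body form X_T = R·X_M + T, and formulas (2) and (3) the standard pinhole-projection form u = f_x·X/Z + u_0, v = f_y·Y/Z + v_0. A sketch under those assumptions, with hypothetical calibration values:

```python
def to_tof_space(R, T, X_M):
    # Assumed form of formula (1): X_T = R @ X_M + T, mapping a 3D point
    # from the cat's eye space coordinate system into the TOF space
    # coordinate system using the rotation matrix R and translation T.
    return [sum(R[i][j] * X_M[j] for j in range(3)) + T[i] for i in range(3)]

def pinhole_project(fx, fy, u0, v0, point3d):
    # Assumed pinhole form of formulas (2)/(3):
    # u = fx * X / Z + u0, v = fy * Y / Z + v0.
    x, y, z = point3d
    return (fx * x / z + u0, fy * y / z + v0)

# Hypothetical extrinsics: identity rotation, 5 cm horizontal baseline
# between the cat's eye camera and the TOF camera.
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
T = [0.05, 0.0, 0.0]
X_T = to_tof_space(R, T, [0.0, 0.0, 0.5])  # a point 0.5 m in front of the lens
u_t, v_t = pinhole_project(500.0, 500.0, 320.0, 240.0, X_T)
```

Chaining the inverse of the cat's eye projection, the rigid transform, and the TOF projection is what yields the image-to-image mapping of formula (4) in step 1102.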
• Step 1102: Based on the obtained first mapping relationship between the cat's eye space coordinate system and the TOF space coordinate system, the second mapping relationship between the cat's eye space coordinate system and the cat's eye image coordinate system, and the third mapping relationship between the TOF space coordinate system and the TOF image coordinate system, the mobile phone 100 determines the mapping relationship between the cat's eye image coordinate system and the TOF image coordinate system (denoted as the "fourth mapping relationship").
• Specifically, according to the first mapping relationship between the cat's eye space coordinate system and the TOF space coordinate system shown in the above formula (1), the second mapping relationship between the cat's eye space coordinate system and the cat's eye image coordinate system shown in the above formula (2), and the third mapping relationship between the TOF space coordinate system and the TOF image coordinate system shown in the above formula (3), all of which the mobile phone 100 can obtain from the smart door lock 200, the fourth mapping relationship between the cat's eye image coordinate system and the TOF image coordinate system, as shown in formula (4), can be determined by jointly solving formula (1), formula (2), and formula (3):
• u_t, v_t: the pixel coordinates of a point in the TOF image coordinate system.
• u_m, v_m: the pixel coordinates of the same point in the cat's eye image coordinate system.
• t_1, t_2: the translation transformation parameters of the cat's eye image coordinate system relative to the TOF image coordinate system.
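Formula (4) is likewise not reproduced in the text; since t_1 and t_2 are described as translation parameters, the sketch below assumes a pure-translation form u_t = u_m + t_1, v_t = v_m + t_2 (a real mapping could also include a scale factor):

```python
def cat_eye_pixel_to_tof(u_m, v_m, t1, t2):
    # Assumed pure-translation form of formula (4): a cat's eye image
    # pixel (u_m, v_m) maps to the TOF image pixel (u_m + t1, v_m + t2).
    return (u_m + t1, v_m + t2)

# Hypothetical translation parameters obtained from calibration.
u_t, v_t = cat_eye_pixel_to_tof(100, 50, 12, -8)
```

The parameter values are illustrative only; in practice t_1 and t_2 come from the factory calibration described above.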
• Step 1103: Based on the determined mapping relationship between the cat's eye image coordinate system and the TOF image coordinate system, the mobile phone 100 converts the coordinates of each pixel in the face image captured by the cat's eye camera 201 into the TOF image coordinate system.
• Specifically, through the conversion relationship shown in the above formula (4), the mobile phone 100 converts the coordinates of each pixel in the face image for face preview captured by the cat's eye camera 201 and received from the smart door lock 200 into the TOF image coordinate system (the two-dimensional plane Cartesian coordinate system with the center of the photosensitive chip of the TOF camera 202 as the coordinate origin).
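Step 1103 amounts to remapping every pixel of the cat's eye image into TOF image coordinates. The sketch below assumes the pure-translation reading of formula (4) and represents an image as a nested list of pixel values; both are illustrative simplifications of what real frame processing would do:

```python
def remap_to_tof(image, t1, t2, tof_size, fill=0):
    """Remap every pixel of a cat's eye preview image into the TOF
    image coordinate system, assuming (u_t, v_t) = (u_m + t1, v_m + t2).

    image: 2D list of pixel values indexed as image[v][u];
    tof_size: (width, height) of the output TOF-coordinate image.
    Pixels falling outside the TOF bounds are discarded; uncovered
    output pixels keep the fill value.
    """
    tof_w, tof_h = tof_size
    out = [[fill] * tof_w for _ in range(tof_h)]
    for v_m, row in enumerate(image):
        for u_m, pixel in enumerate(row):
            u_t, v_t = u_m + t1, v_m + t2
            if 0 <= u_t < tof_w and 0 <= v_t < tof_h:
                out[v_t][u_t] = pixel
    return out

# A 2x2 image shifted one pixel to the right into a 3x2 TOF frame.
preview = remap_to_tof([[1, 2], [3, 4]], 1, 0, (3, 2))
```

The resulting `preview` is what the phone (or, in the FIG. 12 variant, the lock itself) would display as the face preview image.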
• In this way, the mobile phone 100 can convert the face preview image captured by the cat's eye camera 201 and received from the smart door lock 200 into the TOF image coordinate system, so that the face preview image displayed by the mobile phone 100 accurately reflects the position, angle, expression, and so on of the face within the field of view of the TOF camera 202. The user can then promptly adjust the position, angle, and expression of the face through the face preview interface and prompt information of the mobile phone 100, so that the TOF camera 202 of the smart door lock 200 can quickly capture effective face images, improving the efficiency and accuracy of face entry and the user experience.
  • the mapping relationship between the cat's eye image coordinate system and the TOF image coordinate system (that is, the fourth mapping relationship) can be pre-calculated and then stored in the memory of the smart door lock 200.
• When the smart door lock 200 establishes a connection with the mobile phone 100 and uses the mobile phone 100 as an external device to display the face preview image, it can send the stored fourth mapping relationship to the mobile phone 100.
  • the mobile phone 100 obtains the above-mentioned fourth mapping relationship by downloading and installing the smart door lock application 128 .
• The following describes in detail another implementation process by which the mobile phone 100 in step 803 converts the image captured by the cat's eye camera 201 and received from the smart door lock 200 into the TOF image coordinate system through the preset image processing method.
• Fig. 11(b) shows another process by which the mobile phone 100 converts the image for face preview captured by the cat's eye camera 201 and received from the smart door lock 200 into the TOF image coordinate system.
  • the aforementioned preset image processing method specifically includes the following steps:
• Step 1101': The mobile phone 100 obtains the mapping relationship between the cat's eye image coordinate system and the TOF image coordinate system.
• For example, the developers use Zhang Zhengyou's calibration algorithm to calibrate the mapping relationship between the cat's eye space coordinate system and the TOF space coordinate system (denoted as the "first mapping relationship"), the mapping relationship between the cat's eye space coordinate system and the cat's eye image coordinate system (denoted as the "second mapping relationship"), and the mapping relationship between the TOF space coordinate system and the TOF image coordinate system (denoted as the "third mapping relationship").
• For example, the first mapping relationship between the cat's eye space coordinate system and the TOF space coordinate system is shown in the above formula (1), the second mapping relationship between the cat's eye space coordinate system and the cat's eye image coordinate system is shown in the above formula (2), and the third mapping relationship between the TOF space coordinate system and the TOF image coordinate system is shown in the above formula (3).
• Then, based on the first to third mapping relationships, the mapping relationship between the cat's eye image coordinate system and the TOF image coordinate system shown in the above formula (4) (that is, the fourth mapping relationship) is calculated, and the calculated fourth mapping relationship is stored in the memory of the smart door lock 200.
• When the smart door lock 200 establishes a connection with the mobile phone 100 and uses the mobile phone 100 as an external device to display the face preview image, it can send the stored fourth mapping relationship to the mobile phone 100.
  • the mobile phone 100 obtains the above-mentioned fourth mapping relationship by downloading and installing the smart door lock application 128 .
• Step 1102': Based on the obtained mapping relationship between the cat's eye image coordinate system and the TOF image coordinate system (that is, the fourth mapping relationship), the mobile phone 100 converts the coordinates of each pixel in the face image captured by the cat's eye camera 201 into the TOF image coordinate system.
• Specifically, through the conversion relationship shown in the above formula (4), the mobile phone 100 converts the coordinates of each pixel in the face image for face preview captured by the cat's eye camera 201 and received from the smart door lock 200 into the TOF image coordinate system (the two-dimensional plane Cartesian coordinate system with the center of the photosensitive chip of the TOF camera 202 as the coordinate origin).
• In this way, the mobile phone 100 can convert the face preview image captured by the cat's eye camera 201 and received from the smart door lock 200 into the TOF image coordinate system, so that the face preview image displayed by the mobile phone 100 accurately reflects the position, angle, expression, and so on of the face within the field of view of the TOF camera 202. The user can then promptly adjust the position, angle, and expression of the face through the face preview interface and prompt information of the mobile phone 100, so that the TOF camera 202 of the smart door lock 200 can quickly capture effective face images, improving the efficiency and accuracy of face entry and the user experience.
• The above describes the process in which the mobile phone 100 converts each pixel in the face image for face preview captured by the cat's eye camera 201 into the TOF image coordinate system. It can be understood that this processing can also be completed by the smart door lock 200 itself.
• Fig. 12 shows an interaction diagram of a technical solution in which the smart door lock 200 itself converts each pixel in the face image for face preview captured by the cat's eye camera 201 into the TOF image coordinate system.
• Step 802' and step 803' in FIG. 12 differ from step 802 and step 803 in FIG. 8, respectively, while the other steps are the same as those in FIG. 8. Therefore, to avoid repetition, only step 802' and step 803' in FIG. 12 are introduced below.
• For the steps in FIG. 12 other than step 802' and step 803', refer to the text description of the interaction diagram shown in FIG. 8 above; details are not repeated here.
• The specific contents of step 802' and step 803' in FIG. 12 are as follows:
• Step 802': The smart door lock 200 transforms the face image captured by the cat's eye camera 201 into the TOF image coordinate system according to a preset image processing method, and obtains a face image to be used as a preview image.
  • the peephole camera 201 of the smart door lock 200 sends the captured face image to the processing module 212 of the smart door lock 200 .
  • the processing module 212 of the smart door lock 200 converts the received face image collected by the cat's eye camera 201 into the TOF image coordinate system according to a preset image processing method.
• Specifically, after the processing module 212 of the smart door lock 200 receives the face image captured by the cat's eye camera 201, it determines, according to the method shown in FIG. 11(a), the mapping relationship between the cat's eye image coordinate system and the TOF image coordinate system (that is, the fourth mapping relationship) from the above first, second, and third mapping relationships, and then converts each pixel in the face image for face preview captured by the cat's eye camera 201 into the TOF image coordinate system according to the determined fourth mapping relationship shown in the above formula (4).
• Step 803': The smart door lock 200 sends the face image to be used as a preview image to the mobile phone 100.
• Specifically, the processing module 212 of the smart door lock 200 sends the face image converted into the TOF image coordinate system to the transmission module 115 of the mobile phone 100 via the transmission module 211 of the smart door lock 200, and the transmission module 115 of the mobile phone 100 then forwards the received face image to the processing module 116 of the mobile phone 100.
• After receiving the converted face image, the processing module 116 of the mobile phone 100 generates a control instruction for displaying the preview image, which is used to control the smart door lock application 128 of the mobile phone 100 to display the preview image. In this way, the face preview image displayed by the mobile phone 100 accurately reflects the position, angle, expression, and so on of the face within the field of view of the TOF camera 202, so that the user can promptly adjust the position, angle, and expression of the face through the face preview interface and prompt information of the mobile phone 100. The TOF camera 202 of the smart door lock 200 can then quickly capture effective face images, improving the efficiency and accuracy of face entry and the user experience.
  • FIG. 13 shows a schematic diagram of a hardware structure of a mobile phone 100 according to an embodiment of the present application.
• The mobile phone 100 may include a processor 110, a power module 140, a memory 180, a camera 170, a mobile communication module 130, a wireless communication module 120, a sensor module 190, an audio module 150, an interface module 160, buttons 101, a display screen 102, and the like.
• The structure shown in this embodiment of the present application does not constitute a specific limitation on the mobile phone 100.
  • the mobile phone 100 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
• The processor 110 may include one or more processing units, for example, processing modules or processing circuits such as a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), a microcontroller (MCU), an artificial intelligence (AI) processor, or a field programmable gate array (FPGA).
• In some embodiments of the present application, the processor 110 may execute the method shown in FIG. 11(a) or FIG. 11(b) to convert the face image captured by the cat's eye camera 201 into the TOF image coordinate system, and generate prompt information according to the converted image in the TOF image coordinate system, prompting the user to adjust the face to a suitable position.
• The memory 180 can be used to store data, software programs, and modules. In some embodiments of the present application, the memory 180 can be used to store the software program of the method shown in FIG. 11(a) or FIG. 11(b), which, when executed, converts the face image captured by the cat's eye camera 201 into the TOF image coordinate system and generates prompt information according to the converted image in the TOF image coordinate system, prompting the user to adjust the face to a suitable position.
• The memory 180 can also be used to store the face image captured by the cat's eye camera 201 and received from the smart door lock 200, the mapping relationship between the three-dimensional coordinate points of the cat's eye camera 201 and the TOF camera 202 received from the smart door lock 200, the mapping relationships between the three-dimensional coordinate points and the two-dimensional coordinate points of the cat's eye camera 201 and the TOF camera 202, and the like.
  • the power module 140 may include a power supply, power management components, and the like.
  • the power source can be a battery.
  • the power management component is used to manage the charging of the power supply and the power supply from the power supply to other modules.
  • the charging management module is used to receive charging input from the charger; the power management module is used to connect the power supply, the charging management module and the processor 110 .
  • the mobile communication module 130 may include, but is not limited to, an antenna, a power amplifier, a filter, a low noise amplifier (Low Noise Amplify, LNA) and the like.
  • the mobile communication module 130 can provide wireless communication solutions including 2G/3G/4G/5G applied on the mobile phone 100 .
  • the mobile communication module 130 can receive electromagnetic waves through the antenna, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module 130 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic wave and radiate it through the antenna.
  • at least part of the functional modules of the mobile communication module 130 may be set in the processor 110 .
  • at least part of the functional modules of the mobile communication module 130 and at least part of the modules of the processor 110 may be set in the same device.
  • the wireless communication module 120 may include an antenna, and transmit and receive electromagnetic waves via the antenna.
• The wireless communication module 120 can provide wireless communication solutions applied on the mobile phone 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
  • the mobile phone 100 can communicate with the smart door lock 200 through wireless communication technology.
  • the mobile communication module 130 and the wireless communication module 120 of the mobile phone 100 may also be located in the same module.
  • Camera 170 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to an ISP (Image Signal Processor) to convert it into a digital image signal.
  • the mobile phone 100 can realize the shooting function through the ISP, the camera 170, the video codec, the GPU (Graphics Processing Unit), the display screen 102 and the application processor.
  • the display screen 102 includes a display panel.
  • the display panel can adopt a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), an active-matrix organic light-emitting diode (Active-Matrix Organic Light-Emitting Diode, AMOLED), a flexible light-emitting diode (Flex Light-Emitting Diode, FLED), a Mini LED, a Micro LED, a Micro OLED, quantum dot light-emitting diodes (Quantum Dot Light-Emitting Diodes, QLED), etc.
  • the display screen 102 is used to display a preview screen of the face, text prompts and picture prompts to remind the user to adjust the face position and posture, and symbol prompts and text prompts to remind the user that the face registration is successful.
  • the sensor module 190 may include a proximity light sensor, a pressure sensor, a gyro sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
  • the audio module 150 may convert digital audio information into an analog audio signal output, or convert an analog audio input into a digital audio signal.
  • the audio module 150 may also be used to encode and decode audio signals.
  • the audio module 150 may be set in the processor 110 , or some functional modules of the audio module 150 may be set in the processor 110 .
  • the audio module 150 may include a speaker, an earpiece, a microphone, and an earphone jack.
  • the audio module 150 is used to play the voice prompt information of the mobile phone 100, prompting the user with "please raise your head", "please lower your head", "please turn your face to the left", "please turn your face to the right", "entered successfully", and other voice prompt messages.
  • the interface module 160 includes an external memory interface, a Universal Serial Bus (Universal Serial Bus, USB) interface, a Subscriber Identification Module (Subscriber Identification Module, SIM) card interface, and the like.
  • the external memory interface can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the mobile phone 100.
  • the external memory card communicates with the processor 110 through the external memory interface to realize the data storage function.
  • the USB interface is used for the mobile phone 100 to communicate with other mobile phones.
  • the SIM card interface is used for communicating with the SIM card installed in the mobile phone 100, for example, reading the phone number stored in the SIM card, or writing the phone number into the SIM card.
  • the mobile phone 100 further includes buttons, motors, and indicators.
  • the key may include a volume key, an on/off key, and the like.
  • the motor is used to make the mobile phone 100 vibrate. For example, after the mobile phone 100 and the smart door lock 200 are connected successfully, the mobile phone 100 can vibrate when the smart door lock 200 completes the face enrollment, to prompt the user that the face enrollment is successful.
  • Indicators may include laser pointers, radio frequency indicators, LED indicators, and the like.
  • Embodiments of the mechanisms disclosed in this application may be implemented in hardware, software, firmware, or a combination of these implementation methods.
  • Embodiments of the present application may be implemented as a computer program or program code executed on a programmable system comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
  • Program code can be applied to input instructions to perform the functions described herein and to generate output information.
  • the output information may be applied to one or more output devices in a known manner.
  • a processing system includes any system having a processor, such as a Digital Signal Processor (DSP), a microcontroller, an Application-Specific Integrated Circuit (ASIC), or a microprocessor.
  • the program code can be implemented in a high-level procedural language or an object-oriented programming language to communicate with the processing system.
  • Program code can also be implemented in assembly or machine language, if desired.
  • the mechanisms described in this application are not limited in scope to any particular programming language. In either case, the language may be a compiled or interpreted language.
  • the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof.
  • the disclosed embodiments can also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which can be read and executed by one or more processors.
  • instructions may be distributed over a network or via other computer-readable media.
  • a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy disks, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), magnetic or optical cards, flash memory, or tangible machine-readable storage used to transmit information over the Internet by means of electrical, optical, acoustic or other forms of propagated signals (for example, carrier waves, infrared signals, digital signals, etc.).
  • a machine-readable medium includes any type of machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (eg, a computer).
  • each unit/module mentioned in each device embodiment of this application is a logical unit/module.
  • physically, a logical unit/module can be a physical unit/module, a part of a physical unit/module, or a combination of multiple physical units/modules; the physical implementation of these logical units/modules is not the most important, and the combination of the functions realized by these logical units/modules is the key to solving the technical problems raised in this application.
  • in addition, the above-mentioned device embodiments of this application do not introduce units/modules that are not closely related to solving the technical problems raised in this application, which does not mean that other units/modules do not exist in the above-mentioned device embodiments.


Abstract

The present application relates to a face image display method, a readable storage medium, a program product, and an electronic device. The method includes: a first electronic device acquires a first face image captured by a first camera at a first moment, where the face in the first face image is located at a first position; a second electronic device displays a second face image, where the second face image is an image obtained by transforming the first face image according to an imaging mapping relationship between the first camera and a second camera, and the position of the face in the second face image is the same as the position of the face in the image captured by the second camera at the first moment. By applying the method provided in the present application, the position of the face in the preview image displayed by the second electronic device can be made consistent with the position of the face in the face image captured by the second camera for face recognition, so that the position of the face in the preview image can accurately reflect the position of the face in the face image captured by the second camera, which can improve the efficiency and accuracy of face enrollment.

Description

Face image display method, readable storage medium, program product, and electronic device
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on June 18, 2021, with application number 202110679625.5 and the title "Face image display method, readable storage medium, program product and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of terminal technologies, and in particular to a face image display method, a readable storage medium, a program product, and an electronic device.
Background
With the rapid development of Artificial Intelligence (AI) technology, more and more electronic devices support a face recognition function. Usually, in order to implement the face recognition function, an electronic device first needs to perform face enrollment. In some scenarios, for an electronic device without a display screen, typically a smart door lock, the user cannot intuitively perceive whether the face is currently within the viewfinder range of the camera of the electronic device, or where in that viewfinder range it is located. In some solutions, the electronic device can send the image captured by its camera to another electronic device with a display screen for display; however, the position of the face in the image displayed by the other electronic device may differ from its position in the image actually used for enrolling the face information, and thus cannot accurately reflect the current position of the face in the viewfinder range of the camera used for enrolling face information. At present, there is no method that allows the user to perceive the face position intuitively and accurately, which makes the user experience of the face enrollment process poor.
Summary
Embodiments of the present application provide a face image display method, a readable storage medium, a program product, and an electronic device.
According to the technical solution of the present application, based on the imaging mapping relationship between the first camera and the second camera of the first electronic device, the face image captured by the first camera as a preview image is transformed, so that the position of the face in the transformed preview image is the same as the position of the face in the face image for face recognition captured by the second camera at the same moment at which the first camera captured the face image serving as the preview image. In this way, the preview image displayed by the second electronic device externally connected to the first electronic device can accurately reflect the position, orientation, and other conditions of the face in the field of view of the second camera, so that the user can adjust the position and orientation of the face in time according to the preview image displayed by the second electronic device, and the second camera of the first electronic device can quickly capture valid face images for face enrollment, which can improve the efficiency and accuracy of face enrollment and improve the user experience.
In a first aspect, an embodiment of the present application provides a face image display method, applied to a system including a first electronic device and a second electronic device, where the first electronic device includes a first camera and a second camera, and there is an imaging mapping relationship between the first camera and the second camera. The method includes:
acquiring, by the first electronic device, a first face image captured by the first camera at a first moment, where the face in the first face image is located at a first position of the first face image;
displaying, by the second electronic device, a second face image, where the second face image is an image obtained by transforming the first face image according to the imaging mapping relationship, and the position of the face in the second face image is the same as the position of the face in the image captured by the second camera at the first moment.
The first camera may be a wide-angle camera, such as a cat's eye camera or a fisheye camera; the second camera may be any one of a TOF camera, a structured-light camera, a binocular stereo imaging camera, a depth camera, and an infrared camera, and the second camera is used to capture depth information of an image.
In addition, the second face image displayed by the second electronic device may be an image obtained by the first electronic device converting the first face image according to the imaging mapping relationship, or an image obtained by the second electronic device converting the first face image according to the imaging mapping relationship.
Based on the above imaging mapping relationship, the face image captured by the first camera as a preview image is transformed so that the position of the face in the transformed preview image is the same as the position of the face in the face image for face recognition captured by the second camera at the same moment at which the first camera captured the face image serving as the preview image. In this way, the preview image displayed by the second electronic device externally connected to the first electronic device can accurately reflect the position, orientation, and other conditions of the face in the field of view of the second camera, so that the user can adjust the position and orientation of the face in time according to the preview image displayed by the second electronic device, and the second camera of the first electronic device can quickly capture valid face images for face enrollment, which can improve the efficiency and accuracy of face enrollment and improve the user experience.
In a possible implementation of the first aspect, the imaging mapping relationship is a mapping relationship between a first camera image coordinate system and a second camera image coordinate system,
where the first camera image coordinate system is a two-dimensional coordinate system associated with the first camera, and the second camera image coordinate system is a two-dimensional coordinate system associated with the second camera.
In a possible implementation of the first aspect, the imaging mapping relationship is a preset parameter, or
the imaging mapping relationship is determined by the first electronic device or the second electronic device according to a first mapping relationship, a second mapping relationship, and a third mapping relationship,
where the first mapping relationship is a mapping relationship between a first camera space coordinate system and a second camera space coordinate system, the first camera space coordinate system is a three-dimensional coordinate system associated with the first camera, and the second camera space coordinate system is a three-dimensional coordinate system associated with the second camera,
the second mapping relationship is a mapping relationship between the first camera space coordinate system and the first camera image coordinate system,
and the third mapping relationship is a mapping relationship between the second camera space coordinate system and the second camera image coordinate system.
In some embodiments, the first camera image coordinate system is a two-dimensional planar rectangular coordinate system associated with the first camera. For example, if the first camera is a cat's eye camera, the first camera image coordinate system is the cat's eye image coordinate system, which is a planar rectangular coordinate system whose coordinate origin is the center of the photosensitive chip of the cat's eye camera.
In some embodiments, the second camera image coordinate system is a two-dimensional planar rectangular coordinate system associated with the second camera. For example, if the second camera is a TOF camera, the second camera image coordinate system is the TOF image coordinate system, which is a planar rectangular coordinate system whose coordinate origin is the center of the photosensitive chip of the TOF camera.
In some embodiments, the first camera space coordinate system is a three-dimensional coordinate system associated with the first camera. For example, if the first camera is a cat's eye camera, the first camera space coordinate system is the cat's eye space coordinate system, which is a three-dimensional space coordinate system whose coordinate origin is the center of the lens of the cat's eye camera.
In some embodiments, the second camera space coordinate system is a three-dimensional coordinate system associated with the second camera. For example, if the second camera is a TOF camera, the second camera space coordinate system is the TOF space coordinate system, which is a three-dimensional space coordinate system whose coordinate origin is the center of the lens of the TOF camera.
Specifically, for example, suppose that point A is a point in three-dimensional space that appears in the fields of view of both the cat's eye camera and the TOF camera. The three-dimensional coordinates of point A in the cat's eye space coordinate system are (x1, y1, z1); the two-dimensional coordinates of the pixel corresponding to point A in the image captured by the cat's eye camera are (x2, y2); the three-dimensional coordinates of point A in the TOF space coordinate system are (x1', y1', z1'); and the two-dimensional coordinates of the pixel corresponding to point A in the image captured by the TOF camera are (x2', y2').
The first mapping relationship then characterizes the mapping between the three-dimensional coordinates (x1, y1, z1) of point A in the cat's eye space coordinate system and the three-dimensional coordinates (x1', y1', z1') of point A in the TOF space coordinate system.
The second mapping relationship characterizes the mapping between the three-dimensional coordinates (x1, y1, z1) of point A in the cat's eye space coordinate system and the two-dimensional coordinates (x2, y2) of the pixel corresponding to point A in the image captured by the cat's eye camera.
The third mapping relationship characterizes the mapping between the three-dimensional coordinates (x1', y1', z1') of point A in the TOF space coordinate system and the two-dimensional coordinates (x2', y2') of the pixel corresponding to point A in the image captured by the TOF camera.
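To make the three mapping relationships concrete, the following minimal numerical sketch (not part of the patent; the rotation R, translation T, and TOF intrinsic values below are made-up assumptions purely for illustration) converts point A from the cat's eye space coordinate system to the TOF space coordinate system via the first mapping relationship, and then projects it to TOF pixel coordinates via the third mapping relationship:

```python
import numpy as np

# Assumed (made-up) extrinsics for illustration: rotation R and translation T
# of the first mapping relationship, written here as X_cat = R @ X_tof + T.
R = np.eye(3)                   # identity rotation, for simplicity
T = np.array([0.05, 0.0, 0.0])  # 5 cm baseline between the two cameras, in meters

# Assumed (made-up) intrinsic matrix of the TOF camera.
K_tof = np.array([[400.0,   0.0, 160.0],
                  [  0.0, 400.0, 120.0],
                  [  0.0,   0.0,   1.0]])

# Point A in the cat's eye space coordinate system: (x1, y1, z1).
A_cat = np.array([0.1, 0.2, 1.0])

# First mapping relationship (inverted): cat's eye space -> TOF space,
# giving (x1', y1', z1').
A_tof = R.T @ (A_cat - T)

# Third mapping relationship: TOF space -> TOF image pixel (x2', y2').
uvw = K_tof @ A_tof
x2p, y2p = uvw[0] / uvw[2], uvw[1] / uvw[2]
print(A_tof, x2p, y2p)
```

With these assumed values, point A lands at (0.05, 0.2, 1.0) in the TOF space coordinate system and at pixel (180, 200) in the TOF image coordinate system.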
In a possible implementation of the first aspect, the method further includes:
receiving, by the second electronic device, the first face image sent by the first electronic device, and processing the first face image according to the imaging mapping relationship to obtain the second face image,
or,
receiving, by the second electronic device, the second face image sent by the first electronic device.
In a possible implementation of the first aspect, the first electronic device or the second electronic device acquires the imaging mapping relationship through an application associated with the first electronic device.
For example, after the first electronic device establishes a connection with the second electronic device and uses the second electronic device as an external device of the first electronic device to display the face preview image, the first electronic device may send the stored imaging mapping relationship to the second electronic device, or the first electronic device may calculate the imaging mapping relationship based on the stored first mapping relationship, second mapping relationship, and third mapping relationship and then send the calculated imaging mapping relationship to the second electronic device, and the second electronic device processes the first face image based on the imaging mapping relationship to obtain the second face image.
For another example, after the first electronic device establishes a connection with the second electronic device and uses the second electronic device as an external device of the first electronic device to display the face preview image, the first electronic device may process the first face image according to the stored imaging mapping relationship to obtain the second face image and then send it to the second electronic device. Alternatively, the first electronic device calculates the imaging mapping relationship based on the stored first mapping relationship, second mapping relationship, and third mapping relationship, processes the first face image according to the calculated imaging mapping relationship to obtain the second face image, and then sends it to the second electronic device.
In addition, the first electronic device may also acquire the imaging mapping relationship by downloading and installing an application associated with the first electronic device, and then send the acquired mapping relationship to the second electronic device, and the second electronic device processes the first face image according to the imaging mapping relationship to obtain the second face image. Alternatively, the first electronic device directly processes the first face image according to the acquired imaging mapping relationship to obtain the second face image, and then sends the second face image to the second electronic device.
In addition, the second electronic device may also acquire the imaging mapping relationship by downloading and installing an application associated with the first electronic device, and then process the first face image according to the imaging mapping relationship to obtain the second face image.
In a possible implementation of the first aspect, the method further includes:
displaying, by the second electronic device, first prompt information for prompting the user to adjust the face position.
For example, the second electronic device displays text prompts such as "please move your face to the left" and "please move your face to the right" to prompt the user to adjust the face position.
In a possible implementation of the first aspect, the method further includes:
when, in the second face image displayed by the second electronic device, the position of the face is close to the center of the second face image, no longer displaying, by the second electronic device, the first prompt information.
For example, when the position of the face is at the center of the second face image or within a preset distance around the center of the second face image, the second electronic device no longer displays the first prompt information.
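The center-proximity check described above can be sketched as follows. This is only an illustrative sketch: the helper name `should_hide_position_hint` and the threshold of 10% of the image diagonal are hypothetical choices, not values specified by the present application.

```python
def should_hide_position_hint(face_center, image_size, max_offset_ratio=0.1):
    """Return True when the detected face center is within a preset distance
    of the preview image center (hypothetical helper; the 10%-of-diagonal
    threshold is an assumed value chosen for illustration)."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    dx, dy = face_center[0] - cx, face_center[1] - cy
    diagonal = (image_size[0] ** 2 + image_size[1] ** 2) ** 0.5
    return (dx * dx + dy * dy) ** 0.5 <= max_offset_ratio * diagonal

# Face almost centered in a 320x240 preview: hide the position hint.
print(should_hide_position_hint((162, 118), (320, 240)))  # True
# Face near the lower-left corner: keep showing the hint.
print(should_hide_position_hint((40, 200), (320, 240)))   # False
```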
In a possible implementation of the first aspect, the method further includes:
displaying, by the second electronic device, second prompt information for prompting the user to adjust the face orientation.
For example, when the second electronic device determines that the face is at or close to the center of the preview image, it displays text prompts such as "please raise your head", "please lower your head", "please turn your face to the right", and "please turn your face to the left".
In a possible implementation of the first aspect, the first prompt information and/or the second prompt information are generated according to the second face image.
In a possible implementation of the first aspect, the field of view of the first camera is larger than the field of view of the second camera.
In a possible implementation of the first aspect, the first camera is either a cat's eye camera or a fisheye camera, the second camera is any one of a TOF camera, a structured-light camera, a binocular stereo imaging camera, a depth camera, and an infrared camera, and the second camera is used to capture depth information of an image.
In a possible implementation of the first aspect, the first electronic device is a smart door lock, and the second electronic device is a mobile phone.
The first electronic device may be any electronic device that supports the face recognition function and does not have a display screen, including but not limited to a smart door lock, a robot, a security device, and the like.
In addition, the second electronic device may be any portable terminal device with display and image processing functions; for example, besides a mobile phone, the second electronic device may also be a wristband, a watch, a tablet computer, or another portable terminal device.
In a second aspect, an embodiment of the present application provides a face image display method, applied to a second electronic device, the method including:
displaying a second face image, where the second face image is an image obtained by transforming a first face image according to an imaging mapping relationship, the imaging mapping relationship is a mapping relationship between the images captured by a first camera and a second camera, the first camera and the second camera are included in a first electronic device different from the second electronic device, the first face image is an image captured by the first camera at a first moment, the face in the first face image is located at a first position of the first face image, and the position of the face in the second face image is the same as the position of the face in the image captured by the second camera at the first moment.
In a possible implementation of the second aspect, the imaging mapping relationship is a mapping relationship between a first camera image coordinate system and a second camera image coordinate system,
where the first camera image coordinate system is a two-dimensional coordinate system associated with the first camera, and the second camera image coordinate system is a two-dimensional coordinate system associated with the second camera.
In a possible implementation of the second aspect, the imaging mapping relationship is a preset parameter, or
the imaging mapping relationship is determined by the second electronic device according to a first mapping relationship, a second mapping relationship, and a third mapping relationship,
where the first mapping relationship is a mapping relationship between a first camera space coordinate system and a second camera space coordinate system, the first camera space coordinate system is a three-dimensional coordinate system associated with the first camera, and the second camera space coordinate system is a three-dimensional coordinate system associated with the second camera,
the second mapping relationship is a mapping relationship between the first camera space coordinate system and the first camera image coordinate system,
and the third mapping relationship is a mapping relationship between the second camera space coordinate system and the second camera image coordinate system.
In a possible implementation of the second aspect, the method further includes: receiving the first face image sent by the first electronic device, and processing the first face image according to the imaging mapping relationship to obtain the second face image,
or,
receiving the second face image sent by the first electronic device.
In a possible implementation of the second aspect, the second electronic device acquires the imaging mapping relationship through an application associated with the first electronic device.
In a possible implementation of the second aspect, the method further includes:
displaying first prompt information for prompting the user to adjust the face position.
In a possible implementation of the second aspect, the method further includes:
when, in the displayed second face image, the position of the face is close to the center of the second face image, no longer displaying, by the second electronic device, the first prompt information.
In a possible implementation of the second aspect, the method further includes:
displaying second prompt information for prompting the user to adjust the face orientation.
In a possible implementation of the second aspect, the first prompt information and/or the second prompt information are generated according to the second face image.
In a possible implementation of the second aspect, the field of view of the first camera is larger than the field of view of the second camera.
In a possible implementation of the second aspect, the first camera is either a cat's eye camera or a fisheye camera, the second camera is any one of a TOF camera, a structured-light camera, a binocular stereo imaging camera, a depth camera, and an infrared camera, and the second camera is used to capture depth information of an image.
In a possible implementation of the second aspect, the second electronic device is a mobile phone.
In a third aspect, an embodiment of the present application provides a face image display method, applied to a first electronic device, where the first electronic device includes a first camera and a second camera, and there is an imaging mapping relationship between the first camera and the second camera. The method includes:
acquiring a first face image captured by the first camera at a first moment, where the face in the first face image is located at a first position of the first face image, the first face image is used for being transformed according to the imaging mapping relationship to obtain a second face image, the position of the face in the second face image is the same as the position of the face in the image captured by the second camera at the first moment, and the second face image is used for being displayed on a second electronic device different from the first electronic device.
In a possible implementation of the third aspect, the imaging mapping relationship is a mapping relationship between a first camera image coordinate system and a second camera image coordinate system,
where the first camera image coordinate system is a two-dimensional coordinate system associated with the first camera, and the second camera image coordinate system is a two-dimensional coordinate system associated with the second camera.
In a possible implementation of the third aspect, the imaging mapping relationship is a preset parameter, or
the imaging mapping relationship is determined by the first electronic device according to a first mapping relationship, a second mapping relationship, and a third mapping relationship,
where the first mapping relationship is a mapping relationship between a first camera space coordinate system and a second camera space coordinate system, the first camera space coordinate system is a three-dimensional coordinate system associated with the first camera, and the second camera space coordinate system is a three-dimensional coordinate system associated with the second camera,
the second mapping relationship is a mapping relationship between the first camera space coordinate system and the first camera image coordinate system,
and the third mapping relationship is a mapping relationship between the second camera space coordinate system and the second camera image coordinate system.
In a possible implementation of the third aspect, the first electronic device acquires the imaging mapping relationship through an application associated with the first electronic device.
In a possible implementation of the third aspect, the field of view of the first camera is larger than the field of view of the second camera.
In a possible implementation of the third aspect, the first camera is either a cat's eye camera or a fisheye camera, the second camera is any one of a TOF camera, a structured-light camera, a binocular stereo imaging camera, a depth camera, and an infrared camera, and the second camera is used to capture depth information of an image.
In a possible implementation of the third aspect, the first electronic device is a smart door lock.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing instructions that, when executed on an electronic device, cause the electronic device to perform any one of the methods of the second aspect and its various possible implementations, or any one of the methods of the third aspect and its various possible implementations.
In a fifth aspect, an embodiment of the present application provides a computer program product, the computer program product including instructions for implementing any one of the methods of the second aspect and its various possible implementations, or any one of the methods of the third aspect and its various possible implementations.
In a sixth aspect, an embodiment of the present application provides a chip apparatus, the chip apparatus including:
a communication interface for inputting and/or outputting information; and
a processor for executing a computer-executable program, so that a device equipped with the chip apparatus performs any one of the methods of the second aspect and its various possible implementations, or any one of the methods of the third aspect and its various possible implementations.
In a seventh aspect, an embodiment of the present application provides an electronic device, including:
a memory for storing instructions to be executed by one or more processors of the electronic device, and
a processor which, when the instructions are executed by the one or more processors, is configured to perform any one of the methods of the second aspect and its various possible implementations, or any one of the methods of the third aspect and its various possible implementations.
Brief Description of the Drawings
Fig. 1 shows a usage scenario of a face recognition smart door lock according to some embodiments of the present application;
Fig. 2(a) to Fig. 2(f) show, in some embodiments, situations in which the face position in the preview interface displayed by the electronic device is inconsistent with the face position in the face image captured in the smart door lock for face recognition;
Fig. 3 shows, in some embodiments, a flowchart in which the electronic device prompts the user to adjust the face position through voice, text, and a simulated picture of the face;
Fig. 4(a) to Fig. 4(i) show, according to some embodiments of the present application, some user interface diagrams in which the electronic device prompts the user to adjust the face position in the technical solution shown in Fig. 3;
Fig. 5 shows, in some embodiments, a flowchart in which the electronic device prompts the user to adjust the face position by generating a schematic diagram that simulates the face image;
Fig. 6 shows a structural block diagram of a smart door lock according to some embodiments of the present application;
Fig. 7 shows a structural block diagram of a system composed of an electronic device and a smart door lock according to some embodiments of the present application;
Fig. 8 shows an interaction flowchart of a mobile phone and a smart door lock according to some embodiments of the present application;
Fig. 9(a) to Fig. 9(i) show, according to some embodiments of the present application, some user interface diagrams of the mobile phone involved in the flowchart shown in Fig. 8;
Fig. 10(a) to Fig. 10(c) show, according to some embodiments of the present application, situations in which, after the mobile phone and the smart door lock execute the technical solution shown in Fig. 8, the face position in the preview interface displayed by the mobile phone is consistent with the face position in the face image captured in the smart door lock for face recognition;
Fig. 11(a) shows, according to some embodiments of the present application, a processing flow in which a mobile phone maps the face image captured by the cat's eye camera and received from the smart door lock to the TOF camera coordinate system;
Fig. 11(b) shows, according to some embodiments of the present application, another processing flow in which a mobile phone maps the face image captured by the cat's eye camera and received from the smart door lock to the TOF camera coordinate system;
Fig. 12 shows another interaction flowchart of a mobile phone and a smart door lock according to some embodiments of the present application;
Fig. 13 shows a structural block diagram of a mobile phone according to some embodiments of the present application.
Detailed Description of the Embodiments
Illustrative embodiments of the present application include, but are not limited to, a face image display method, a readable storage medium, a program product, and an electronic device.
To better understand the technical solutions of the embodiments of the present application, related terms and concepts that may be involved in the embodiments of the present application are first introduced below.
TOF camera: a camera using Time-of-Flight (TOF) technology. The field of view of a TOF camera is small, and depth information in the image can be acquired when capturing an image. Even though the photosensitive chip of the TOF camera ultimately forms a two-dimensional image, since the TOF camera can acquire the depth information of each image element when capturing an image, the image captured by a TOF camera is generally called a three-dimensional (3D) image.
The image captured by a TOF camera is a black-and-white image, and according to relevant regulations, the original black-and-white image captured by a TOF camera, as a biometric password of the human body, is not allowed to be sent over the network to devices other than the device or apparatus in which the TOF camera is located; that is, it can only be processed and stored locally.
Cat's eye camera: the image captured by a cat's eye camera is a two-dimensional color image; the field of view of a cat's eye camera is large, and the captured image has large distortion.
The embodiments of the present application will be further described in detail below with reference to the accompanying drawings.
With the development of smart terminal technology, smart terminal devices are used more and more widely. Typically, for a smart door lock, since the user does not need a traditional key and can unlock directly through a mobile phone APP or by face recognition, fingerprint recognition, and the like, the operation is simple and intelligent. Moreover, since a smart door lock has better anti-theft performance, smart door locks are increasingly used in homes and in production. For a smart door lock that supports the face recognition function, limited by the cost of the smart door lock and the pursuit of aesthetics in its appearance, a display screen is usually not provided on the smart door lock; instead, voice, light, and other means are used to prompt the user to adjust the face position. Although voice and light can assist the user in adjusting the face position, the user cannot intuitively see the position of his or her own face, resulting in a poor user experience.
In order to improve the user experience in using a smart door lock, an electronic device externally connected to the smart door lock is usually used to display, in real time, a schematic diagram or a simulated image of the face rather than a real face image, which also affects the user experience.
The technical solution of the present application will be described in detail below by taking the application scenario of a smart door lock as an example. It can be understood that the technical solution of the present application can be applied to any electronic device that supports the face recognition function and does not have a display screen, including but not limited to smart door locks, robots, security devices, and the like.
Fig. 1 shows a usage scenario of a face recognition smart door lock according to some embodiments of the present application, which includes an electronic device 100 and a smart door lock 200. The smart door lock 200 includes a first camera 201 and a second camera 202.
The first camera 201 is used to capture 2D images that do not contain depth information. Such a 2D image does not involve the user's biometric password, and can be sent by the smart door lock 200 to other externally connected electronic devices and displayed as a preview image after being processed by those devices. For example, the user can remotely monitor the situation outside the home through another electronic device. In addition, when the user is indoors, the situation outside the door can also be checked through the first camera 201; for example, the user checks who is knocking at the door, or whether a stranger is lingering outside. For monitoring purposes, the image captured by the first camera 201 needs to cover as large a range as possible; therefore, the first camera 201 is usually a camera with a large field of view, such as a cat's eye camera or a fisheye camera.
The second camera 202 is used to capture 3D images containing depth information. Such a 3D image involves the user's biometric password, so the image captured by the second camera 202 is usually not allowed to be sent out and can only be processed and stored locally on the smart door lock 200, so as to prevent leakage of the user's privacy. In addition, when a 3D image containing depth information is used for face recognition, more accurate face features can be extracted from the 3D image, which helps improve the accuracy of face recognition. Therefore, for the accuracy of face recognition, the smart door lock 200 usually needs to be equipped with the second camera 202. The second camera 202 is a camera with a small field of view, such as a TOF camera, a structured-light camera, a binocular stereo imaging camera, a depth camera, or an infrared camera.
In addition, a smart door lock application is installed on the electronic device 100. When face enrollment is required, the user aims his or her face at the first camera 201 and the second camera 202 while holding the electronic device 100, and operates the smart door lock application installed on the electronic device 100 to pair and connect the electronic device 100 with the smart door lock 200. For example, the electronic device 100 pairs and connects with the smart door lock 200 through communication methods such as Wireless Fidelity (Wi-Fi), Bluetooth (BT), and Near Field Communication (NFC). After the electronic device 100 and the smart door lock 200 are successfully connected, the electronic device 100 initiates a face image enrollment process to the smart door lock 200 in response to the user's operation.
During the enrollment of the face image, the smart door lock 200 and the electronic device 100 execute the face enrollment method provided in the present application: the first camera 201 and the second camera 202 of the smart door lock 200 each photograph the user's face; then the smart door lock 200 sends the face image captured by the first camera 201 to the electronic device 100 for face preview, and the smart door lock 200 performs feature extraction, image quality evaluation, and recognition of the face in the face image captured by the second camera 202.
Since the first camera 201 for capturing the preview image is usually also used to capture surveillance images around the smart door lock 200, in order to monitor the environment around the smart door lock 200 over a large range, the field of view FOV1 of the first camera 201 is usually large, so that there is large distortion in the image captured by the first camera 201.
In order to effectively implement the face recognition function of the smart door lock 200, the image captured by the second camera 202 for face recognition needs to have small deformation and a high proportion of the face in the image, and in order to acquire as much information about the user's face as possible, the second camera 202 needs to be able to acquire depth information of the user's face. Therefore, the second camera 202 needs to have a small field of view FOV2 and the function of acquiring depth information of the user's face. In addition, since the second camera 202 can acquire the depth information of the user's face as a biometric password of the human body, the image captured by the second camera 202 is usually not allowed to be sent over the network to devices other than the smart door lock 200; that is, it can only be processed and stored locally on the smart door lock 200.
It can be understood that, due to the different installation positions and different fields of view of the first camera 201 and the second camera 202, the user's face is located at different positions in the fields of view of the first camera 201 and the second camera 202 during face enrollment. As mentioned above, the smart door lock 200 can send the image captured by the first camera 201 to the electronic device 100 over the network, but cannot send the image captured by the second camera 202 to the electronic device 100 over the network. Therefore, the image that the electronic device 100 can acquire for display as the face enrollment preview image is the image captured by the first camera 201, so that the position of the face in the preview image displayed by the electronic device 100 differs from the position of the face in the image captured by the second camera 202 for face recognition. For example, the user's face may appear within the field of view of the first camera 201 but not within the field of view of the second camera 202. Specifically, in the embodiment shown in Fig. 2(a), the image 103a captured by the first camera 201 includes a face 106a located at the lower left of the image 103a; similarly, the preview picture 104 displayed in the face preview interface of the electronic device 100 shown in Fig. 2(b) also includes a face 107a, and since the preview picture 104 comes from the image captured by the first camera 201, the face 107a is also located at the lower left of the preview picture 104. However, due to the different installation positions and fields of view of the first camera 201 and the second camera 202, when the image captured by the first camera 201 includes a face, the image captured by the second camera 202 may not. For example, the image 105a captured by the second camera 202 shown in Fig. 2(c) does not include a face, that is, the second camera 202 has not photographed the user's facial features, so that when the smart door lock 200 performs feature extraction on the image captured by the second camera 202 that does not contain the user's facial features, the user's facial features cannot be extracted, resulting in unsuccessful face enrollment.
For another example, even if the user's face appears in the fields of view of both the first camera 201 and the second camera 202, the user's face may be at the center of the field of view of the first camera 201 but at the edge of the field of view of the second camera 202, so that the second camera 202 cannot completely photograph the user's facial features. For example, in the embodiment shown in Fig. 2(d), the face 106b in the image 103b captured by the first camera 201 is located at the center of the image 103b. Similarly, the face 107b in the preview picture 104 displayed in the face preview interface of the electronic device 100 shown in Fig. 2(e) is also at the center of the preview picture 104. However, in the image 105b captured by the second camera 202 shown in Fig. 2(f), the face 108 is at the edge of the image 105b and does not include the user's mouth, that is, the second camera 202 has not completely photographed the user's facial features, so that when the smart door lock 200 performs feature extraction on the image with incomplete facial features captured by the second camera 202, the user's complete facial features cannot be extracted, resulting in unsuccessful face enrollment.
In some smart door lock technologies, in order to remind the user in real time to adjust the face into the shooting range of the second camera 202, the smart door lock 200 enables the externally connected electronic device 100 to generate voice, text, and pictures to remind the user to adjust the face position, through steps 300 to 306 shown in Fig. 3. Specifically, as shown in Fig. 3, the steps include:
Step 300: the electronic device 100 sends a face enrollment instruction to the smart door lock 200. For example, as shown in Fig. 4(a), after the electronic device 100 and the smart door lock 200 are successfully connected, and the electronic device 100 detects that the user taps the face recognition icon 121 on the desktop of the electronic device 100, the electronic device 100 sends a face enrollment instruction to the smart door lock 200.
Step 301: the smart door lock 200 captures face images in response to the face enrollment instruction. For example, the first camera 201 and the second camera 202 start shooting at the same time.
Step 302: the smart door lock 200 sends the captured face image to the electronic device 100. For example, the smart door lock 200 sends the image captured by the first camera 201 to the electronic device 100.
Step 303: the electronic device 100 generates prompt information according to the received face image, prompting the user to adjust the face to the optimal position.
In some embodiments, the electronic device 100 generates prompt information such as voice, text, and pictures according to the received face image and displays it on the user interface of the electronic device 100 to prompt the user to adjust the face position. The picture is not a real picture of the face; it is only a picture that indicates the current face position, such as a dynamic stick-figure drawing of the face, a cartoon image, or a schematic diagram.
For example, the interface of the electronic device 100 shown in Fig. 4(b) displays the text prompt "Please stand half a meter away from the lock, face the lock camera, and perform face recording according to the voice prompts". After the user taps the "Next" control 122 shown in Fig. 4(b), the user interface shown in Fig. 4(c) or Fig. 4(d) appears. The interfaces shown in Fig. 4(c) and Fig. 4(d) each include a preview frame 123 with the above-mentioned picture and a corresponding text prompt, such as the text prompt 124a "please move your face to the left" shown in Fig. 4(c) and the text prompt 124b "please move your face to the right" shown in Fig. 4(d). After moving the face according to the prompt information in Fig. 4(c) and Fig. 4(d), the user adjusts the face to the optimal position; for example, the user adjusts the face to the center of the field of view of the second camera 202 of the smart door lock 200.
Step 304: the smart door lock 200 captures the face image at the optimal position and generates a notification message that the face enrollment is successful. The optimal position may be the position at the center of the field of view of the second camera 202. The face image at the optimal position captured by the smart door lock 200 may be the face image captured by the first camera 201 of the smart door lock 200 when the user's face is at the center of the field of view of the second camera 202.
In some embodiments, after the user's face is adjusted to the optimal position, in order to make the enrolled face information more complete and to improve the security of face recognition, the user interface of the electronic device 100 may also generate prompt information for capturing images of the user's face from different angles, such as the text prompt 124c "please raise your head" shown in Fig. 4(e), the text prompt 124d "please lower your head" shown in Fig. 4(f), the text prompt 124e "please turn your face to the right" shown in Fig. 4(g), and the text prompt 124f "please turn your face to the left" shown in Fig. 4(h).
In addition, the preview frame 123 may also indicate the progress of the face enrollment. Exemplarily, after the user completes the head-raising action according to the text prompt 124c, the upper one of the four arc figures around the preview frame 123 may change color or flash, to prompt the user that the face of the head-raising action has been enrolled; after the user completes the head-lowering action according to the text prompt 124d, the lower one of the four arc figures around the preview frame 123 may change color or flash, to prompt the user that the face of the head-lowering action has been enrolled, and so on. In addition, in some embodiments, after the user adjusts the face to the aforementioned optimal position and completes the actions of raising the head, lowering the head, turning the face to the left, and turning the face to the right according to the prompt information generated by the electronic device 100, the second camera 202 of the smart door lock 200 captures the corresponding face images when the user is at the optimal position and is raising the head, lowering the head, turning the face to the left, and turning the face to the right. The smart door lock 200 performs feature extraction on these face images captured by the second camera 202 and evaluates their quality, and when the smart door lock 200 determines that the feature information of the face at each angle in these face images can meet preset requirements, it generates a notification message that the face enrollment is successful.
Step 305: the smart door lock 200 sends the notification message that the face enrollment is successful to the electronic device 100.
Step 306: the electronic device 100 generates prompt information that the face enrollment is successful according to the received notification message. For example, the text "Operation successful" and a symbol prompt 125 are displayed in the interface of the electronic device 100 shown in Fig. 4(i); after the user taps the "Done" control 124e shown in Fig. 4(i), the face enrollment is completed.
However, in the technical solution shown in Fig. 3, the electronic device 100 externally connected to the smart door lock 200 generates the voice, text, and picture prompt information based on the face image captured by the first camera 201 of the smart door lock 200, and because of the differences in installation position and field of view between the first camera 201 and the second camera 202 of the smart door lock 200, the prompt information generated by the externally connected electronic device 100 is inaccurate, which affects the efficiency of face enrollment. Moreover, the picture displayed by the externally connected electronic device 100 is not a real picture of the face but only a picture that indicates the current face position, such as a dynamic stick-figure drawing of the face, a cartoon image, or a schematic diagram, and thus cannot provide the user with the experience of seeing the real face during the face enrollment process.
In addition, in some smart door lock technologies, in order to remind the user in real time to adjust the face into the shooting range of the second camera 202, the smart door lock 200 enables the externally connected electronic device 100 to remind the user to adjust the face position through a simulated schematic diagram of the face, through steps 500 to 508 shown in Fig. 5. Specifically, as shown in Fig. 5, the steps include:
Step 500: the electronic device 100 sends a face enrollment instruction to the smart door lock 200 to control the smart door lock 200 to start the face enrollment process.
Step 501: the smart door lock 200 captures face images in response to the face enrollment instruction; for example, the first camera 201 and the second camera 202 start shooting at the same time.
Step 502: the smart door lock 200 performs quality evaluation and face feature extraction on the captured face image; for example, the smart door lock 200 uses a feature extraction algorithm to extract key feature point information of the face, such as feature information of the person's eyes, mouth, nose, and eyebrows, from the image captured by the first camera 201.
Step 503: the smart door lock 200 sends the extracted face feature information to the electronic device 100.
Step 504: the electronic device 100 draws a schematic diagram of the face image; for example, the electronic device 100 performs three-dimensional image modeling according to the face feature information in the image captured by the first camera 201 and received from the smart door lock 200, and simulates a schematic diagram of the face.
Step 505: the electronic device 100 prompts the user to adjust the face to the optimal position.
Step 506: the smart door lock 200 captures the face image at the optimal position.
Step 507: the smart door lock 200 sends a message to the electronic device 100 that the face enrollment is successful.
Step 508: the electronic device 100 prompts that the enrollment is successful.
However, in the technical solution shown in Fig. 5, the electronic device 100 draws the schematic diagram of the face image based on the image captured by the first camera 201 of the smart door lock 200 and then prompts the user to adjust the face position. Due to the differences in installation position and field of view between the first camera 201 and the second camera 202 of the smart door lock 200, the face preview image displayed by the externally connected electronic device 100 still cannot accurately reflect the position of the face in the field of view of the second camera 202, which affects the accuracy of face recognition. Moreover, in the technical solution shown in Fig. 5, the simulated schematic diagram of the face displayed by the externally connected electronic device 100 is not a real picture of the face, and still cannot provide the user with the experience of seeing the real face during the face enrollment process.
In order to solve the above technical problems and better provide the user with the experience of seeing the real face during the face enrollment process, the present application provides a technical solution in which a preview image of the real face can be directly displayed on the electronic device 100 externally connected to the smart door lock 200, which is more intuitive than a stick-figure drawing or a simulated schematic diagram of the face, and in which the position, angle, expression, and other conditions of the face reflected in the face preview image displayed by the electronic device 100 are consistent with those of the face in the image captured by the second camera 202 of the smart door lock 200 for face recognition.
Specifically, for example, the electronic device 100 shown in Fig. 1 can convert the image captured by the first camera 201 and received from the smart door lock 200 into the coordinate system of the image captured by the second camera 202 through the preset image processing method provided by the embodiments of the present application, so that the face preview image displayed by the electronic device 100 can accurately reflect the position, angle, expression, and other conditions of the face in the field of view of the second camera 202. This enables the user to adjust the face position, angle, expression, etc. in time through the face preview interface and prompt information of the electronic device 100, and enables the second camera 202 of the smart door lock 200 to quickly capture valid face images, for example face images from multiple angles containing complete facial features, thereby improving the efficiency and accuracy of face enrollment and improving the user experience.
It can be understood that, during the face image enrollment of the smart door lock 200 shown in Fig. 1, in order for the first camera 201 and the second camera 202 of the smart door lock 200 to effectively capture face images, the electronic device 100 held by the user must not block the first camera 201, the second camera 202, or the user's face.
In addition, it can be understood that the electronic device 100 applicable to the embodiments of the present application may be any portable terminal device with display and image processing functions, such as a mobile phone, a wristband, a watch, or a tablet computer.
To facilitate the description of the technical solution of the present application, the technical solution will be described in detail below by taking the case in which the first camera 201 is a cat's eye camera and the second camera 202 is a TOF camera as an example.
The hardware structure and system structure of the smart door lock 200 shown in Fig. 1 will be described in detail below with reference to the accompanying drawings.
First, the hardware structure of the smart door lock 200 is introduced. Fig. 6 shows a hardware structural block diagram of a smart door lock 200 according to some embodiments of the present application. The smart door lock 200 includes a processor 204, a memory 205, a cat's eye camera 201, a TOF camera 202, a power supply 206, a communication module 207, a sensor module 209, and an audio module 210.
The processor 204 may include one or more processing units, for example, processing modules or processing circuits such as a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), a Micro-programmed Control Unit (MCU), an Artificial Intelligence (AI) processor, or a Field Programmable Gate Array (FPGA). Different processing units may be independent devices or may be integrated in one or more processors. In some embodiments, the processor 204 is used to evaluate the quality of the face image captured by the TOF camera 202, for example to evaluate the sharpness of the face image and whether the facial features in the face image are complete. In some embodiments, the processor 204 is used to extract face feature information from the face image captured by the TOF camera 202 and to perform face recognition based on the extracted face feature information, so as to unlock for the user and let the user enter the room when the user's face is recognized. In some embodiments, the processor 204 is further used to generate a message that the face enrollment is successful when it is determined that the face enrollment is successful, and to send it to the externally connected electronic device 100 through the communication module 207.
The memory 205 is used to store software programs and data, and may be a volatile memory, such as a Random-Access Memory (RAM) or a Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), or a non-volatile memory, such as a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a flash memory, a Hard Disk Drive (HDD), or a Solid-State Drive (SSD). The processor 204 executes various functional applications and data processing of the smart door lock 200 by running the software programs and data stored in the memory 205. For example, in some embodiments of the present application, the memory 205 may store the photos taken by the cameras of the smart door lock 200, the extracted face feature information, the unlocking history, and the like.
The cat's eye camera 201 is used to capture face images for preview.
The TOF camera 202 is used to capture face images for face recognition.
The sensor module 209 is used to acquire the state of the user or of the smart door lock 200, and may include a pressure sensor, an infrared sensor, and the like.
The communication module 207 can be used for the smart door lock 200 to communicate with other electronic devices. For example, in some embodiments of the present application, the smart door lock 200 can establish a communication connection with the electronic device 100 through communication methods such as Wi-Fi, Bluetooth, and NFC. The smart door lock 200 can receive instructions sent by the electronic device 100 through the communication module 207, can send the face image captured by the cat's eye camera 201 to the electronic device 100 through the communication module 207, and, when it is determined that the face enrollment is successful, sends a message that the face enrollment is successful to the electronic device 100 through the communication module 207.
The audio module 210 is used to convert digital audio information into an analog audio signal output, or to convert an analog audio input into a digital audio signal. The audio module 210 can also be used to encode and decode audio signals. In some embodiments, the audio module 210 can be used to play voice prompts to the user.
The power supply 206 is used to supply power to the components of the smart door lock 200. In some embodiments, the power supply 206 includes a battery.
It can be understood that the structure of the smart door lock 200 shown in Fig. 6 is only an example. In other embodiments, the smart door lock 200 may include more or fewer modules, and some modules may be combined or split, which is not limited in the embodiments of the present application.
The layered system architecture of the smart door lock 200 and the electronic device 100 shown in Fig. 1 will be introduced below. Fig. 7 shows a layered system architecture diagram of a smart door lock 200 and an electronic device 100 capable of implementing the technical solution of the present application. As shown in Fig. 7, the smart door lock 200 includes a processing module 212, a transmission module 211, a cat's eye camera 201, a TOF camera 202, and the like.
The processing module 212 is used to generate control instructions, control the cat's eye camera 201 and the TOF camera 202 to capture face images, and perform quality evaluation, face feature extraction, face recognition, and the like on the face images captured by the TOF camera 202, for example to evaluate the sharpness of the face image and whether the facial features in the face image are complete. For an image that meets the quality requirements, for example when the user's face is at the center of the field of view of the TOF camera 202, the processing module 212 can extract the feature information of the face from the face image captured by the TOF camera 202 and then compare the extracted feature information with the face feature information pre-stored in the smart door lock 200, so as to control the smart door lock 200 to unlock and let the user enter the room when the comparison succeeds.
The cat's eye camera 201 is used to capture the face image for preview, which is a two-dimensional image in which the face has large distortion.
The TOF camera 202 is used to capture the face image for face recognition, which is a black-and-white three-dimensional image including the depth information of the face.
The transmission module 211 is used to send the face image for preview captured by the cat's eye camera 201 to the electronic device 100 in response to an image sending instruction from the processing module 212. The transmission module 211 can also be used to send a message that the face enrollment is successful to the electronic device 100 in response to a message sending instruction from the processing module 212 when the processing module 212 determines that the face enrollment is successful. The transmission module 211 can also be used to receive from the electronic device 100 an instruction to start the face enrollment process, and the like.
Continuing to refer to Fig. 7, the electronic device 100 includes a processing module 116, a transmission module 115, a display screen 102, a microphone 132, as well as a smart door lock application 128, a camera application 129, a video application 131, a MeeTime call application 114, and the like.
The processing module 116 is used to process the face image for preview received from the smart door lock 200 through the preset image processing method, converting the face image for preview into the coordinate system of the image captured by the TOF camera 202 of the smart door lock 200, so that the processed face image for preview can accurately reflect information such as the posture and position of the face in the field of view of the TOF camera 202 of the smart door lock 200. The processing module 116 is also used to generate an instruction to initiate the face enrollment process when an operation of the user on the smart door lock application 128 of the electronic device 100 is detected, and to send it to the smart door lock 200 through the transmission module 115.
The transmission module 115 is used to receive from the smart door lock 200 the face image for preview and the message that the face enrollment is successful, and to send the face enrollment instruction to the smart door lock 200, and the like.
The display screen 102 is used to display the face preview interface and prompt information of the smart door lock application 128, as well as the user interfaces of other applications.
The microphone 132 is used to play voice prompt information, to assist the user in adjusting the position, angle, expression, and the like of the face.
It can be understood that the system structure shown in Fig. 7 is a schematic system structure and does not constitute a specific limitation on the smart door lock 200 and the electronic device 100 that can implement the technical solution of the present application. In other embodiments of the present application, the smart door lock 200 and the electronic device 100 may include more or fewer components than shown in Fig. 7, or combine some components, or split some components, or have different component arrangements. The components shown in Fig. 7 may be implemented in hardware, software, or a combination of software and hardware.
Embodiment 1
The technical solution provided by the present application will be described in detail below with reference to the system structure diagram shown in Fig. 7, taking the case in which the electronic device 100 is a mobile phone as an example. Specifically, as shown in Fig. 8, the interaction between the smart door lock 200 and the mobile phone 100 includes the following steps:
Step 800: the mobile phone 100 sends a face enrollment instruction to the smart door lock 200.
For example, in some embodiments, the smart door lock application 128 of the mobile phone 100 generates an interrupt message in response to a user operation. It can be understood that the interrupt message here refers to a message generated by the smart door lock application 128 of the mobile phone 100 to request the processing module 116 of the mobile phone 100 to generate a control instruction that causes the smart door lock 200 to start the face enrollment function.
For example, in the embodiment shown in Fig. 9(a), the user taps the icon of the smart door lock application 128, and the mobile phone 100 opens the smart door lock application 128 in response to the user's tap and displays the login interface of the smart door lock application 128 shown in Fig. 9(b). The login interface of the smart door lock application 128 shown in Fig. 9(b) includes an "Account" input box 111, a "Password" input box 112, and a "Login" button 113. After the user enters the account in the "Account" input box 111 and the password in the "Password" input box 112 and taps the "Login" button 113, the mobile phone enters the user interface of the smart door lock application 128 shown in Fig. 9(c). In the user interface shown in Fig. 9(c), the user taps the face preview icon 1281 to trigger the smart door lock application 128 to generate the interrupt message, so that the mobile phone 100 generates, based on the interrupt message, the instruction for the smart door lock 200 to start face enrollment.
Then, the smart door lock application 128 of the mobile phone 100 sends the generated interrupt message to the processing module 116 of the mobile phone 100, and the processing module 116 of the mobile phone 100 generates a face enrollment instruction based on the received interrupt message, where the instruction is used to cause the smart door lock 200 to start the face enrollment function.
In some embodiments, the processing module 116 of the mobile phone 100 sends the generated face enrollment instruction to the smart door lock 200 via the transmission module 115, where the transmission module 115 may be a Wi-Fi module, a Bluetooth module, or the like. For example, in some embodiments, the processing module 116 of the mobile phone 100 sends the generated face enrollment instruction to the transmission module 211 of the smart door lock 200 via the transmission module 115 of the mobile phone 100.
Step 801: after receiving the face enrollment instruction, the smart door lock 200 controls the cat's eye camera 201 to capture face images.
For example, in some embodiments, the transmission module 211 of the smart door lock 200 sends the received face enrollment instruction to the processing module 212 of the smart door lock 200. After receiving the face enrollment instruction, the processing module 212 of the smart door lock 200 generates, in response, a control instruction for controlling the cat's eye camera 201 to capture images.
In some embodiments, the processing module 212 of the smart door lock 200, in response to the received face enrollment instruction, generates a control instruction for controlling the cat's eye camera 201 and the TOF camera 202 to capture images at the same time. In some embodiments, the processing module 212 of the smart door lock 200 may also generate a control instruction for controlling the cat's eye camera 201 to first capture images for preview, and then controlling the TOF camera 202 to capture images for face recognition after the user has adjusted the face to the optimal position and optimal posture.
In some embodiments, the processing module 212 of the smart door lock 200 sends the generated control instruction to the cat's eye camera 201 of the smart door lock 200. The cat's eye camera 201 captures face images in response to the received control instruction; for example, the cat's eye camera 201 captures face images at a rate of 30 frames per second.
Step 802: the smart door lock 200 sends the face image captured by the cat's eye camera 201 to the mobile phone 100.
For example, the cat's eye camera 201 of the smart door lock 200 sends the captured face image to the transmission module 115 of the mobile phone 100 through the transmission module 211 of the smart door lock 200, and the transmission module 115 of the mobile phone 100 then sends the received face image to the processing module 116 of the mobile phone 100.
Step 803: the mobile phone 100 converts the received face image captured by the cat's eye camera 201 into the TOF image coordinate system according to the preset image processing method, obtaining the face image serving as the preview image.
For example, in the embodiment shown in Fig. 10(a), in the face image 117a captured by the cat's eye camera 201, the face 118a is at the edge of the lower left corner of the face image 117a. The mobile phone converts the face image 117a captured by the cat's eye camera 201 and received from the smart door lock 200 into the TOF image coordinate system according to the preset image processing method, and displays it in the face preview interface of the mobile phone 100 shown in Fig. 10(b). As shown in Fig. 10(b), the face image 118c in the face preview picture 117c is at the center of the face preview picture 117c, the same position as that of the face 118b in the image 117b captured by the TOF camera shown in Fig. 10(c). In this way, the face image for preview captured by the cat's eye camera 201 can accurately reflect the position of the face in the image captured by the TOF camera for face recognition, so that the user can adjust the face position, angle, expression, etc. in time through the face preview interface of the mobile phone 100, and the TOF camera 202 of the smart door lock 200 can quickly capture valid face images (i.e., face images from multiple angles containing the complete facial features), improving the efficiency and accuracy of face enrollment and improving the user experience. The preset image processing method will be described in detail below.
In some implementations, the TOF image coordinate system may be a planar rectangular coordinate system whose coordinate origin is the center of the photosensitive chip of the TOF camera.
Step 804: the mobile phone 100 displays the face preview image and prompts the user to adjust the face position.
For example, the processing module 116 of the mobile phone 100 converts the received face image captured by the cat's eye camera 201 into the TOF image coordinate system according to the preset image processing method, and after obtaining the face image serving as the preview image, generates an instruction to display the preview image and sends it to the smart door lock application 128 of the mobile phone 100.
The smart door lock application 128 of the mobile phone 100, in response to the received instruction, displays the face preview image and prompts the user to adjust the face position. For example, the mobile phone 100 displays the preview interface shown in Fig. 9(d), which includes the face preview picture 104, the face image 109 in the face preview picture 104, and the prompt information 119 below the face preview picture 104. The face image 109 in the face preview picture 104 is the face image captured by the cat's eye camera 201 and converted into the TOF image coordinate system.
The prompt information 119 may be a text prompt for prompting the user to adjust the face to the optimal position; for example, the prompt information 119 may be the text prompts "please move your face to the left" and "please move your face to the right" shown in Fig. 4(c) and Fig. 4(d). After moving the face according to these prompts, the user adjusts the face to the optimal position; for example, the user adjusts the face to the center of the face preview image displayed by the mobile phone 100, or to a position close to the center of the preview image. It can be understood that, since the face preview image displayed by the mobile phone 100 is the face image captured by the cat's eye camera 201 and converted into the TOF image coordinate system, when the face is at the center of the preview image displayed by the mobile phone 100, the face is also at the center of the field of view of the TOF camera 202 of the smart door lock 200. In some embodiments, when the mobile phone 100 determines that the position of the face is close to the center of the displayed face preview image, the mobile phone 100 no longer displays the prompt information prompting the user to move the face position.
Step 805: the mobile phone 100 sends an instruction to capture 3D face images to the smart door lock 200.
For example, in some embodiments, when the processing module 116 of the mobile phone 100 determines that the user's face has been adjusted to the optimal position, it generates an instruction to capture 3D face images.
For example, in some embodiments, the processing module 116 of the mobile phone 100 performs feature extraction on the converted image, captured by the cat's eye camera 201 and expressed in the TOF image coordinate system, and if it determines that the face is at the center of that image, it determines that the user's face has been adjusted to the optimal position. When the face has been adjusted to the optimal position, the face is at the optimal position in the field of view of the TOF camera 202, and the 3D face image captured by the TOF camera 202 at this optimal position can provide valid face feature information, so that face recognition can be performed effectively.
In some embodiments, the processing module 116 of the mobile phone 100 sends the generated instruction to capture 3D face images to the smart door lock 200. For example, in some embodiments, the mobile phone 100 sends the instruction generated by the processing module 116 to the transmission module 211 of the smart door lock 200 through the transmission module 115 of the mobile phone, and the transmission module 211 of the smart door lock 200 then sends the received instruction to the processing module 212 of the smart door lock 200. The processing module 212 of the smart door lock 200, in response to the received instruction to capture 3D face images, generates a control instruction for controlling the TOF camera 202 to capture 3D face images, and sends it to the TOF camera 202 of the smart door lock 200.
Step 806: after receiving the instruction to capture 3D face images, the smart door lock 200 controls the TOF camera 202 to capture 3D face images.
For example, in some embodiments, the TOF camera 202 of the smart door lock 200 captures 3D face images in response to the received control instruction.
In some embodiments, in order to make the enrolled face information more complete and to improve the security of face recognition, during the capture of 3D face images by the TOF camera 202, while the face preview picture 104 is displayed in real time in the preview interface of the mobile phone 100, the mobile phone 100 may also generate prompt information for prompting the user to adjust the face orientation or to adjust facial expressions and actions. For example, the mobile phone 100 generates prompt information for capturing face images of the user from multiple angles: front, left side, right side, looking up, looking down, mouth open, mouth closed, and blinking. For example, the mobile phone 100 displays, below the face preview picture 104, the prompt information 119 "please lower your head", "please raise your head", "please turn your face to the right", and "please turn your face to the left" shown in Fig. 9(e) to Fig. 9(h), respectively.
Step 807: the smart door lock 200 generates a notification message that the face enrollment is successful when it determines that the face enrollment is successful.
For example, in some embodiments, the smart door lock 200 performs feature extraction on the 3D face images captured by the TOF camera 202 and evaluates the image quality, and when the smart door lock 200 determines that the feature information of the face at each angle in these 3D face images can meet a preset standard, it determines that the face enrollment is successful and generates the notification message. For example, the processing module 212 of the smart door lock 200 extracts feature information from the 3D face images captured by the TOF camera 202; when the processing module 212 determines that the complete facial features such as the eyes, mouth, nose, and eyebrows have been extracted at different angles such as the user's front, left side, right side, head raised, and head lowered, and the proportions among these facial features meet a preset proportion threshold, it determines that the face enrollment is successful and generates the notification message that the face enrollment is successful.
Step 808: the smart door lock 200 sends the notification message that the face enrollment is successful to the mobile phone 100.
For example, in some embodiments, after generating the notification message that the face enrollment is successful, the processing module 212 of the smart door lock 200 sends the notification message to the mobile phone 100 through the transmission module 211 of the smart door lock 200.
Step 809: the mobile phone 100 prompts the user that the face enrollment is successful according to the received notification message.
For example, in some embodiments, after the mobile phone 100 receives, via the transmission module 115, the notification message that the face enrollment is successful sent by the smart door lock 200, the processing module 116 of the mobile phone 100 generates a control instruction based on the received message and sends it to the smart door lock application 128 of the mobile phone 100, and the smart door lock application 128 prompts the user that the face enrollment is successful according to the received control instruction.
For example, the interface of the mobile phone 100 shown in Fig. 9(i) displays prompt information 126 indicating that the face enrollment is successful, which includes a check-mark symbol and the text prompt "face enrollment successful"; after the user taps the "Done" button 127, it is determined that the smart door lock 200 has completed the face enrollment.
As can be seen from the above description of Fig. 8, the mobile phone 100 converts the face image captured by the cat's eye camera 201 and received from the smart door lock 200 into the TOF image coordinate system through the preset image processing method, so that the face preview image displayed by the mobile phone 100 can accurately reflect the position, angle, expression, and other conditions of the face in the field of view of the TOF camera 202. This enables the user to adjust the position, angle, expression, etc. of the face in time through the face preview interface and prompt information of the mobile phone 100, and enables the TOF camera 202 of the smart door lock 200 to quickly capture valid face images (i.e., face images from multiple angles containing the complete facial features), improving the efficiency and accuracy of face enrollment and improving the user experience.
The process, involved in step 803, in which the mobile phone 100 converts the image captured by the cat's eye camera 201 and received from the smart door lock 200 into the TOF image coordinate system through the preset image processing method will be described in detail below.
Fig. 11(a) shows, according to some embodiments of the present application, the processing by which the mobile phone 100 converts the image for face preview, captured by the cat's eye camera 201 and received from the smart door lock 200, into the TOF image coordinate system. Referring to Fig. 11(a), the aforementioned preset image processing method specifically includes the following steps:
Step 1101: the mobile phone 100 acquires the mapping relationship between the cat's eye space coordinate system and the TOF space coordinate system (denoted as the "first mapping relationship"), the mapping relationship between the cat's eye space coordinate system and the cat's eye image coordinate system (denoted as the "second mapping relationship"), and the mapping relationship between the TOF space coordinate system and the TOF image coordinate system (denoted as the "third mapping relationship").
In some implementations, the cat's eye space coordinate system refers to a three-dimensional space coordinate system whose coordinate origin is the lens center of the cat's eye camera 201; the TOF space coordinate system refers to a three-dimensional space coordinate system whose coordinate origin is the lens center of the TOF camera 202; the cat's eye image coordinate system refers to a two-dimensional planar rectangular coordinate system whose coordinate origin is the center of the photosensitive chip of the cat's eye camera 201; and the TOF image coordinate system refers to a two-dimensional planar rectangular coordinate system whose coordinate origin is the center of the photosensitive chip of the TOF camera 202.
In addition, suppose that point A is a point in three-dimensional space that appears in the fields of view of both the cat's eye camera 201 and the TOF camera 202. The three-dimensional coordinates of point A in the cat's eye space coordinate system are (x1, y1, z1); the two-dimensional coordinates of the pixel corresponding to point A in the image captured by the cat's eye camera 201 are (x2, y2); the three-dimensional coordinates of point A in the TOF space coordinate system are (x1', y1', z1'); and the two-dimensional coordinates of the pixel corresponding to point A in the image captured by the TOF camera 202 are (x2', y2').
The first mapping relationship then characterizes the mapping between the three-dimensional coordinates (x1, y1, z1) of point A in the cat's eye space coordinate system and the three-dimensional coordinates (x1', y1', z1') of point A in the TOF space coordinate system.
The second mapping relationship characterizes the mapping between the three-dimensional coordinates (x1, y1, z1) of point A in the cat's eye space coordinate system and the two-dimensional coordinates (x2, y2) of the pixel corresponding to point A in the image captured by the cat's eye camera 201.
The third mapping relationship characterizes the mapping between the three-dimensional coordinates (x1', y1', z1') of point A in the TOF space coordinate system and the two-dimensional coordinates (x2', y2') of the pixel corresponding to point A in the image captured by the TOF camera 202.
In addition, it should be noted that the above first to third mapping relationships can be calibrated by developers using a camera calibration algorithm before the smart door lock 200 leaves the factory, and then stored in the memory of the smart door lock 200. When the smart door lock 200 establishes a connection with the mobile phone 100 and uses the mobile phone 100 as an external device to display the face preview image, it can send the stored first to third mapping relationships to the mobile phone 100. Alternatively, the mobile phone 100 acquires the first to third mapping relationships by downloading and installing the smart door lock application 128.
For example, in some embodiments, before the smart door lock 200 leaves the factory, developers use Zhang Zhengyou's calibration algorithm with a calibration board, for example a single-plane checkerboard, to calibrate the above first mapping relationship between the cat's eye camera 201 and the TOF camera 202, as well as the second mapping relationship of the cat's eye camera 201 itself and the third mapping relationship of the TOF camera 202 itself, thereby obtaining, for example, the first mapping relationship between the cat's eye space coordinate system and the TOF space coordinate system shown in the following formula (1), the second mapping relationship between the cat's eye space coordinate system and the cat's eye image coordinate system shown in the following formula (2), and the third mapping relationship between the TOF space coordinate system and the TOF image coordinate system shown in the following formula (3):
$$\begin{bmatrix} X_M \\ 1 \end{bmatrix}=\begin{bmatrix} R & T \\ \mathbf{0}^{T} & 1 \end{bmatrix}\begin{bmatrix} X_T \\ 1 \end{bmatrix}\tag{1}$$

$$Z_m\begin{bmatrix} u_m \\ v_m \\ 1 \end{bmatrix}=\begin{bmatrix} f_{mx} & 0 & u_{m0} \\ 0 & f_{my} & v_{m0} \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} R_m & T_m \end{bmatrix}\begin{bmatrix} X_m \\ Y_m \\ Z_m \\ 1 \end{bmatrix}\tag{2}$$

$$Z_t\begin{bmatrix} u_t \\ v_t \\ 1 \end{bmatrix}=\begin{bmatrix} f_{tx} & 0 & u_{t0} \\ 0 & f_{ty} & v_{t0} \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} R_t & T_t \end{bmatrix}\begin{bmatrix} X_t \\ Y_t \\ Z_t \\ 1 \end{bmatrix}\tag{3}$$
上述公式(1)中各个符号的含义如下:
X_M:猫眼空间坐标系下的某点在标定板坐标系下的三维坐标。
X_T:TOF空间坐标系下某点(同上述X_M涉及的某点)在标定板坐标系下的三维坐标。
R:TOF空间坐标系相对于猫眼空间坐标系的旋转矩阵,可通过双目标定算法得到。
T:TOF空间坐标系相对于猫眼空间坐标系的平移矩阵,可通过双目标定算法得到。
公式(1)中的矩阵
$$\begin{bmatrix} R & T \\ \mathbf{0}^{T} & 1 \end{bmatrix}$$
为猫眼摄像头201和TOF摄像头202之间的外参矩阵。
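双目标定中,上述R、T的一种常见求法是由两相机各自相对标定板的外参推得:R = R_t·R_mᵀ,T = T_t − R·T_m。以下为该推导的示意代码(假设两相机外参均采用"标定板坐标系→相机空间坐标系"的约定,函数名为说明而设):

```python
# 示意:由两相机相对同一标定板的外参,推得两相机之间的相对位姿 (R, T)
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def mat_transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def mat_vec(A, v):
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

def relative_pose(Rm, Tm, Rt, Tt):
    """R = R_t @ R_m^T, T = T_t - R @ T_m(旋转矩阵的逆即其转置)"""
    R = mat_mul(Rt, mat_transpose(Rm))
    RTm = mat_vec(R, Tm)
    return R, [Tt[i] - RTm[i] for i in range(3)]
```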
上述公式(2)中各个符号的含义如下:
u_m、v_m:猫眼图像坐标系下某点的像素坐标。
Z_m:第一标定常量。
矩阵
$$\begin{bmatrix} R_m & T_m \\ \mathbf{0}^{T} & 1 \end{bmatrix}$$
为猫眼空间坐标系到标定板坐标系的外参矩阵;
R_m、T_m:猫眼摄像头201的相机外参,分别为标定板坐标系相对于猫眼空间坐标系的旋转矩阵和平移矩阵,可通过双目标定算法得到。
X_m、Y_m、Z_m:为公式(1)中X_M的具体坐标值,即X_M的坐标可以表示为(X_m, Y_m, Z_m),表征猫眼空间坐标系下某点在标定板坐标系下的三维坐标。
矩阵
$$\begin{bmatrix} f_{mx} & 0 & u_{m0} \\ 0 & f_{my} & v_{m0} \\ 0 & 0 & 1 \end{bmatrix}$$
为猫眼摄像头201自身的相机内参矩阵,其中的参数f_mx、f_my为猫眼摄像头201的焦距参数,u_m0、v_m0为猫眼摄像头201拍摄的图像中二维像素坐标的坐标原点,可以通过双目标定算法得到。
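公式(2)所表达的"外参变换+内参投影"过程,可以用如下示意代码表达(针孔模型、未考虑镜头畸变,函数名与参数名为说明而设;公式(3)的计算方式与之相同,仅换成TOF摄像头202的参数):

```python
# 示意:公式(2)的计算过程——标定板坐标系的点 -> 相机空间坐标系 -> 像素坐标
def project_board_point(Xb, Rc, Tc, fx, fy, u0, v0):
    """先用外参 [Rc|Tc] 把标定板坐标系下的点 Xb 变到相机空间坐标系,
    再按针孔模型用内参投影并对深度归一化,得到像素坐标 (u, v)。"""
    Xc = [sum(Rc[i][j] * Xb[j] for j in range(3)) + Tc[i] for i in range(3)]
    x, y, z = Xc
    return fx * x / z + u0, fy * y / z + v0
```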
上述公式(3)中各个符号的含义如下:
u_t、v_t:TOF图像坐标系下某点的像素坐标。
Z_t:第二标定常量。
矩阵
$$\begin{bmatrix} f_{tx} & 0 & u_{t0} \\ 0 & f_{ty} & v_{t0} \\ 0 & 0 & 1 \end{bmatrix}$$
为TOF摄像头202自身的相机内参矩阵,f_tx、f_ty为TOF摄像头202的焦距参数,u_t0、v_t0为TOF摄像头202拍摄的图像中二维像素坐标的坐标原点,可以通过双目标定算法得到。
R_t、T_t:TOF摄像头202的相机外参,分别为标定板坐标系相对于TOF空间坐标系的旋转矩阵和平移矩阵,可通过双目标定算法得到。
X_t、Y_t、Z_t:为公式(1)中X_T的具体坐标值,即X_T的坐标可以表示为(X_t, Y_t, Z_t),表征TOF空间坐标系下某点在标定板坐标系下的三维坐标。
矩阵
$$\begin{bmatrix} R_t & T_t \\ \mathbf{0}^{T} & 1 \end{bmatrix}$$
为TOF空间坐标系到标定板坐标系的外参矩阵。
步骤1102:手机100基于获取的猫眼空间坐标系和TOF空间坐标系之间的第一映射关系、猫眼空间坐标系和猫眼图像坐标系之间的第二映射关系以及TOF空间坐标系和TOF图像坐标系之间的第三映射关系,确定出猫眼图像坐标系和TOF图像坐标系之间的映射关系(记为“第四映射关系”)。
例如,在一些实施例中,手机100可以根据从智能门锁200获取的上述公式(1)中示出的猫眼空间坐标系和TOF空间坐标系之间的第一映射关系、上述公式(2)中示出的猫眼空间坐标系和猫眼图像坐标系之间的第二映射关系,以及上述公式(3)示出的TOF空间坐标系和TOF图像坐标系之间的第三映射关系,对上述公式(1)、公式(2)以及公式(3)进行解方程计算,即可确定出如公式(4)示出的猫眼图像坐标系和TOF图像坐标系之间的第四映射关系:
$$\begin{bmatrix} u_t \\ v_t \\ 1 \end{bmatrix}=\begin{bmatrix} s_{11} & s_{12} & t_1 \\ s_{21} & s_{22} & t_2 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} u_m \\ v_m \\ 1 \end{bmatrix}\tag{4}$$
其中,s_11、s_12、s_21、s_22为猫眼图像坐标到TOF图像坐标的线性变换系数,t_1、t_2为平移转换参数。
上述公式(4)中各个符号的含义如下:
u_t、v_t:TOF图像坐标系下某点的像素坐标;
u_m、v_m:猫眼图像坐标系下同一点的像素坐标;
t_1、t_2:猫眼图像坐标系相对于TOF图像坐标系的平移转换参数。
矩阵
$$\begin{bmatrix} s_{11} & s_{12} \\ s_{21} & s_{22} \end{bmatrix}$$
为猫眼图像坐标系下某点的像素坐标相对于TOF图像坐标系下同一点的像素坐标的变换矩阵。
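若已知若干对同名像素点(同一空间点分别在猫眼图像和TOF图像中的像素坐标),公式(4)中的变换矩阵与平移参数即可由线性方程组解出。以下为一个示意片段(假设用三对不共线的对应点精确求解;矩阵元素记法s_ij与函数名均为说明而设):

```python
# 示意:由三对不共线的同名像素点,解出公式(4)形式的 2x2 变换矩阵与平移参数
def solve3(A, b):
    """高斯消元解 3x3 线性方程组(示意用,未做奇异性处理)"""
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [M[r][k] - f * M[col][k] for k in range(4)]
    return [M[i][3] / M[i][i] for i in range(3)]

def fit_affine(src, dst):
    """src/dst 为三对对应像素点;按行分别解出 u、v 两个方向的系数,
    返回 2x2 变换矩阵 M 和平移 (t1, t2)。"""
    A = [[u, v, 1.0] for (u, v) in src]
    row_u = solve3(A, [u for (u, _) in dst])
    row_v = solve3(A, [v for (_, v) in dst])
    return [[row_u[0], row_u[1]], [row_v[0], row_v[1]]], (row_u[2], row_v[2])
```

实际标定中通常使用更多对应点做最小二乘拟合,以抑制噪声。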
步骤1103:手机100基于确定出的猫眼图像坐标系和TOF图像坐标系之间的映射关系,将猫眼摄像头201拍摄的人脸图像中各像素点的坐标转换到TOF图像坐标系下。
例如,手机100通过上述公式(4)所示的转换关系,将从智能门锁200接收到的猫眼摄像头201拍摄的用于人脸预览的人脸图像中的各像素点的坐标转换到TOF图像坐标系(以TOF摄像头202的感光芯片的中心为坐标原点的二维的平面直角坐标系)下。
如此,可以使得手机100将从智能门锁200接收到的猫眼摄像头201拍摄的用于人脸预览的图像,转换到TOF图像坐标系下,从而使手机100显示出的人脸预览图像能够准确反映出人脸在TOF摄像头202的视场中的位置、角度、表情等情况,从而可以使用户能够通过手机100的人脸预览界面和提示信息及时调整面部的位置、角度、表情等情况,进而使智能门锁200的TOF摄像头202能够快速抓取到有效的人脸图像,提高人脸录入的效率和准确度,提升用户体验。
可以理解的是,上述公式(4)只是手机100将从智能门锁200接收到的猫眼摄像头201拍摄的用于人脸预览的人脸图像中的各个像素点转换到TOF图像坐标系下的一个转换的示例,具体如何转换,研发人员可以根据实际情况确定,本申请对此不作限定。
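作为逐像素转换的一种可能实现示意(前向映射、最近邻取整;工程上更常用反向映射加插值以避免空洞,以下函数名与参数均为示例):

```python
# 示意:按公式(4)形式的仿射关系,把猫眼图像逐像素映射到TOF图像坐标系
def remap_to_tof(image, M, t, out_w, out_h):
    """image 为二维列表(行对应 v_m,列对应 u_m);M 为 2x2 变换矩阵,t 为平移。
    超出目标范围的像素被丢弃,未被覆盖的位置保持默认值 0。"""
    out = [[0] * out_w for _ in range(out_h)]
    for vm, row in enumerate(image):
        for um, pixel in enumerate(row):
            ut = int(round(M[0][0] * um + M[0][1] * vm + t[0]))
            vt = int(round(M[1][0] * um + M[1][1] * vm + t[1]))
            if 0 <= ut < out_w and 0 <= vt < out_h:
                out[vt][ut] = pixel
    return out
```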
在其他一些实施例中,上述猫眼图像坐标系和TOF图像坐标系之间的映射关系(也即第四映射关系)可以预先计算好之后存储到智能门锁200的存储器中。智能门锁200在和手机100建立连接,采用手机100作为外接设备来显示人脸的预览图像时,可以将存储的上述第四映射关系发送给手机100。或者,手机100通过下载安装智能门锁应用128获取上述第四映射关系。
下面具体结合图11(b),对步骤803中涉及的手机100通过预设的图像处理方法将从智能门锁200接收到的猫眼摄像头201拍摄的图像,转换到TOF图像坐标系下的另一种实现过程进行详细介绍。
图11(b)根据本申请的一些实施例,示出了手机100将从智能门锁200接收到的猫眼摄像头201拍摄的用于人脸预览的图像转换到TOF图像坐标系下的另一种处理过程。参考图11(b),前述预设的图像处理方法具体包括以下步骤:
步骤1101':手机100获取猫眼图像坐标系和TOF图像坐标系之间的映射关系。
在一些实施例中,在智能门锁200出厂之前,研发人员可以采用张正友标定算法通过标定板,例如单平面棋盘格,标定出猫眼空间坐标系和TOF空间坐标系之间的映射关系(记作“第一映射关系”)、猫眼空间坐标系和猫眼图像坐标系之间的映射关系(记作“第二映射关系”)以及TOF空间坐标系和TOF图像坐标系之间的映射关系(记作“第三映射关系”)。从而得到例如上述公式(1)中所示的猫眼空间坐标系和TOF空间坐标系之间的第一映射关系、例如上述公式(2)中所示的猫眼空间坐标系和猫眼图像坐标系之间的第二映射关系,以及例如上述公式(3)中所示的TOF空间坐标系和TOF图像坐标系之间的第三映射关系。再基于标定出的上述第一映射关系、第二映射关系以及第三映射关系,计算出如上述公式(4)所示的猫眼图像坐标系和TOF图像坐标系之间的映射关系(第四映射关系),然后将计算出的第四映射关系存储到智能门锁200的存储器中。当智能门锁200在和手机100建立连接,采用手机100作为外接设备来显示人脸的预览图像时,可以将存储的上述第四映射关系发送给手机100。或者,手机100通过下载安装智能门锁应用128获取上述第四映射关系。
其中,第一映射关系至第四映射关系的具体含义,以及具体的计算方法请参阅图11(a)中的对应的描述,在此不再赘述。
步骤1102':手机100基于获取的猫眼图像坐标系和TOF图像坐标系之间的映射关系(也即第四映射关系),将猫眼摄像头201拍摄的人脸图像中各像素点的坐标转换到TOF图像坐标系下。
例如,手机100通过上述公式(4)所示的转换关系,将从智能门锁200接收到的猫眼摄像头201拍摄的用于人脸预览的人脸图像中的各像素点的坐标转换到TOF图像坐标系(以TOF摄像头202的感光芯片的中心为坐标原点的二维的平面直角坐标系)下。
如此,可以使得手机100将从智能门锁200接收到的猫眼摄像头201拍摄的用于人脸预览的图像,转换到TOF图像坐标系下,从而使手机100显示出的人脸预览图像能够准确反映出人脸在TOF摄像头202的视场中的位置、角度、表情等情况,从而可以使用户能够通过手机100的人脸预览界面和提示信息及时调整面部的位置、角度、表情等情况,进而使智能门锁200的TOF摄像头202能够快速抓取到有效的人脸图像,提高人脸录入的效率和准确度,提升用户体验。
实施例二
以上图8所示的实施例中,将猫眼摄像头201拍摄的用于人脸预览的人脸图像中的各个像素点转换到TOF图像坐标系下的处理过程是由手机100来完成的。可以理解的是,该处理过程还可以由智能门锁200自身来完成。
下面将继续结合图7所示的系统结构图,以电子设备100为手机为例,对本申请提供的技术方案的另一种实施例进行介绍。
具体地,图12示出了由智能门锁200自身来实现将猫眼摄像头201拍摄的用于人脸预览的人脸图像中的各个像素点转换到TOF图像坐标系下的技术方案中的交互图。图12所示的交互图与实施例一中图8所示的交互图的区别仅在于:图12中的步骤802'和步骤803'分别与图8中的步骤802、步骤803不同,图12中的其他步骤与图8均相同。因此,为了避免重复,以下将仅对图12中的步骤802'和步骤803'进行介绍,其他步骤请参考以上关于图8所示的交互图的文字描述,在此不再赘述。图12中的步骤802'和步骤803'的具体内容分别如下:
步骤802':智能门锁200将猫眼摄像头201采集的人脸图像按照预设图像处理方法,转换到TOF图像坐标系下,得到作为预览图像的人脸图像。
例如,在一些实施例中,智能门锁200的猫眼摄像头201将采集的人脸图像发送至智能门锁200的处理模块212。智能门锁200的处理模块212将接收到的猫眼摄像头201采集的人脸图像按照预设图像处理方法,转换到TOF图像坐标系下。
例如,在一些实施例中,智能门锁200的处理模块212接收到猫眼摄像头201采集的人脸图像之后,按照如图11(a)所示的方法,基于从智能门锁200的存储器中获取的上述第一映射关系、第二映射关系以及第三映射关系,确定出猫眼图像坐标系和TOF图像坐标系之间的映射关系(也即第四映射关系),然后按照确定出的例如上述公式(4)所示的第四映射关系,将猫眼摄像头201拍摄的用于人脸预览的人脸图像中的各个像素点转换到TOF图像坐标系下。
在一些实施例中,智能门锁200的处理模块212接收到猫眼摄像头201采集的人脸图像之后,按照如图11(b)所示的方法,按照从智能门锁200的存储器中获取的上述猫眼图像坐标系和TOF图像坐标系之间的映射关系(也即第四映射关系),将猫眼摄像头201拍摄的用于人脸预览的人脸图像中的各个像素点转换到TOF图像坐标系下。具体可参阅上述实施例一中的相关描述,在此不再赘述。
步骤803':智能门锁200向手机100发送作为预览图像的人脸图像。
例如,在一些实施例中,智能门锁200的处理模块212经由智能门锁200的传输模块211,将转换到TOF图像坐标系下的人脸图像发送给手机100的传输模块115,之后手机100的传输模块115再将接收到的人脸图像发送给手机100的处理模块116。处理模块116接收到转换后的人脸图像之后,产生显示预览图像的控制指令,用于控制手机100的智能门锁应用128显示出预览图像。如此,可以使手机100显示出的人脸预览图像能够准确反映出人脸在TOF摄像头202的视场中的位置、角度、表情等情况,从而可以使用户能够通过手机100的人脸预览界面和提示信息及时调整面部的位置、角度、表情等情况,进而使智能门锁200的TOF摄像头202能够快速抓取到有效的人脸图像,提高人脸录入效率和准确度,提升用户体验。
下面将结合附图13对以上各实施例中涉及的手机100进行详细介绍。具体地,图13根据本申请的实施例,示出了一种手机100的硬件结构示意图。
在图13中,相似的部件具有同样的附图标记。如图13所示,手机100可以包括处理器110、电源模块140、存储器180、摄像头170、移动通信模块130、无线通信模块120、传感器模块190、音频模块150、接口模块160以及按键101、显示屏102等。
可以理解的是,本发明实施例示意的结构并不构成对手机100的具体限定。在本申请另一些实施例中,手机100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如,可以包括中央处理器(Central Processing Unit,CPU)、图像处理器(Graphics Processing Unit,GPU)、数字信号处理器(Digital Signal Processor,DSP)、微控制器(Micro-controller Unit,MCU)、人工智能(Artificial Intelligence,AI)处理器或现场可编程门阵列(Field Programmable Gate Array,FPGA)等的处理模块或处理电路。例如,在本申请的一些实例中,处理器110可以执行如图11(a)或图11(b)所示的方法,将从智能门锁200接收到的猫眼摄像头201拍摄的人脸图像转换到TOF图像坐标系下,并根据转换后的TOF图像坐标系下的图像产生提示信息,提示用户将面部调整到合适的位置。
存储器180可用于存储数据、软件程序以及模块。在本申请的一些实施例中,存储器180可用于存储图11(a)或图11(b)所示的方法的软件程序,以供处理器110运行该软件程序,将从智能门锁200接收到的猫眼摄像头201拍摄的人脸图像转换到TOF图像坐标系下,并根据转换后的TOF图像坐标系下的图像产生提示信息,提示用户将面部调整到合适的位置。此外,存储器180还可以用于存储从智能门锁200接收到的猫眼摄像头201拍摄的人脸图像,以及从智能门锁200接收到的猫眼摄像头201和TOF摄像头202之间的三维坐标点之间的映射关系、猫眼摄像头201和TOF摄像头202各自的三维坐标点和二维坐标点之间的映射关系等。
电源模块140可以包括电源、电源管理部件等。电源可以为电池。电源管理部件用于管理电源的充电和电源向其他模块的供电。其中,充电管理模块用于从充电器接收充电输入;电源管理模块用于连接电源、充电管理模块与处理器110。
移动通信模块130可以包括但不限于天线、功率放大器、滤波器、低噪声放大器(Low Noise Amplify,LNA)等。移动通信模块130可以提供应用在手机100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块130可以由天线接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块130还可以对经调制解调处理器调制后的信号放大,经天线转为电磁波辐射出去。在一些实施例中,移动通信模块130的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块130至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
无线通信模块120可以包括天线,并经由天线实现对电磁波的收发。无线通信模块120可以提供应用在手机100上的包括无线局域网(Wireless Local Area Networks,WLAN)(如无线保真(Wireless Fidelity,Wi-Fi)网络),蓝牙(Bluetooth,BT),全球导航卫星系统(Global Navigation Satellite System,GNSS),调频(Frequency Modulation,FM),近距离无线通信技术(Near Field Communication,NFC),红外技术(Infrared,IR)等无线通信的解决方案。在本申请的一些实施例中,手机100可以通过无线通信技术与智能门锁200进行通信。
在一些实施例中,手机100的移动通信模块130和无线通信模块120也可以位于同一模块中。
摄像头170用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件把光信号转换成电信号,之后将电信号传递给ISP(Image Signal Processor,图像信号处理器)转换成数字图像信号。手机100可以通过ISP,摄像头170,视频编解码器,GPU(Graphic Processing Unit,图形处理器),显示屏102以及应用处理器等实现拍摄功能。
显示屏102包括显示面板。显示面板可以采用液晶显示屏(Liquid Crystal Display,LCD),有机发光二极管(Organic Light-emitting Diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(Active-matrix Organic Light-emitting Diode,AMOLED),柔性发光二极管(Flex Light-emitting Diode,FLED),Mini LED,Micro LED,Micro OLED,量子点发光二极管(Quantum Dot Light-emitting Diodes,QLED)等。例如,显示屏102用于显示人脸的预览画面、提示用户调整面部位置、姿态的文字提示、图片提示以及提示用户人脸录入成功的符号提示、文字提示等。
传感器模块190可以包括接近光传感器、压力传感器,陀螺仪传感器,气压传感器,磁传感器,加速度传感器,距离传感器,指纹传感器,温度传感器,触摸传感器,环境光传感器,骨传导传感器等。
音频模块150可以将数字音频信息转换成模拟音频信号输出,或者将模拟音频输入转换为数字音频信号。音频模块150还可以用于对音频信号编码和解码。在一些实施例中,音频模块150可以设置于处理器110中,或将音频模块150的部分功能模块设置于处理器110中。在一些实施例中,音频模块150可以包括扬声器、听筒、麦克风以及耳机接口。例如,在本申请的一些实施例中,音频模块150用于播放手机100的语音提示信息,提示用户“请抬头”、“请低头”、“请将脸部转向左边”、“请将脸部转向右边”、“录入成功”等语音提示消息。
接口模块160包括外部存储器接口、通用串行总线(Universal Serial Bus,USB)接口及用户标识模块(Subscriber Identification Module,SIM)卡接口等。其中外部存储器接口可以用于连接外部存储卡,例如Micro SD卡,实现扩展手机100的存储能力。外部存储卡通过外部存储器接口与处理器110通信,实现数据存储功能。通用串行总线接口用于手机100和其他手机进行通信。用户标识模块卡接口用于与安装至手机100的SIM卡进行通信,例如读取SIM卡中存储的电话号码,或将电话号码写入SIM卡中。
在一些实施例中,手机100还包括按键、马达以及指示器等。其中,按键可以包括音量键、开/关机键等。马达用于使手机100产生振动效果,例如在用户的手机100和智能门锁200连接成功时产生振动,还可以在智能门锁200完成人脸录入时产生振动,以提示用户人脸录入成功。指示器可以包括激光指示器、射频指示器、LED指示器等。
本申请公开的机制的各实施例可以被实现在硬件、软件、固件或这些实现方法的组合中。本申请的实施例可实现为在可编程系统上执行的计算机程序或程序代码,该可编程系统包括至少一个处理器、存储系统(包括易失性和非易失性存储器和/或存储元件)、至少一个输入设备以及至少一个输出设备。
可将程序代码应用于输入指令,以执行本申请描述的各功能并生成输出信息。可以按已知方式将输出信息应用于一个或多个输出设备。为了本申请的目的,处理系统包括具有诸如例如数字信号处理器(Digital Signal Processor,DSP)、微控制器、专用集成电路(Application Specific Integrated Circuit,ASIC)或微处理器之类的处理器的任何系统。
程序代码可以用高级程序化语言或面向对象的编程语言来实现,以便与处理系统通信。在需要时,也可用汇编语言或机器语言来实现程序代码。事实上,本申请中描述的机制不限于任何特定编程语言的范围。在任一情形下,该语言可以是编译语言或解释语言。
在一些情况下,所公开的实施例可以以硬件、固件、软件或其任何组合来实现。所公开的实施例还可以被实现为由一个或多个暂时或非暂时性机器可读(例如,计算机可读)存储介质承载或存储在其上的指令,其可以由一个或多个处理器读取和执行。例如,指令可以通过网络或通过其他计算机可读介质分发。因此,机器可读介质可以包括用于以机器(例如,计算机)可读的形式存储或传输信息的任何机制,包括但不限于,软盘、光盘、光碟、只读存储器(CD-ROMs)、磁光盘、只读存储器(Read Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、可擦除可编程只读存储器(Erasable Programmable Read Only Memory,EPROM)、电可擦除可编程只读存储器(Electrically Erasable Programmable Read-Only Memory,EEPROM)、磁卡或光卡、闪存、或用于利用因特网以电、光、声或其他形式的传播信号来传输信息(例如,载波、红外信号、数字信号等)的有形的机器可读存储器。因此,机器可读介质包括适合于以机器(例如计算机)可读的形式存储或传输电子指令或信息的任何类型的机器可读介质。
在附图中,可以以特定布置和/或顺序示出一些结构或方法特征。然而,应该理解,可能不需要这样的特定布置和/或排序。而是,在一些实施例中,这些特征可以以不同于说明性附图中所示的方式和/或顺序来布置。另外,在特定图中包括结构或方法特征并不意味着暗示在所有实施例中都需要这样的特征,并且在一些实施例中,可以不包括这些特征或者可以与其他特征组合。
需要说明的是,本申请各设备实施例中提到的各单元/模块都是逻辑单元/模块,在物理上,一个逻辑单元/模块可以是一个物理单元/模块,也可以是一个物理单元/模块的一部分,还可以以多个物理单元/模块的组合实现,这些逻辑单元/模块本身的物理实现方式并不是最重要的,这些逻辑单元/模块所实现的功能的组合才是解决本申请所提出的技术问题的关键。此外,为了突出本申请的创新部分,本申请上述各设备实施例并没有将与解决本申请所提出的技术问题关系不太密切的单元/模块引入,这并不表明上述设备实施例并不存在其它的单元/模块。
需要说明的是,在本专利的示例和说明书中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
虽然通过参照本申请的某些优选实施例,已经对本申请进行了图示和描述,但本领域的普通技术人员应该明白,可以在形式上和细节上对其作各种改变,而不偏离本申请的精神和范围。

Claims (35)

  1. 一种人脸图像显示方法,应用于包含第一电子设备和第二电子设备的系统中,其特征在于,所述第一电子设备包括第一摄像头和第二摄像头,所述第一摄像头和所述第二摄像头之间具有成像映射关系,所述方法包括:
    所述第一电子设备获取所述第一摄像头在第一时刻采集的第一人脸图像,其中,所述第一人脸图像中的人脸位于所述第一人脸图像的第一位置;
    所述第二电子设备显示第二人脸图像,其中,所述第二人脸图像是根据所述成像映射关系由所述第一人脸图像变换得到的图像,所述人脸在所述第二人脸图像中的位置与所述人脸在所述第二摄像头在所述第一时刻采集的图像中的位置相同。
  2. 根据权利要求1所述的方法,其特征在于,所述成像映射关系为第一摄像头图像坐标系和第二摄像头图像坐标系之间的映射关系,
    其中,所述第一摄像头图像坐标系是与所述第一摄像头相关联的二维坐标系,所述第二摄像头图像坐标系是与所述第二摄像头相关联的二维坐标系。
  3. 根据权利要求1-2中任一项所述的方法,其特征在于,所述成像映射关系是预设的参数,或者,
    所述成像映射关系是由所述第一电子设备或所述第二电子设备根据第一映射关系、第二映射关系以及第三映射关系确定的,
    其中,所述第一映射关系是第一摄像头空间坐标系与第二摄像头空间坐标系之间的映射关系,所述第一摄像头空间坐标系是与所述第一摄像头关联的三维坐标系,所述第二摄像头空间坐标系是与所述第二摄像头关联的三维坐标系,
    所述第二映射关系是所述第一摄像头空间坐标系与所述第一摄像头图像坐标系之间的映射关系,
    所述第三映射关系是所述第二摄像头空间坐标系与所述第二摄像头图像坐标系之间的映射关系。
  4. 根据权利要求1-3中任一项所述的方法,其特征在于,所述方法还包括:
    所述第二电子设备接收所述第一电子设备发送的第一人脸图像,根据所述成像映射关系处理所述第一人脸图像,得到所述第二人脸图像,
    或者,
    所述第二电子设备接收所述第一电子设备发送的第二人脸图像。
  5. 根据权利要求1-4中任一项所述的方法,其特征在于,所述第一电子设备或所述第二电子设备通过与所述第一电子设备相关联的应用程序获取所述成像映射关系。
  6. 根据权利要求1-5中任一项所述的方法,其特征在于,所述方法还包括:
    所述第二电子设备显示用于提示用户调整人脸位置的第一提示信息。
  7. 根据权利要求6所述的方法,其特征在于,所述方法还包括:
    在所述第二电子设备显示的所述第二人脸图像中,所述人脸的位置靠近所述第二人脸图像的中心位置时,所述第二电子设备不再显示所述第一提示信息。
  8. 根据权利要求6或7所述的方法,其特征在于,所述方法还包括:
    所述第二电子设备显示用于提示用户调整人脸朝向的第二提示信息。
  9. 根据权利要求8所述的方法,其特征在于,所述第一提示信息和/或所述第二提示信息是根据所述第二人脸图像生成的。
  10. 根据权利要求1-9中任一项所述的方法,其特征在于,所述第一摄像头的视场角大于所述第二摄像头的视场角。
  11. 根据权利要求1-10中任一项所述的方法,其特征在于,所述第一摄像头为猫眼摄像头、鱼眼摄像头中的任一种摄像头,所述第二摄像头为TOF摄像头、结构光摄像头、双目立体成像摄像头、深度摄像头、红外摄像头中的任一种摄像头,所述第二摄像头用于采集图像的深度信息。
  12. 根据权利要求1-11中任一项所述的方法,其特征在于,所述第一电子设备为智能门锁,所述第二电子设备为手机。
  13. 一种人脸图像显示方法,应用于第二电子设备,其特征在于,所述方法包括:
    显示第二人脸图像,其中,所述第二人脸图像是根据成像映射关系由第一人脸图像变换得到的图像,所述成像映射关系是第一摄像头和第二摄像头采集的图像之间的映射关系,所述第一摄像头和所述第二摄像头包含于不同于所述第二电子设备的第一电子设备中,所述第一人脸图像是所述第一摄像头在第一时刻采集的图像,所述第一人脸图像中的人脸位于所述第一人脸图像的第一位置,所述人脸在所述第二人脸图像中的位置与所述人脸在所述第二摄像头在所述第一时刻采集的图像中的位置相同。
  14. 根据权利要求13所述的方法,其特征在于,所述成像映射关系为第一摄像头图像坐标系和第二摄像头图像坐标系之间的映射关系,
    其中,所述第一摄像头图像坐标系是与所述第一摄像头相关联的二维坐标系,所述第二摄像头图像坐标系是与所述第二摄像头相关联的二维坐标系。
  15. 根据权利要求13-14中任一项所述的方法,其特征在于,所述成像映射关系是预设的参数,或者,
    所述成像映射关系是由所述第二电子设备根据第一映射关系、第二映射关系以及第三映射关系确定的,
    其中,所述第一映射关系是第一摄像头空间坐标系与第二摄像头空间坐标系之间的映射关系,所述第一摄像头空间坐标系是与所述第一摄像头关联的三维坐标系,所述第二摄像头空间坐标系是与所述第二摄像头关联的三维坐标系,
    所述第二映射关系是所述第一摄像头空间坐标系与所述第一摄像头图像坐标系之间的映射关系,
    所述第三映射关系是所述第二摄像头空间坐标系与所述第二摄像头图像坐标系之间的映射关系。
  16. 根据权利要求13-15中任一项所述的方法,其特征在于,所述方法还包括:
    接收所述第一电子设备发送的第一人脸图像,根据所述成像映射关系处理所述第一人脸图像,得到所述第二人脸图像,
    或者,
    接收所述第一电子设备发送的第二人脸图像。
  17. 根据权利要求13-16中任一项所述的方法,其特征在于,所述第二电子设备通过与所述第一电子设备相关联的应用程序获取所述成像映射关系。
  18. 根据权利要求13-17中任一项所述的方法,其特征在于,所述方法还包括:
    显示用于提示用户调整人脸位置的第一提示信息。
  19. 根据权利要求18所述的方法,其特征在于,所述方法还包括:
    在显示的所述第二人脸图像中,所述人脸的位置靠近所述第二人脸图像的中心位置时,所述第二电子设备不再显示所述第一提示信息。
  20. 根据权利要求18或19所述的方法,其特征在于,所述方法还包括:
    显示用于提示用户调整人脸朝向的第二提示信息。
  21. 根据权利要求20所述的方法,其特征在于,所述第一提示信息和/或所述第二提示信息是根据所述第二人脸图像生成的。
  22. 根据权利要求13-21中任一项所述的方法,其特征在于,所述第一摄像头的视场角大于所述第二摄像头的视场角。
  23. 根据权利要求13-22中任一项所述的方法,其特征在于,所述第一摄像头为猫眼摄像头、鱼眼摄像头中的任一种摄像头,所述第二摄像头为TOF摄像头、结构光摄像头、双目立体成像摄像头、深度摄像头、红外摄像头中的任一种摄像头,所述第二摄像头用于采集图像的深度信息。
  24. 根据权利要求13-23中任一项所述的方法,其特征在于,所述第二电子设备为手机。
  25. 一种人脸图像显示方法,应用于第一电子设备,其特征在于,所述第一电子设备包括第一摄像头和第二摄像头,所述第一摄像头和所述第二摄像头之间具有成像映射关系,所述方法包括:
    获取所述第一摄像头在第一时刻采集的第一人脸图像,其中,所述第一人脸图像中的人脸位于所述第一人脸图像的第一位置,所述第一人脸图像用于根据所述成像映射关系变换得到第二人脸图像,所述人脸在所述第二人脸图像中的位置与所述人脸在所述第二摄像头在所述第一时刻采集的图像中的位置相同,所述第二人脸图像用于在不同于所述第一电子设备的第二电子设备中显示。
  26. 根据权利要求25所述的方法,其特征在于,所述成像映射关系为第一摄像头图像坐标系和第二摄像头图像坐标系之间的映射关系,
    其中,所述第一摄像头图像坐标系是与所述第一摄像头相关联的二维坐标系,所述第二摄像头图像坐标系是与所述第二摄像头相关联的二维坐标系。
  27. 根据权利要求25-26中任一项所述的方法,其特征在于,所述成像映射关系是预设的参数,或者,
    所述成像映射关系是由所述第一电子设备根据第一映射关系、第二映射关系以及第三映射关系确定的,
    其中,所述第一映射关系是第一摄像头空间坐标系与第二摄像头空间坐标系之间的映射关系,所述第一摄像头空间坐标系是与所述第一摄像头关联的三维坐标系,所述第二摄像头空间坐标系是与所述第二摄像头关联的三维坐标系,
    所述第二映射关系是所述第一摄像头空间坐标系与所述第一摄像头图像坐标系之间的映射关系,
  28. 根据权利要求25-27中任一项所述的方法,其特征在于,所述第一电子设备通过与所述第一电子设备相关联的应用程序获取所述成像映射关系。
  29. 根据权利要求25-28中任一项所述的方法,其特征在于,所述第一摄像头的视场角大于所述第二摄像头的视场角。
  30. 根据权利要求25-29中任一项所述的方法,其特征在于,所述第一摄像头为猫眼摄像头、鱼眼摄像头中的任一种摄像头,所述第二摄像头为TOF摄像头、结构光摄像头、双目立体成像摄像头、深度摄像头、红外摄像头中的任一种摄像头,所述第二摄像头用于采集图像的深度信息。
  31. 根据权利要求25-30中任一项所述的方法,其特征在于,所述第一电子设备为智能门锁。
  32. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质上存储有指令,该指令在电子设备上执行时使电子设备执行权利要求13-24中任一项所述的方法或者权利要求25-31中任一项所述的方法。
  33. 一种计算机程序产品,其特征在于,所述计算机程序产品包括指令,所述指令用于实现如权利要求13-24中任一项所述的方法或者权利要求25-31中任一项所述的方法。
  34. 一种芯片装置,其特征在于,所述芯片装置包括:
    通信接口,用于输入和/或输出信息;
    处理器,用于执行计算机可执行程序,使得安装有所述芯片装置的设备执行如权利要求13-24中任一项所述的方法或者权利要求25-31中任一项所述的方法。
  35. 一种电子设备,其特征在于,包括:
    存储器,用于存储由电子设备的一个或多个处理器执行的指令,以及
    处理器,当所述指令被一个或多个处理器执行时,所述处理器用于执行权利要求13-24中任一项所述的方法或者权利要求25-31中任一项所述的方法。
PCT/CN2022/087992 2021-06-18 2022-04-20 人脸图像显示方法、可读存储介质、程序产品及电子设备 WO2022262408A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP22823895.2A EP4336404A1 (en) 2021-06-18 2022-04-20 Face image display method, readable storage medium, program product, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110679625.5 2021-06-18
CN202110679625.5A CN115497130A (zh) 2021-06-18 2021-06-18 人脸图像显示方法、可读存储介质、程序产品及电子设备

Publications (1)

Publication Number Publication Date
WO2022262408A1 true WO2022262408A1 (zh) 2022-12-22

Family

ID=84465322

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/087992 WO2022262408A1 (zh) 2021-06-18 2022-04-20 人脸图像显示方法、可读存储介质、程序产品及电子设备

Country Status (3)

Country Link
EP (1) EP4336404A1 (zh)
CN (1) CN115497130A (zh)
WO (1) WO2022262408A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106447861A (zh) * 2016-09-30 2017-02-22 广西大学 一种高效智能门禁管理系统
CN108446684A (zh) * 2018-05-22 2018-08-24 深圳腾视科技有限公司 一种人脸分析系统的校准机构及其校准方法
US20180295350A1 (en) * 2015-01-21 2018-10-11 Chengdu Idealsee Technology Co., Ltd. Binocular See-Through AR Head-Mounted Display Device and Information Display Method Therefor
CN110889913A (zh) * 2019-10-30 2020-03-17 西安海云物联科技有限公司 一种人脸识别的智能门锁及其人脸录入方法
CN111311792A (zh) * 2020-02-12 2020-06-19 德施曼机电(中国)有限公司 智能锁验证系统及其方法
CN112002044A (zh) * 2020-10-30 2020-11-27 兰和科技(深圳)有限公司 一种智能门锁的人脸识别开锁系统及其判断方法


Also Published As

Publication number Publication date
EP4336404A1 (en) 2024-03-13
CN115497130A (zh) 2022-12-20

