CN107563304B - Terminal equipment unlocking method and device and terminal equipment - Google Patents


Info

Publication number
CN107563304B
CN107563304B (application CN201710677498.9A)
Authority
CN
China
Prior art keywords: model, user, human face, structured light, face
Legal status: Active (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Application number
CN201710677498.9A
Other languages: Chinese (zh)
Other versions: CN107563304A
Inventor
周意保
Current Assignee (listed assignees may be inaccurate): Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to application CN201710677498.9A
Publication of CN107563304A
Application granted
Publication of CN107563304B

Landscapes

  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a terminal device unlocking method and apparatus, and a terminal device. The method includes: projecting structured light onto a user with a structured light device to obtain a 3D model of the user's face; analyzing the face 3D model and extracting its feature point information; comparing that feature point information with the feature point information of a pre-stored standard face 3D model to obtain the similarity between the user's face 3D model and the standard face 3D model; and determining whether to unlock the terminal device according to the similarity. Face recognition and unlocking are thus performed on the three-dimensional feature point information of the user's face 3D model, which improves the accuracy of face recognition and of unlocking; and because a face 3D model is difficult to copy, the risk that sensitive information on the terminal device is stolen is reduced and the security of the terminal device is improved.

Description

Terminal equipment unlocking method and device and terminal equipment
Technical Field
The invention relates to the field of terminal equipment, in particular to a terminal equipment unlocking method and device and terminal equipment.
Background
With the development of mobile communication technology, mobile terminals have become indispensable communication devices in daily life, and their privacy and security receive increasing attention. Owing to its convenient operation and high security, face recognition technology is gradually being applied in mobile terminal systems, for example for system unlocking, secure payment, and application login.
Take system unlocking of a mobile terminal by face recognition as an example. Existing face-recognition unlocking relies mainly on two-dimensional face recognition. Two-dimensional recognition is fast, but its accuracy drops when the external environment is not ideal; moreover, a two-dimensional face image is easy to copy, so sensitive information on the mobile terminal can be stolen and the security of the mobile terminal is poor.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, a first objective of the present invention is to provide an unlocking method for a terminal device, so as to safely unlock the terminal device and solve the problems of poor safety and low accuracy of the existing unlocking method for the terminal device.
The second purpose of the invention is to provide an unlocking device of the terminal equipment.
A third object of the present invention is to provide a terminal device.
A fourth object of the invention is to propose a non-transitory computer-readable storage medium.
To achieve the above object, an embodiment of a first aspect of the present invention provides a method for unlocking a terminal device, including:
projecting to a user by adopting structured light equipment to obtain a human face 3D model of the user;
analyzing the face 3D model of the user, and extracting feature point information in the face 3D model;
comparing the characteristic point information with characteristic point information of a pre-stored standard human face 3D model to obtain the similarity between the human face 3D model of the user and the standard human face 3D model;
and determining whether to unlock the terminal equipment according to the similarity.
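The four claimed steps can be sketched in code. This is an illustrative sketch only: the feature representation (a dict of named 3D points), the similarity measure, and the threshold value are assumptions for illustration, not part of the patent.

```python
# Illustrative sketch of the claimed unlock flow. The feature representation
# (a dict of named 3D points) and the similarity measure are assumptions.

def extract_feature_points(face_model):
    """Step 2: extract feature point information from a face 3D model.
    Here the 'model' is represented directly as named 3D coordinates."""
    return dict(face_model)

def similarity(features, standard_features):
    """Step 3: compare feature points; map mean Euclidean distance to (0, 1]."""
    shared = features.keys() & standard_features.keys()
    mean_dist = sum(
        sum((a - b) ** 2 for a, b in zip(features[k], standard_features[k])) ** 0.5
        for k in shared
    ) / len(shared)
    return 1.0 / (1.0 + mean_dist)

def should_unlock(face_model, standard_model, threshold=0.9):
    """Step 4: unlock iff similarity reaches a preset threshold."""
    sim = similarity(extract_feature_points(face_model),
                     extract_feature_points(standard_model))
    return sim >= threshold
```

An identical model scores similarity 1.0 and unlocks; a model whose points deviate noticeably falls below the threshold and stays locked.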
As a possible implementation manner of the embodiment of the first aspect of the present invention, the obtaining a 3D model of a face of a user by projecting to the user by using a structured light device includes:
adopting structured light equipment to project structured light to a user;
shooting the structured light projection of the user by adopting a camera to obtain a face depth image of the user;
and calculating and acquiring a human face 3D model of the user by combining the human face depth image of the user and the position relation between the structured light equipment and the camera.
As a possible implementation manner of the embodiment of the first aspect of the present invention, the structured light generated by the structured light device is non-uniform structured light.
As a possible implementation manner of the embodiment of the first aspect of the present invention, the comparing the feature point information with feature point information of a pre-stored standard human face 3D model to obtain a similarity between the human face 3D model of the user and the standard human face 3D model includes:
analyzing the feature point information of the human face 3D model of the user to acquire the parameter information of each organ and the position information among the organs in the human face 3D model of the user;
analyzing the characteristic point information of the standard human face 3D model to obtain the parameter information of each organ and the position information among the organs in the standard human face 3D model;
comparing the parameter information of each organ and the position information among the organs in the human face 3D model of the user with the parameter information of each organ and the position information among the organs in the standard human face 3D model to obtain the similarity between the human face 3D model of the user and the standard human face 3D model.
As a possible implementation manner of the embodiment of the first aspect of the present invention, before the projecting the user with the structured light device and acquiring the 3D model of the face of the user, the method further includes:
projecting towards a terminal equipment user by adopting structured light equipment to obtain a standard human face 3D model of the terminal equipment user;
analyzing the standard human face 3D model, and extracting feature point information in the standard human face 3D model;
and storing the characteristic point information in the standard human face 3D model.
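A minimal sketch of this enrollment step: capture the owner's standard model once, extract its feature points, and persist them for later comparison. The in-memory store and the dict-of-points feature representation are illustrative assumptions.

```python
# Illustrative enrollment sketch. The storage (an in-memory dict keyed by
# device) and the feature representation (named 3D points) are assumptions.

feature_store = {}

def enroll(device_id, standard_face_model):
    """Extract and store the feature point information of the owner's
    standard face 3D model for subsequent unlock comparisons."""
    feature_store[device_id] = dict(standard_face_model)

def stored_features(device_id):
    """Retrieve the pre-stored standard feature points, or None if not enrolled."""
    return feature_store.get(device_id)
```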
As a possible implementation manner of the embodiment of the first aspect of the present invention, the determining whether to unlock the terminal device according to the similarity includes:
if the similarity is greater than or equal to a preset threshold value, unlocking the terminal equipment;
and if the similarity is smaller than a preset threshold value, the terminal equipment is not unlocked.
According to the terminal device unlocking method, a structured light device projects onto the user to obtain a 3D model of the user's face; the face 3D model is analyzed and its feature point information is extracted; that information is compared with the feature point information of a pre-stored standard face 3D model to obtain the similarity between the two models; and whether to unlock the terminal device is determined according to the similarity. Face recognition and unlocking are therefore based on the three-dimensional feature point information of the user's face 3D model, which improves their accuracy; and because a face 3D model is difficult to copy, the risk that sensitive information on the terminal device is stolen is reduced and the security of the terminal device is improved.
In order to achieve the above object, a second aspect of the present invention provides an unlocking device for a terminal device, including:
the first projection module is used for projecting to a user by adopting structured light equipment to obtain a human face 3D model of the user;
the first extraction module is used for analyzing the human face 3D model of the user and extracting feature point information in the human face 3D model;
the comparison module is used for comparing the feature point information with the feature point information of a pre-stored standard human face 3D model to obtain the similarity between the human face 3D model of the user and the standard human face 3D model;
and the unlocking module is used for determining whether to unlock the terminal equipment according to the similarity.
As a possible implementation manner of the embodiment of the second aspect of the present invention, the first projection module includes:
the projection unit is used for projecting structured light to the user by adopting the structured light equipment;
The camera shooting unit is used for shooting the structured light projection of the user by adopting a camera to acquire a face depth image of the user;
and the calculating unit is used for calculating and acquiring a human face 3D model of the user by combining the human face depth image of the user and the position relation between the structured light equipment and the camera.
As a possible implementation manner of the embodiment of the second aspect of the present invention, the structured light generated by the structured light device is non-uniform structured light.
As a possible implementation manner of the embodiment of the second aspect of the present invention, the comparing module includes:
the first analysis module is used for analyzing the feature point information of the human face 3D model of the user to acquire the parameter information of each organ and the position information among the organs in the human face 3D model of the user;
the second analysis module is used for analyzing the characteristic point information of the standard human face 3D model to obtain the parameter information of each organ and the position information among the organs in the standard human face 3D model;
and the comparison unit is used for comparing the parameter information of each organ and the position information among the organs in the human face 3D model of the user with the parameter information of each organ and the position information among the organs in the standard human face 3D model to acquire the similarity between the human face 3D model of the user and the standard human face 3D model.
As a possible implementation manner of the embodiment of the second aspect of the present invention, the apparatus further includes:
the second projection module is used for projecting to the terminal equipment user by adopting the structured light equipment to obtain a standard human face 3D model of the terminal equipment user;
the second extraction module is used for analyzing the standard human face 3D model and extracting feature point information in the standard human face 3D model;
and the storage module is used for storing the characteristic point information in the standard human face 3D model.
As a possible implementation manner of the embodiment of the second aspect of the present invention, the unlocking module is specifically configured to,
when the similarity is greater than or equal to a preset threshold value, unlocking the terminal equipment;
and when the similarity is smaller than the preset threshold value, not unlocking the terminal equipment.
According to the terminal device unlocking apparatus, a structured light device projects onto the user to obtain a 3D model of the user's face; the face 3D model is analyzed and its feature point information is extracted; that information is compared with the feature point information of a pre-stored standard face 3D model to obtain the similarity between the two models; and whether to unlock the terminal device is determined according to the similarity. Face recognition and unlocking are therefore based on the three-dimensional feature point information of the user's face 3D model, which improves their accuracy; and because a face 3D model is difficult to copy, the risk that sensitive information on the terminal device is stolen is reduced and the security of the terminal device is improved.
To achieve the above object, a third aspect of the present invention provides a terminal device, including:
the terminal equipment unlocking method comprises a shell, and a processor and a memory which are positioned in the shell, wherein the processor runs a program corresponding to an executable program code by reading the executable program code stored in the memory so as to realize the terminal equipment unlocking method.
To achieve the above object, an embodiment of a fourth aspect of the present invention provides a non-transitory computer-readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements the terminal device unlocking method of the first aspect.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a method for unlocking a terminal device according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a different form of structured light projection provided by an embodiment of the present invention;
fig. 3 is a schematic flowchart of another method for unlocking a terminal device according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of another method for unlocking a terminal device according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an unlocking device for a terminal device according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of another unlocking device for terminal equipment according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of another unlocking device for a terminal device according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of another unlocking device for a terminal device according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The following describes a method and an apparatus for unlocking a terminal device, and a terminal device according to an embodiment of the present invention with reference to the drawings.
The terminal device unlocking method described below can unlock the terminal device safely, solving the problems of poor security and low accuracy of existing terminal device unlocking methods.
Fig. 1 is a flowchart illustrating a method for unlocking a terminal device according to an embodiment of the present invention.
As shown in fig. 1, the terminal device unlocking method includes the following steps:
s101, projecting the user by adopting structured light equipment to obtain a human face 3D model of the user.
The execution body of the terminal device unlocking method provided by this embodiment is a terminal device unlocking apparatus, which may specifically be hardware in, or software installed on, the terminal device. The terminal device may be a smartphone, a tablet computer such as an iPad, or the like.
In this embodiment, structured light refers to a set of projected rays whose spatial directions are known, and a structured light device is a device that generates such light and projects it onto an object under measurement. The structured light modes may include: the point structured light mode, the line structured light mode, the multi-line structured light mode, the plane structured light mode, and the phase-method mode. In the point mode, the beam emitted by the structured light device forms a light spot on the measured object; the spot is imaged through the camera lens onto the camera's imaging plane as a two-dimensional point. In the line mode, the beam forms a single light stripe on the measured object; imaged through the camera lens, the stripe may appear distorted or broken. In the multi-line mode, the beam forms multiple stripes on the measured object. In the plane mode, the beam forms a light plane on the measured object. The degree of stripe distortion is proportional to the depth of each part of the measured object, while breaks in the stripe correspond to cracks and similar features on its surface.
Combining the light spot, stripe, or light plane on the camera's imaging plane with the positional relationship between the camera and the structured light device yields a triangular geometric constraint. This constraint uniquely determines the spatial position of the spot, stripe, or plane in a known world coordinate system, i.e., the spatial position of each part and each feature point of the measured object; combined with the color information captured by the camera, the three-dimensional shape of the measured object can then be reconstructed. As shown in fig. 2, different structured light devices form different forms of structured light projection on the measured object.
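The triangular constraint can be written down for the simplest case, a single spot in the plane containing the projector, the camera, and the spot. The angle convention below is an assumption chosen for illustration.

```python
import math

# Depth of one structured-light spot by ray intersection. The projector sits
# at the origin and the camera at distance `baseline` along the x-axis;
# `alpha` and `beta` are the ray angles (radians) measured from the baseline
# at the projector and the camera respectively. Writing Z*cot(alpha) +
# Z*cot(beta) = baseline and solving gives:
#   Z = baseline * sin(alpha) * sin(beta) / sin(alpha + beta)
def triangulate_depth(baseline, alpha, beta):
    return baseline * math.sin(alpha) * math.sin(beta) / math.sin(alpha + beta)
```

With both rays at 45 degrees the spot lies midway between the two devices, at a depth of half the baseline; steeper angles place it further away, which is the proportionality between distortion and depth mentioned above.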
In addition, in this embodiment, the structured light mode may further include the speckle structured light mode, in which the beam emitted by the structured light device forms a non-uniform array of light spots on the measured object.
In this embodiment, specifically, the terminal device unlocking apparatus may call the structured light device to perform structured light projection to the user; shooting a structured light projection of a user by adopting a camera to obtain a face depth image of the user; and calculating to obtain a human face 3D model of the user by combining the human face depth image of the user and the position relation between the structured light equipment and the camera.
In this embodiment, the terminal device unlocking apparatus may obtain the position of the user in advance, and adjust the projection angle and the projection range of the structured light device according to the position of the user, so as to bring the position of the user into the projection range; or the terminal equipment unlocking device can provide the projection range to the user in advance so that the user can move into the projection range.
In this embodiment, the terminal device unlocking apparatus may call the structured light device to perform structured light projection on the user, and call the camera to shoot the structured light projection within the projection range, so as to obtain a face depth image of the user, and further obtain a face 3D model of the user.
To acquire the user's position, the terminal device unlocking apparatus may call the structured light device and sweep its projection angle and range to acquire depth images of the surrounding objects; it then analyzes those depth images, extracts their feature point information, identifies the depth image of a human body from that information, and thereby determines the position of the body and of the face.
S102, analyzing the face 3D model of the user, and extracting feature point information in the face 3D model.
In this embodiment, the terminal device unlocking apparatus may extract feature point information in a partial region of the face, for example, feature point information in regions such as an eye region, an eyebrow region, a nose region, an ear region, and a mouth region, and perform comparison based on the feature point information, thereby reducing the number of feature points to be compared, reducing the amount of computation, increasing the face recognition speed, and increasing the terminal device unlocking speed and efficiency.
S103, comparing the characteristic point information with the characteristic point information of a pre-stored standard human face 3D model, and acquiring the similarity between the human face 3D model of the user and the standard human face 3D model.
In this embodiment, in an implementation manner, the terminal device unlocking apparatus may compare feature point information of all regions in the face 3D model of the user with feature point information of all regions in a pre-stored standard face 3D model, and obtain a similarity between the face 3D model of the user and the standard face 3D model. In another embodiment, the terminal device unlocking apparatus may compare the feature point information of the partial region in the user's face 3D model with the pre-stored feature point information of the partial region in the standard face 3D model, and obtain the similarity between the user's face 3D model and the standard face 3D model. The partial regions include, for example, the region of the eyes, the region of the eyebrows, the region of the nose, the region of the ears, the region of the mouth, and the like.
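The full-region and partial-region variants differ only in which feature points enter the comparison. A sketch of the region filter follows; the region names and the `region_point` key convention are illustrative assumptions.

```python
# Keep only feature points belonging to the selected face regions. Keys are
# assumed to follow a "region_point" naming convention (an illustrative choice).
DEFAULT_REGIONS = ("eye", "eyebrow", "nose", "ear", "mouth")

def select_regions(features, regions=DEFAULT_REGIONS):
    """Filter a feature-point dict down to the named regions, so fewer
    points enter the comparison and less computation is needed."""
    return {name: point for name, point in features.items()
            if name.split("_", 1)[0] in regions}
```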
And S104, determining whether to unlock the terminal equipment according to the similarity.
In this embodiment, specifically, the terminal device unlocking apparatus may unlock the terminal device when the similarity between the user's face 3D model and the standard face 3D model is greater than or equal to a preset threshold, and leave the terminal device locked when the similarity is smaller than the preset threshold.
According to the terminal device unlocking method, a structured light device projects onto the user to obtain a 3D model of the user's face; the face 3D model is analyzed and its feature point information is extracted; that information is compared with the feature point information of a pre-stored standard face 3D model to obtain the similarity between the two models; and whether to unlock the terminal device is determined according to the similarity. Face recognition and unlocking are therefore based on the three-dimensional feature point information of the user's face 3D model, which improves their accuracy; and because a face 3D model is difficult to copy, the risk that sensitive information on the terminal device is stolen is reduced and the security of the terminal device is improved.
Fig. 3 is a flowchart illustrating another method for unlocking a terminal device according to an embodiment of the present invention. As shown in fig. 3, based on the embodiment shown in fig. 1, step 103 may specifically include:
and S1031, analyzing the feature point information of the face 3D model of the user, and acquiring the parameter information of each organ and the position information among the organs in the face 3D model of the user.
In this embodiment, the terminal device unlocking apparatus may analyze feature point information of the 3D model of the user's face, determine parameter information such as shapes and sizes of organs, for example, eyes, eyebrows, a nose, ears, and a mouth, in the 3D model of the user's face, and information such as distances and orientations between the organs, and determine features of the 3D model of the user's face according to the information.
S1032, analyzing the characteristic point information of the standard human face 3D model, and acquiring parameter information of each organ and position information among the organs in the standard human face 3D model.
S1033, comparing the parameter information of each organ and the position information among the organs in the human face 3D model of the user with the parameter information of each organ and the position information among the organs in the standard human face 3D model, and acquiring the similarity between the human face 3D model of the user and the standard human face 3D model.
For example, the terminal device unlocking apparatus may determine a difference between the inter-ocular distance in the face 3D model of the user and the inter-ocular distance in the standard face 3D model; and determining the difference value between the mouth size in the human face 3D model of the user and the mouth size in the standard human face 3D model, and the like, and calculating and determining the similarity between the human face 3D model of the user and the standard human face 3D model according to the difference values.
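One way to turn such per-organ differences into a single similarity score is to average the relative deviations of the measured organ parameters from the standard model. The parameter names and the scoring formula below are illustrative assumptions, not the patent's prescribed computation.

```python
# Map per-organ measurement differences (e.g. inter-ocular distance, mouth
# width) to a similarity in [0, 1]: one minus the mean relative deviation
# from the standard model, floored at zero. Formula and names are illustrative.
def organ_similarity(user_params, standard_params):
    deviations = [abs(user_params[name] - value) / value
                  for name, value in standard_params.items()]
    return max(0.0, 1.0 - sum(deviations) / len(deviations))
```

Identical measurements score 1.0; a 10% deviation in one of two parameters, for instance, lowers the score to 0.95, which the unlock step then compares against the preset threshold.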
According to the terminal device unlocking method, a structured light device projects onto the user to obtain a 3D model of the user's face; the model is analyzed and its feature point information is extracted; from that information, the parameter information of each organ and the positional information among the organs in the user's face 3D model are obtained, and likewise for the pre-stored standard face 3D model; the two sets of organ parameters and positions are compared to obtain the similarity between the two models; and whether to unlock the terminal device is determined according to the similarity. Face recognition and unlocking are therefore based on three-dimensional feature point information, which improves their accuracy; and because a face 3D model is difficult to copy, the risk that sensitive information on the terminal device is stolen is reduced and the security of the terminal device is improved.
Fig. 4 is a flowchart illustrating another method for unlocking a terminal device according to an embodiment of the present invention. As shown in fig. 4, based on the embodiment shown in fig. 1, the method may further include:
and S105, projecting the terminal equipment user by adopting the structured light equipment to obtain a standard human face 3D model of the terminal equipment user.
The process of the terminal device unlocking apparatus to execute step 105 is similar to step 101 in the embodiment shown in fig. 1, and reference may be made to the process of acquiring the 3D model of the user's face in step 101.
And S106, analyzing the standard human face 3D model, and extracting feature point information in the standard human face 3D model.
And S107, storing feature point information in the standard human face 3D model.
In this embodiment, while the terminal device is unlocked, it may identify a user within its projection range as the device owner through a pre-agreed procedure, for example entry of a specific character string or password. It then projects structured light onto that user to obtain a standard face 3D model of the owner, and extracts and stores the model's feature point information for subsequent comparison, which improves the security and accuracy of unlocking the terminal device.
According to the terminal device unlocking method, a structured light device first projects onto the device owner to obtain a standard face 3D model, whose feature point information is extracted and stored. At unlock time, the structured light device projects onto the current user to obtain a face 3D model; its feature point information is extracted and compared with the stored standard feature point information to obtain the similarity between the two models; and whether to unlock the terminal device is determined according to the similarity. Face recognition and unlocking are therefore based on three-dimensional feature point information, which improves their accuracy; and because a face 3D model is difficult to copy, the risk that sensitive information on the terminal device is stolen is reduced and the security of the terminal device is improved.
Fig. 5 is a schematic structural diagram of an unlocking device for a terminal device according to an embodiment of the present invention. As shown in fig. 5, the terminal device unlocking apparatus includes:
a first projection module 51, a first extraction module 52, a comparison module 53 and an unlocking module 54.
The first projection module 51 is configured to project a user by using a structured light device to obtain a 3D model of a face of the user;
a first extraction module 52, configured to analyze a 3D face model of the user, and extract feature point information in the 3D face model;
a comparison module 53, configured to compare the feature point information with feature point information of a pre-stored standard human face 3D model, and obtain a similarity between the 3D face model of the user and the standard human face 3D model;
and the unlocking module 54 is configured to determine whether to unlock the terminal device according to the similarity.
The unlocking apparatus for the terminal device provided by this embodiment may specifically be hardware or software installed on the terminal device. The terminal device may be a smartphone, a tablet computer, an iPad, or the like.
In this embodiment, the terminal device unlocking apparatus may extract feature point information of only partial regions of the face, for example the regions of the eyes, eyebrows, nose, ears, and mouth, and perform the comparison based on this information. This reduces the number of feature points to be compared and the amount of computation, and thereby increases the speed of face recognition and the speed and efficiency of unlocking the terminal device.
In this embodiment, in an implementation manner, the terminal device unlocking apparatus may compare feature point information of all regions in the face 3D model of the user with feature point information of all regions in a pre-stored standard face 3D model, and obtain a similarity between the face 3D model of the user and the standard face 3D model. In another embodiment, the terminal device unlocking apparatus may compare the feature point information of the partial region in the user's face 3D model with the pre-stored feature point information of the partial region in the standard face 3D model, and obtain the similarity between the user's face 3D model and the standard face 3D model. The partial regions include, for example, the region of the eyes, the region of the eyebrows, the region of the nose, the region of the ears, the region of the mouth, and the like.
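The region-by-region comparison described above can be sketched as follows. The region names and the distance-based similarity measure are illustrative assumptions; the embodiment does not prescribe a particular metric for comparing feature points:

```python
import numpy as np

# Hypothetical region names; the embodiment lists eyes, eyebrows, nose,
# ears, and mouth as example partial regions.
REGIONS = ["eyes", "eyebrows", "nose", "ears", "mouth"]

def region_similarity(user_model: dict, standard_model: dict,
                      regions=REGIONS) -> float:
    """Compare feature points region by region and average the per-region
    similarities. Each model maps a region name to an (N, 3) array of 3D
    feature point coordinates with a known correspondence."""
    scores = []
    for r in regions:
        u = np.asarray(user_model[r], dtype=float)
        s = np.asarray(standard_model[r], dtype=float)
        # Mean Euclidean distance between corresponding feature points,
        # mapped into a similarity in (0, 1]; identical regions score 1.0.
        d = np.linalg.norm(u - s, axis=1).mean()
        scores.append(1.0 / (1.0 + d))
    return float(np.mean(scores))
```

Comparing only a subset of `REGIONS` gives the reduced-computation variant described above.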
In this embodiment, specifically, the terminal device unlocking apparatus may unlock the terminal device when the similarity between the face 3D model of the user and the standard face 3D model is greater than or equal to a preset threshold, and not unlock the terminal device when the similarity is smaller than the preset threshold.
Based on fig. 5, fig. 6 is a schematic structural diagram of another unlocking device for terminal equipment according to an embodiment of the present invention. As shown in fig. 6, the first projection module 51 includes: a projection unit 511, an image capturing unit 512, and a calculation unit 513.
The projection unit 511 is configured to perform structured light projection on the user by using the structured light device;
The image capturing unit 512 is configured to capture an image of the structured light projection of the user by using a camera, and obtain a face depth image of the user;
and a calculating unit 513, configured to calculate and obtain a 3D model of the face of the user by combining the face depth image of the user and the positional relationship between the structured light device and the camera.
In this embodiment, the terminal device unlocking apparatus may obtain the position of the user in advance, and adjust the projection angle and the projection range of the structured light device according to the position of the user, so as to bring the position of the user into the projection range; or the terminal equipment unlocking device can provide the projection range to the user in advance so that the user can move into the projection range.
In this embodiment, the terminal device unlocking apparatus may call the structured light device to perform structured light projection on the user, and call the camera to shoot the structured light projection within the projection range, so as to obtain a face depth image of the user, and further obtain a face 3D model of the user.
The terminal device unlocking apparatus may acquire the position of the user as follows: it calls the structured light device and adjusts its projection angle and projection range to acquire depth images of all surrounding objects; it analyzes these depth images and acquires the feature point information in each of them; and based on this feature point information it identifies the depth image of a human body, thereby determining the position of the human body and of the human face.
According to the terminal device unlocking apparatus, the structured light device projects onto the user to obtain the user's human face 3D model; the model is analyzed and its feature point information is extracted; this information is compared with the pre-stored feature point information of the standard human face 3D model to obtain the similarity between the two models; and whether to unlock the terminal device is determined according to the similarity. Face recognition and unlocking are thus performed on the three-dimensional feature point information of the user's face 3D model, which improves the accuracy of both. Because a face 3D model is difficult to copy, the possibility that sensitive information on the terminal device is stolen is reduced, and the security of the terminal device is improved.
Based on fig. 5, fig. 7 is a schematic structural diagram of another unlocking device for terminal equipment according to an embodiment of the present invention. As shown in fig. 7, the alignment module 53 includes: a first analyzing unit 531, a second analyzing unit 532 and a comparing unit 533.
The first analysis unit 531 is configured to analyze feature point information of the face 3D model of the user, and obtain parameter information of each organ and position information between the organs in the face 3D model of the user;
a second analysis unit 532, configured to analyze the feature point information of the standard face 3D model, and obtain parameter information of each organ and position information between the organs in the standard face 3D model;
a comparing unit 533, configured to compare the parameter information of each organ and the position information between the organs in the face 3D model of the user with the parameter information of each organ and the position information between the organs in the standard face 3D model, and obtain a similarity between the face 3D model of the user and the standard face 3D model.
In this embodiment, the terminal device unlocking apparatus may analyze feature point information of the 3D model of the user's face, determine parameter information such as shapes and sizes of organs, for example, eyes, eyebrows, a nose, ears, and a mouth, in the 3D model of the user's face, and information such as distances and orientations between the organs, and determine features of the 3D model of the user's face according to the information.
For example, the terminal device unlocking apparatus may determine a difference between the inter-ocular distance in the face 3D model of the user and the inter-ocular distance in the standard face 3D model; and determining the difference value between the mouth size in the human face 3D model of the user and the mouth size in the standard human face 3D model, and the like, and calculating and determining the similarity between the human face 3D model of the user and the standard human face 3D model according to the difference values.
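The organ-level comparison just described can be sketched by combining the differences of a few hypothetical organ parameters into one score. The parameter names, units, and the `tolerance` normalisation constant are assumptions for illustration only:

```python
def organ_similarity(user: dict, standard: dict, tolerance: float = 10.0) -> float:
    """Combine absolute differences of organ parameters (e.g. the
    inter-ocular distance and the mouth width, hypothetically in
    millimetres) into a single similarity score in [0, 1].

    `tolerance` is an assumed scale: a mean difference of `tolerance`
    or more yields similarity 0.0, a mean difference of 0 yields 1.0."""
    diffs = [abs(user[k] - standard[k]) for k in standard]
    mean_diff = sum(diffs) / len(diffs)
    return max(0.0, 1.0 - mean_diff / tolerance)
```

Position information between organs (distances, orientations) could enter the same formula as additional keys of the two dictionaries.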
According to the terminal device unlocking apparatus, the structured light device projects onto the user to obtain the user's human face 3D model; the model is analyzed and its feature point information is extracted; this feature point information is analyzed to obtain the parameter information of each organ and the position information among the organs in the user's model, and the feature point information of the standard human face 3D model is analyzed in the same way; the two sets of organ parameter and position information are compared to obtain the similarity between the user's model and the standard model; and whether to unlock the terminal device is determined according to the similarity. Face recognition and unlocking are thus performed on the three-dimensional feature point information of the user's face 3D model, which improves the accuracy of both. Because a face 3D model is difficult to copy, the possibility that sensitive information on the terminal device is stolen is reduced, and the security of the terminal device is improved.
Based on fig. 5, fig. 8 is a schematic structural diagram of another unlocking device for terminal equipment according to an embodiment of the present invention. As shown in fig. 8, the apparatus further comprises: a second projection module 55, a second extraction module 56 and a storage module 57.
The second projection module 55 is configured to project the structured light device to the terminal device user to obtain a standard human face 3D model of the terminal device user;
a second extraction module 56, configured to analyze the standard face 3D model, and extract feature point information in the standard face 3D model;
and the storage module 57 is configured to store the feature point information in the standard face 3D model.
In this embodiment, when the terminal device is in an unlocked state, it may identify a user located within its projection range as the terminal device user through a method agreed upon in advance, for example by entering a specific character string or a specific password. The terminal device then projects structured light onto that user to obtain the standard human face 3D model of the terminal device user, and extracts and stores the feature point information of this model for subsequent comparison, which improves the security and accuracy of unlocking the terminal device.
According to the terminal device unlocking apparatus, the structured light device first projects onto the terminal device user to obtain a standard human face 3D model of that user, whose feature point information is then extracted and stored. When unlocking is later requested, the structured light device projects onto the current user to obtain that user's human face 3D model; the model is analyzed and its feature point information is extracted; this information is compared with the pre-stored feature point information of the standard human face 3D model to obtain the similarity between the two models; and whether to unlock the terminal device is determined according to the similarity. Face recognition and unlocking are thus performed on the three-dimensional feature point information of the user's face 3D model, which improves the accuracy of both. Because a face 3D model is difficult to copy, the possibility that sensitive information on the terminal device is stolen is reduced, and the security of the terminal device is improved.
The embodiment of the invention also provides the terminal equipment. The terminal device includes therein an Image Processing circuit, which may be implemented by hardware and/or software components, and may include various Processing units defining an ISP (Image Signal Processing) pipeline. FIG. 9 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 9, for convenience of explanation, only aspects of the image processing technique related to the embodiment of the present invention are shown.
As shown in fig. 9, the image processing circuit 900 includes an imaging device 910, an ISP processor 930, and control logic 940. The imaging device 910 may include a camera with one or more lenses 912, an image sensor 914, and a structured light projector 916. The structured light projector 916 projects the structured light to the object to be measured. The structured light pattern may be a laser stripe, a gray code, a sinusoidal stripe, or a randomly arranged speckle pattern. The image sensor 914 captures a structured light image projected onto the object to be measured and transmits the structured light image to the ISP processor 930, and the ISP processor 930 demodulates the structured light image to obtain depth information of the object to be measured. At the same time, the image sensor 914 may also capture color information of the object under test. Of course, the structured light image and the color information of the measured object may be captured by the two image sensors 914, respectively.
Taking speckle structured light as an example, the ISP processor 930 demodulates the structured light image as follows: it acquires a speckle image of the measured object from the structured light image, performs image data calculation on this speckle image and a reference speckle image according to a predetermined algorithm, and obtains the moving distance of each scattered spot of the speckle image on the measured object relative to the corresponding reference scattered spot in the reference speckle image. The depth value of each scattered spot of the speckle image is then obtained by triangulation, and the depth information of the measured object is obtained from these depth values.
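The triangulation step can be sketched for a single spot. The reference-plane formula below is a common similar-triangles relation for speckle systems and is an assumption; the embodiment only states that triangulation is used, without giving the formula:

```python
def spot_depth(shift_px: float, focal_px: float, baseline_mm: float,
               ref_depth_mm: float) -> float:
    """Depth of one speckle spot from its measured shift relative to the
    reference speckle image, assuming a reference plane at ref_depth_mm:

        Z = Z0 / (1 + Z0 * shift / (f * b))

    where Z0 is the reference-plane depth, f the focal length in pixels,
    b the projector-camera baseline, and shift the spot displacement in
    pixels (positive toward the camera). Illustrative only."""
    return ref_depth_mm / (1.0 + ref_depth_mm * shift_px / (focal_px * baseline_mm))
```

A spot with zero shift lies on the reference plane; larger shifts correspond to points closer to the camera under this sign convention.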
Of course, the depth information may also be acquired by a binocular vision method or a time-of-flight (TOF) method; the embodiment is not limited in this respect, and any method by which the depth information of the measured object can be acquired or calculated falls within its scope.
After ISP processor 930 receives the color information of the object to be measured captured by image sensor 914, it may process the image data corresponding to that color information. ISP processor 930 analyzes the image data to obtain image statistics that may be used to determine one or more control parameters of imaging device 910 and/or of the image processing. Image sensor 914 may include an array of color filters (e.g., Bayer filters); it may acquire the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by ISP processor 930.
ISP processor 930 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and ISP processor 930 may perform one or more image processing operations on the raw image data, collecting image statistics about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
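As a toy illustration of handling the several raw bit depths mentioned above, a hypothetical normalisation step might scale every pixel to a common range before further processing. This is not the ISP's actual operation, which the embodiment does not specify:

```python
def normalize_raw(pixel: int, bit_depth: int) -> float:
    """Scale a raw pixel value of the given bit depth (8, 10, 12, or 14
    bits, as in the embodiment) to [0.0, 1.0] so that later stages can
    operate at one uniform precision. Purely illustrative."""
    if not 0 <= pixel < (1 << bit_depth):
        raise ValueError("pixel value out of range for bit depth")
    return pixel / ((1 << bit_depth) - 1)
```

This also shows why the embodiment notes that operations may run at the same or different bit-depth precision: mixed-depth inputs need a common scale before being combined.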
ISP processor 930 may also receive pixel data from image memory 920. The image memory 920 may be a portion of a memory device, a storage device, or a separate dedicated memory within an electronic device, and may include a DMA (direct memory access) feature.
Upon receiving the raw image data, ISP processor 930 may perform one or more image processing operations.
After the ISP processor 930 acquires the color information and the depth information of the object to be measured, they may be fused to obtain a three-dimensional image. The feature of the corresponding object to be measured can be extracted by at least one of an appearance contour extraction method or a contour feature extraction method. For example, the features of the object to be measured are extracted by methods such as an active shape model method ASM, an active appearance model method AAM, a principal component analysis method PCA, and a discrete cosine transform method DCT, which are not limited herein. And then the characteristics of the measured object extracted from the depth information and the characteristics of the measured object extracted from the color information are subjected to registration and characteristic fusion processing. The fusion processing may be a process of directly combining the features extracted from the depth information and the color information, a process of combining the same features in different images after weight setting, or a process of generating a three-dimensional image based on the features after fusion in other fusion modes.
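Of the fusion modes listed above, the weight-setting variant can be sketched as a weighted concatenation of the registered feature vectors. The weight values are illustrative assumptions; the embodiment also allows direct combination or other fusion modes:

```python
import numpy as np

def fuse_features(depth_feat: np.ndarray, color_feat: np.ndarray,
                  w_depth: float = 0.6, w_color: float = 0.4) -> np.ndarray:
    """Fuse features of the measured object extracted from the depth
    information with features extracted from the color information by
    weighting each vector and concatenating the results. Assumes the two
    feature vectors have already been registered to each other."""
    return np.concatenate([w_depth * np.asarray(depth_feat, dtype=float),
                           w_color * np.asarray(color_feat, dtype=float)])
```

A three-dimensional image would then be generated from the fused feature vector, as the passage above describes.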
The image data for the three-dimensional image may be sent to an image memory 920 for additional processing before being displayed. ISP processor 930 receives the processed data from image memory 920 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The image data for the three-dimensional image may be output to a display 960 for viewing by a user and/or for further processing by a graphics processing unit (GPU). Further, the output of ISP processor 930 may also be sent to image memory 920, and display 960 may read the image data from image memory 920. In one embodiment, image memory 920 may be configured to implement one or more frame buffers. Further, the output of the ISP processor 930 may be transmitted to the encoder/decoder 950 to encode/decode the image data. The encoded image data may be saved, and decompressed before being displayed on the display 960. The encoder/decoder 950 may be implemented by a CPU, a GPU, or a coprocessor.
The image statistics determined by ISP processor 930 may be sent to control logic 940 unit. Control logic 940 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters of imaging device 910 based on the received image statistics.
The image processing technique of fig. 9 may be used to implement the following steps of the terminal device unlocking method:
projecting to a user by adopting structured light equipment to obtain a human face 3D model of the user;
analyzing the face 3D model of the user, and extracting feature point information in the face 3D model;
comparing the characteristic point information with characteristic point information of a pre-stored standard human face 3D model to obtain the similarity between the human face 3D model of the user and the standard human face 3D model;
and determining whether to unlock the terminal equipment according to the similarity.
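The four steps above can be sketched end to end. All callables and the threshold value are hypothetical stand-ins for the modules of Fig. 5 and the preset threshold, which the embodiment does not fix:

```python
def unlock_flow(capture_face_model, extract_features, stored_features,
                compare, threshold: float = 0.9) -> bool:
    """End-to-end sketch of the claimed method: build the user's face 3D
    model by structured light projection, extract its feature point
    information, compare it with the pre-stored standard model's feature
    point information, and unlock only when the similarity reaches the
    preset threshold."""
    model = capture_face_model()                     # projection module 51
    features = extract_features(model)               # extraction module 52
    similarity = compare(features, stored_features)  # comparison module 53
    return similarity >= threshold                   # unlocking module 54
```

Each stand-in corresponds to one module of the apparatus embodiment, which is why the method and apparatus claims mirror each other step for step.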
In order to implement the foregoing embodiments, the present invention also proposes a non-transitory computer-readable storage medium having stored thereon a computer program, which when executed by a processor is capable of implementing the terminal device unlocking method as described in the foregoing embodiments.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (6)

1. A method for unlocking a terminal device is characterized by comprising the following steps:
projecting to a user by adopting structured light equipment to obtain a human face 3D model of the user;
analyzing the face 3D model of the user, and extracting feature point information in the face 3D model;
analyzing the feature point information of the human face 3D model of the user to acquire the parameter information of each organ and the position information among the organs in the human face 3D model of the user;
analyzing feature point information of a standard human face 3D model to obtain parameter information of each organ and position information among the organs in the standard human face 3D model;
comparing the parameter information of each organ and the position information among the organs in the human face 3D model of the user with the parameter information of each organ and the position information among the organs in the standard human face 3D model to obtain the similarity between the human face 3D model of the user and the standard human face 3D model;
if the similarity is greater than or equal to a preset threshold value, unlocking the terminal equipment;
if the similarity is smaller than a preset threshold value, the terminal equipment is not unlocked;
adopt structured light equipment to carry out the projection to the user, obtain user's people's face 3D model, include:
structured light projection is carried out on a user by adopting structured light equipment, the structured light mode comprises a speckle structured light mode, and the speckle structured light mode is that a light beam emitted by the structured light equipment is projected on a measured object to generate a non-uniform light spot array;
adopting a camera to shoot the structured light projection of the user, demodulating the structured light image, acquiring a speckle image of the measured object from the structured light image, performing image data calculation on the speckle image of the measured object and the reference speckle image according to a predetermined algorithm, and acquiring the moving distance of each scattered spot of the speckle image on the measured object relative to a reference scattered spot in the reference speckle image; converting and calculating by using a trigonometry method to obtain depth values of scattered spots of the speckle image, and obtaining depth information of a measured object according to the depth values so as to obtain a face depth image of the user;
and calculating and acquiring a human face 3D model of the user by combining the human face depth image of the user and the position relation between the structured light equipment and the camera.
2. The method of claim 1, wherein before the projecting to the user with the structured light device and obtaining the 3D model of the face of the user, further comprising:
projecting towards a terminal equipment user by adopting structured light equipment to obtain a standard human face 3D model of the terminal equipment user;
analyzing the standard human face 3D model, and extracting feature point information in the standard human face 3D model;
and storing the characteristic point information in the standard human face 3D model.
3. An unlocking device for a terminal device, comprising:
the first projection module is used for projecting to a user by adopting structured light equipment to obtain a human face 3D model of the user;
the first extraction module is used for analyzing the human face 3D model of the user and extracting feature point information in the human face 3D model;
the comparison module is used for analyzing the characteristic point information of the human face 3D model of the user to acquire the parameter information of each organ and the position information among the organs in the human face 3D model of the user;
analyzing feature point information of a standard human face 3D model to obtain parameter information of each organ and position information among the organs in the standard human face 3D model;
comparing the parameter information of each organ and the position information among the organs in the human face 3D model of the user with the parameter information of each organ and the position information among the organs in the standard human face 3D model to obtain the similarity between the human face 3D model of the user and the standard human face 3D model; the unlocking module is used for unlocking the terminal equipment when the similarity is greater than or equal to a preset threshold value, and not unlocking the terminal equipment when the similarity is smaller than the preset threshold value;
the first projection module includes:
the projection unit is used for performing structured light projection on the user by adopting the structured light equipment;
the camera shooting unit is used for shooting the structured light projection of the user by adopting a camera, demodulating the structured light image, acquiring a speckle image of the measured object from the structured light image, calculating the image data of the speckle image of the measured object and the reference speckle image according to a preset algorithm, and acquiring the moving distance of each scattered spot of the speckle image on the measured object relative to the reference scattered spot in the reference speckle image; the method comprises the steps of utilizing a trigonometry conversion calculation to obtain depth values of scattered spots of a speckle image, and obtaining depth information of a measured object according to the depth values so as to obtain a human face depth image of a user, wherein a structured light mode comprises a speckle structured light mode, and the speckle structured light mode is used for generating a non-uniform light spot array by projecting light beams emitted by structured light equipment onto a measured object;
and the calculating unit is used for calculating and acquiring a human face 3D model of the user by combining the human face depth image of the user and the position relation between the structured light equipment and the camera.
4. The apparatus of claim 3, further comprising:
the second projection module is used for projecting to the terminal equipment user by adopting the structured light equipment to obtain a standard human face 3D model of the terminal equipment user;
the second extraction module is used for analyzing the standard human face 3D model and extracting feature point information in the standard human face 3D model;
and the storage module is used for storing the characteristic point information in the standard human face 3D model.
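The enroll-and-compare flow of claims 3 and 4 — storing the feature point information of the standard human face 3D model, then unlocking only when the similarity reaches the preset threshold — can be sketched as below. Cosine similarity stands in for the comparison the patent leaves unspecified, and the threshold value is purely illustrative.

```python
import numpy as np

PRESET_THRESHOLD = 0.95  # illustrative value; the patent does not fix one


def enroll(standard_feature_points):
    """Store the feature point information extracted from the standard
    human face 3D model (flattened to one vector, an assumption)."""
    return np.asarray(standard_feature_points, dtype=float).ravel()


def should_unlock(stored, probe_feature_points, threshold=PRESET_THRESHOLD):
    """Compare the user's feature point information with the stored
    standard model; unlock only when similarity >= threshold."""
    p = np.asarray(probe_feature_points, dtype=float).ravel()
    sim = float(stored @ p / (np.linalg.norm(stored) * np.linalg.norm(p)))
    return sim >= threshold
```

In this sketch, a probe identical to the enrolled model scores similarity 1.0 and unlocks, while an unrelated probe falls below the threshold and leaves the device locked.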
5. A terminal device, comprising: a housing, and a processor and a memory located in the housing, wherein the processor, by reading executable program code stored in the memory, runs a program corresponding to the executable program code to implement the terminal device unlocking method as claimed in claim 1 or 2.
6. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the terminal device unlocking method as recited in claim 1 or 2.
CN201710677498.9A 2017-08-09 2017-08-09 Terminal equipment unlocking method and device and terminal equipment Active CN107563304B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710677498.9A CN107563304B (en) 2017-08-09 2017-08-09 Terminal equipment unlocking method and device and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710677498.9A CN107563304B (en) 2017-08-09 2017-08-09 Terminal equipment unlocking method and device and terminal equipment

Publications (2)

Publication Number Publication Date
CN107563304A CN107563304A (en) 2018-01-09
CN107563304B true CN107563304B (en) 2020-10-16

Family

ID=60974023

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710677498.9A Active CN107563304B (en) 2017-08-09 2017-08-09 Terminal equipment unlocking method and device and terminal equipment

Country Status (1)

Country Link
CN (1) CN107563304B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108322605A (en) * 2018-01-30 2018-07-24 上海摩软通讯技术有限公司 Intelligent terminal and its face unlocking method and system
CN108445643B (en) * 2018-03-12 2021-05-14 Oppo广东移动通信有限公司 Structured light projection module, detection method and device thereof, image acquisition structure and electronic device
CN108513706A (en) * 2018-04-12 2018-09-07 深圳阜时科技有限公司 Electronic equipment and its face recognition method
WO2019200573A1 (en) * 2018-04-18 2019-10-24 深圳阜时科技有限公司 Identity authentication method, identity authentication device, and electronic apparatus
CN108513662A (en) * 2018-04-18 2018-09-07 深圳阜时科技有限公司 Identification authentication method, identification authentication device and electronic equipment
CN110472459B (en) * 2018-05-11 2022-12-27 华为技术有限公司 Method and device for extracting feature points
CN110502953A (en) * 2018-05-16 2019-11-26 杭州海康威视数字技术股份有限公司 A kind of iconic model comparison method and device
CN109284591B (en) * 2018-08-17 2022-02-08 北京小米移动软件有限公司 Face unlocking method and device
CN109670407A (en) * 2018-11-26 2019-04-23 维沃移动通信有限公司 A kind of face identification method and mobile terminal
CN109800699A (en) * 2019-01-15 2019-05-24 珠海格力电器股份有限公司 Image-recognizing method, system and device
CN109919121B (en) * 2019-03-15 2021-04-06 百度在线网络技术(北京)有限公司 Human body model projection method and device, electronic equipment and storage medium
US10853631B2 (en) 2019-07-24 2020-12-01 Advanced New Technologies Co., Ltd. Face verification method and apparatus, server and readable storage medium
CN110532746B (en) * 2019-07-24 2021-07-23 创新先进技术有限公司 Face checking method, device, server and readable storage medium
CN112394524A (en) * 2019-08-19 2021-02-23 上海鲲游光电科技有限公司 Dodging element, manufacturing method and system thereof and electronic device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103971405A (en) * 2014-05-06 2014-08-06 重庆大学 Method for three-dimensional reconstruction of laser speckle structured light and depth information
CN105488371A (en) * 2014-09-19 2016-04-13 中兴通讯股份有限公司 Face recognition method and device
CN106991377A (en) * 2017-03-09 2017-07-28 广东欧珀移动通信有限公司 With reference to the face identification method, face identification device and electronic installation of depth information

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9396382B2 (en) * 2012-08-17 2016-07-19 Flashscan3D, Llc System and method for a biometric image sensor with spoofing detection
CN107368730B (en) * 2017-07-31 2020-03-06 Oppo广东移动通信有限公司 Unlocking verification method and device
CN107423716A (en) * 2017-07-31 2017-12-01 广东欧珀移动通信有限公司 Face method for monitoring state and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103971405A (en) * 2014-05-06 2014-08-06 重庆大学 Method for three-dimensional reconstruction of laser speckle structured light and depth information
CN105488371A (en) * 2014-09-19 2016-04-13 中兴通讯股份有限公司 Face recognition method and device
CN106991377A (en) * 2017-03-09 2017-07-28 广东欧珀移动通信有限公司 With reference to the face identification method, face identification device and electronic installation of depth information

Also Published As

Publication number Publication date
CN107563304A (en) 2018-01-09

Similar Documents

Publication Publication Date Title
CN107563304B (en) Terminal equipment unlocking method and device and terminal equipment
CN107480613B (en) Face recognition method and device, mobile terminal and computer readable storage medium
CN107368730B (en) Unlocking verification method and device
CN108447017B (en) Face virtual face-lifting method and device
CN107025635B (en) Depth-of-field-based image saturation processing method and device and electronic device
CN107564050B (en) Control method and device based on structured light and terminal equipment
CN107491744B (en) Human body identity recognition method and device, mobile terminal and storage medium
CN108564540B (en) Image processing method and device for removing lens reflection in image and terminal equipment
US8917317B1 (en) System and method for camera calibration
CN107517346B (en) Photographing method and device based on structured light and mobile device
CN105740778B (en) Improved three-dimensional human face in-vivo detection method and device
CN107452034B (en) Image processing method and device
CN107392874B (en) Beauty treatment method and device and mobile equipment
CN107592449B (en) Three-dimensional model establishing method and device and mobile terminal
KR102476016B1 (en) Apparatus and method for determining position of eyes
US11138740B2 (en) Image processing methods, image processing apparatuses, and computer-readable storage medium
CN107491675B (en) Information security processing method and device and terminal
KR101444538B1 (en) 3d face recognition system and method for face recognition of thterof
CN107480615B (en) Beauty treatment method and device and mobile equipment
CN107610171B (en) Image processing method and device
KR20170092533A (en) A face pose rectification method and apparatus
CN107483428A (en) Auth method, device and terminal device
CN107659985B (en) Method and device for reducing power consumption of mobile terminal, storage medium and mobile terminal
CN107705278B (en) Dynamic effect adding method and terminal equipment
CN107493452B (en) Video picture processing method and device and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18, Wusha Beach Road, Chang'an Town, Dongguan, Guangdong Province 523860

Applicant after: OPPO Guangdong Mobile Communications Co., Ltd.

Address before: No. 18, Wusha Beach Road, Chang'an Town, Dongguan, Guangdong Province 523860

Applicant before: Guangdong OPPO Mobile Communications Co., Ltd.

GR01 Patent grant