CN113610051B - Face ranging method, equipment and computer readable medium based on face registration - Google Patents
- Publication number
- CN113610051B (granted publication of application CN202110986913.5A)
- Authority
- CN
- China
- Prior art keywords
- face
- feature points
- local coordinates
- camera
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30268—Vehicle interior
Abstract
The application provides a face ranging method, a device, and a computer-readable medium based on face registration. The method comprises the following steps: acquiring the registered local coordinates of a user's face feature points based on the user's face registration information, the registered local coordinates being coordinate values in the user's face coordinate system; acquiring a face photo of the user taken by a camera; selecting, from the face feature points, those with a normal distribution based on the registered local coordinates and the face photo, and determining the actual local coordinates of the normally distributed face feature points, the actual local coordinates likewise being coordinate values in the user's face coordinate system; and determining the camera coordinates of the normally distributed face feature points from their actual local coordinates, the camera coordinates being used to calculate the distance between the user's face and the camera. The method achieves high-precision face ranging with low computational cost, low hardware cost, and simple imaging equipment, and improves the accuracy of face ranging.
Description
Technical Field
The application relates to the technical field of face recognition, and in particular to a face ranging method, device, and computer-readable medium based on face registration.
Background
Face ranging is used in many scenarios, such as in-cabin face testing and driver behavior monitoring. In many of these, for example vehicle-mounted face detection, the precise position of the face relative to the detection device must be measured accurately.
An existing face ranging method based on a single camera generally makes only a rough estimate of the face-to-camera distance from the proportions of the face and cannot measure longer distances, so the face detection algorithm is prone to misjudgment.
Another existing face ranging method uses stereo reconstruction with two cameras: 2D face feature points are identified by a face detection algorithm, and the positions of the face key points are obtained by combining them with the stereo 3D image. However, the dual-camera approach places high demands on the stability of the camera structure and on product consistency, and the two cameras must be strictly synchronized so that face images are captured at the same moment. The cameras must be calibrated after installation, but in a harsh vibration environment the structure can shift or rotate and must be recalibrated, and online calibration is expensive. For these reasons, few vehicle-mounted dual-camera solutions have reached mass production. In addition, dual cameras are costly and difficult to manufacture at scale, so they are hard to popularize at the present stage.
Therefore, how to perform high-precision face ranging with low computational cost, low hardware cost, and simple imaging equipment, and thereby improve the accuracy of face ranging, is a problem to be solved by those skilled in the art.
Disclosure of Invention
The application aims to provide a face ranging method, a device, and a computer-readable medium based on face registration that can perform high-precision face ranging with low computational cost, low hardware cost, and simple imaging equipment, and improve the accuracy of face ranging.
To solve the above technical problem, the application provides a face ranging method based on face registration, comprising the following steps: acquiring the registered local coordinates of the user's face feature points based on the user's face registration information, wherein the registered local coordinates are coordinate values in the user's face coordinate system; acquiring a face photo of the user taken by a camera; selecting face feature points with a normal distribution from the face feature points based on the registered local coordinates of the face feature points and the face photo, and determining the actual local coordinates of the normally distributed face feature points, wherein the actual local coordinates are coordinate values in the user's face coordinate system; and determining the camera coordinates of the normally distributed face feature points based on their actual local coordinates, wherein the camera coordinates are used to calculate the distance between the user's face and the camera.
In an embodiment of the present application, the step of selecting the face feature points with a normal distribution from the face feature points based on their registered local coordinates and the face photo, and determining the actual local coordinates of the normally distributed face feature points, is performed as follows: determining a rotation parameter and a translation parameter of the face relative to the camera based on the face photo and the registered local coordinates of key feature points among the face feature points; calculating the actual local coordinates of the face feature points based on the rotation parameter, the translation parameter, and the face photo; and selecting the face feature points with a normal distribution according to the actual local coordinates of the face feature points and their registered local coordinates.
In an embodiment of the present application, the step of determining the rotation parameter and the translation parameter of the face relative to the camera based on the face photo and the registered local coordinates of the key feature points among the face feature points is performed by:
lambda * [u, v, 1]^T = A * [R | T] * [X, Y, Z, 1]^T
wherein [X, Y, Z, 1]^T is the homogeneous registered local coordinate of the key feature point, R is the rotation parameter, T is the translation parameter ([R | T] being the 3x4 pose matrix), [u, v, 1]^T is the homogeneous pixel coordinate of the key feature point in the face photo, A is the pre-calibrated camera intrinsic matrix, and lambda is the projective scale factor.
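For illustration, the pinhole projection relation above can be exercised numerically. The sketch below uses made-up example values for the intrinsic matrix A and the pose (R, T); they are not calibration data from the patent.

```python
import numpy as np

# Made-up example intrinsics and pose (illustration only, not the patent's data).
A = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])   # fx, fy, cx, cy
R = np.eye(3)                            # rotation: face coordinate system -> camera
T = np.array([0.0, 0.0, 1.0])            # translation: face origin 1 m in front of camera

def project(p_face):
    """Apply lambda * [u, v, 1]^T = A * [R | T] * [X, Y, Z, 1]^T and divide out lambda."""
    p_cam = R @ p_face + T               # [R | T] acting on the homogeneous point
    uvw = A @ p_cam                      # lambda * [u, v, 1]^T
    return uvw[:2] / uvw[2]              # lambda is the point's depth in the camera frame

u, v = project(np.array([0.1, 0.2, 1.0]))   # -> (345.0, 290.0)
```

Given enough such 2D-3D correspondences, the same relation can be solved in the opposite direction to recover R and T, for example with a perspective-n-point (PnP) solver such as OpenCV's solvePnP.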
In an embodiment of the present application, the step of selecting the face feature points with a normal distribution from the face feature points based on their registered local coordinates and the face photo, and determining the actual local coordinates of the normally distributed face feature points, is performed as follows: determining a first rotation parameter and a first translation parameter of the face relative to the camera based on the face photo and the registered local coordinates of key feature points among the face feature points; calculating the actual local coordinates of the key feature points based on the first rotation parameter, the first translation parameter, and the face photo; selecting small-error key feature points from the key feature points based on their registered and actual local coordinates, where a small-error key feature point is one whose actual local coordinates differ from its registered local coordinates by no more than a threshold; determining a second rotation parameter and a second translation parameter of the face relative to the camera based on the face photo and the registered local coordinates of the small-error key feature points; calculating the actual local coordinates of the face feature points based on the second rotation parameter, the second translation parameter, and the face photo; and selecting the face feature points with a normal distribution according to the actual local coordinates of the face feature points and their registered local coordinates.
In an embodiment of the present application, the step of determining the first rotation parameter and the first translation parameter of the face relative to the camera based on the face photo and the registered local coordinates of the key feature points among the face feature points is performed by:
lambda * [u, v, 1]^T = A * [R1 | T1] * [X, Y, Z, 1]^T
wherein [X, Y, Z, 1]^T is the homogeneous registered local coordinate of the key feature point, R1 is the first rotation parameter, T1 is the first translation parameter, [u, v, 1]^T is the homogeneous pixel coordinate of the key feature point in the face photo, A is the pre-calibrated camera intrinsic matrix, and lambda is the projective scale factor;
the step of determining the second rotation parameter and the second translation parameter of the face relative to the camera based on the face photo and the registered local coordinates of the small-error key feature points is performed by:
lambda * [u', v', 1]^T = A * [R2 | T2] * [X', Y', Z', 1]^T
wherein [X', Y', Z', 1]^T is the homogeneous registered local coordinate of a small-error key feature point, R2 is the second rotation parameter, T2 is the second translation parameter, [u', v', 1]^T is the homogeneous pixel coordinate of the small-error key feature point in the face photo, A is the pre-calibrated camera intrinsic matrix, and lambda is the projective scale factor.
In an embodiment of the present application, the face registration information is registered by: sending a face-registration start prompt to the user; and displaying, at a preset position on the screen, an animation that attracts the user's attention, wherein the preset position guides the user to adjust the face into a pose suitable for the camera to take the face registration photo.
In an embodiment of the present application, the method is applied to an automobile cockpit, the user is a driver, and the camera for face registration includes a camera of a driver monitoring system and a camera of a passenger monitoring system.
In an embodiment of the present application, the method is applied to an automobile cockpit, the user is a driver, and the camera is a camera of a driver monitoring system.
In order to solve the technical problem, the application also provides a face ranging device based on face registration, comprising: a memory for storing instructions executable by the processor; and a processor for executing the instructions to implement the method as described above.
To solve the above technical problem, the present application also provides a computer readable medium storing computer program code which, when executed by a processor, implements a method as described above.
Compared with the prior art, the face ranging method based on face registration of the present application uses the face registration information to identify points with severe perspective distortion, screens out the face feature points with a normal distribution, and then performs face ranging on those points. This prevents inaccurately or even wrongly identified feature points from degrading the ranging precision and greatly improves the accuracy of face ranging. Moreover, the method requires little computational power and only simple imaging equipment, achieving high-precision face ranging at low cost.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the accompanying drawings:
fig. 1 is a schematic flow chart of a face ranging method based on face registration according to an embodiment of the application.
Fig. 2 is a schematic flow chart illustrating step 103 of fig. 1 according to an embodiment of the present application.
Fig. 3 is a schematic diagram illustrating a specific flow of step 103 of fig. 1 according to another embodiment of the present application.
Fig. 4 is a block diagram illustrating a face ranging apparatus based on face registration according to an embodiment of the present application.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are used in the description of the embodiments will be briefly described below. It is apparent that the drawings in the following description are only some examples or embodiments of the present application, and it is apparent to those of ordinary skill in the art that the present application may be applied to other similar situations according to the drawings without inventive effort. Unless otherwise apparent from the context of the language or otherwise specified, like reference numerals in the figures refer to like structures or operations.
As used in the specification and in the claims, the terms "a," "an," and/or "the" do not denote the singular and may include the plural unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included and do not constitute an exclusive list; a method or apparatus may also include other steps or elements.
The relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present application unless it is specifically stated otherwise. Meanwhile, it should be understood that the sizes of the respective parts shown in the drawings are not drawn in actual scale for convenience of description. Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but should be considered part of the specification where appropriate. In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of the exemplary embodiments may have different values. It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
A flowchart is used in the present application to describe the operations performed by a system according to embodiments of the present application. It should be understood that the operations are not necessarily performed precisely in the order shown. Rather, the various steps may be processed in reverse order or simultaneously, and other operations may be added to, or removed from, these processes.
The application provides a face ranging method based on face registration. In an embodiment of the present application, the method may be applied to an automobile cockpit, the user may be a driver, and the camera may be a camera of a driver monitoring system (DMS, Driver Monitoring System). In general, the DMS camera is relatively close to the driver and can acquire a relatively clear image. In one example, the resolution of the camera is greater than one megapixel (1,000,000 pixels) to ensure the quality of the face photo taken. Before the face ranging method based on face registration of the application is carried out, the user is required to register the face.
Fig. 1 is a schematic flow chart of a face ranging method based on face registration according to an embodiment of the application. As shown in fig. 1, a face ranging method based on face registration in this embodiment includes the following steps 101-104:
step 101, acquiring registration local coordinates of face feature points of a user based on face registration information of the user. Face registration refers to acquiring feature information of a face, such as a three-dimensional model, texture features, and the like. The registration local coordinates are coordinate values under the face coordinate system of the user, and refer to local coordinates of each face feature point obtained after the user finishes face registration relative to the face coordinate system. In one example, the face coordinate system uses the nose tip of the face as the origin of the coordinate system, the normal line of the plane formed by the two pupil centers and the chin center is the Z-axis direction, the connecting line direction of the two pupils is the X-axis, and the perpendicular line of the Z-axis and the X-axis is the Y-axis according to the right-hand rule.
In an embodiment of the present application, when the face ranging method based on face registration is applied to an automobile cockpit, a plurality of cameras may be used for face registration, including a camera of the driver monitoring system and a camera of a passenger monitoring system (OMS, Occupancy Monitoring System). The DMS camera and the OMS camera are each calibrated with respect to the vehicle body to obtain the intrinsic and extrinsic parameters (e.g., rotation and translation parameters) of the two cameras.
Before a driver starts driving, face registration is performed. During face registration, the vehicle camera system opens the DMS camera and the OMS camera simultaneously, and both capture a head photo of the driver for face registration. Because the DMS camera and the OMS camera are calibrated, the three-dimensional coordinates of the facial key points in the face coordinate system, relative to one of the cameras, can be solved by triangulation using the cameras' intrinsic and extrinsic parameters together with face recognition technology.
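The two-camera registration step above can be sketched with a minimal linear (DLT) triangulation of a single keypoint from two views. The camera matrices below are illustrative stand-ins, not real DMS/OMS calibrations.

```python
import numpy as np

# Illustrative shared intrinsics and two camera poses (not real calibration data).
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])  # second camera 20 cm to the side

X_true = np.array([0.05, 0.02, 1.0])                # ground-truth 3D keypoint (meters)

def project(P, X):
    """Project a 3D point with a 3x4 camera matrix and divide out the depth."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, x1, P2, x2):
    """Linear (DLT) triangulation of one point from two pixel observations."""
    M = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(M)          # null-space vector of the 4x4 system
    X = Vt[-1]
    return X[:3] / X[3]                  # de-homogenize

X_hat = triangulate(P1, project(P1, X_true), P2, project(P2, X_true))
```

In practice a library routine such as OpenCV's triangulatePoints performs the same computation for many points at once.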
In an embodiment of the present application, the face registration information may be registered as follows: a face-registration start prompt is sent to the user, and an animation that attracts the user's attention is then displayed at a preset position on the screen. The preset position guides a user at a specific location to adjust the face into a pose suitable for face registration shooting. That is, when a user at the specific location is guided to look at the preset position on the screen, the face photo taken by the camera is suitable for face registration. Since face registration involves multiple cameras, the face must be oriented in a certain direction for all cameras to capture the best registration pictures. With voice prompts alone, such as "left/right" or "forward/backward," the user is typically at a loss as to how to adjust the facial pose. In one example, the user sits at a given position and is prompted by voice to start face registration; a dynamic pattern is then displayed at a certain position on the car screen to draw the user's gaze there. Taking a circle as an example: as the circle on the screen gradually shrinks from large to small, the user watches the screen with attention focused on the circle, and the user's pose is thereby adjusted to the position best suited for shooting by the multiple cameras, achieving fast, frictionless face registration.
Step 102, acquiring a face photo of the user taken by a camera. In an embodiment of the present application, only one camera is required to take the face photo.
Step 103, selecting the face feature points with a normal distribution from the face feature points based on the registered local coordinates of the face feature points and the face photo, and determining the actual local coordinates of the normally distributed face feature points. The actual local coordinates are coordinate values in the user's face coordinate system, namely the local coordinates, relative to the face coordinate system, of each face feature point obtained from the face photo acquired in step 102.
In an embodiment of the present application, the step 103 of selecting the face feature points with a normal distribution from the face feature points based on their registered local coordinates and the face photo, and determining their actual local coordinates, may be performed as follows; as shown in fig. 2, it includes steps 201 to 203:
step 201, determining a rotation parameter and a translation parameter of the face relative to the camera based on the face photo and the registered local coordinates of key feature points in the face feature points. The key feature points may be preset face feature points with obvious partial features, for example, the two ends of eyebrows, the corners of eyes, the tip of nose, the corners of mouth, the center points of upper and lower lips, and the center point of chin may be set as key feature points. The key feature points have the characteristics of small coordinate errors, high recognition accuracy and the like in different photos.
In an embodiment of the present application, the step 201 of determining the rotation parameter and the translation parameter of the face relative to the camera based on the face photo and the registered local coordinates of the key feature points may be performed by:
lambda * [u, v, 1]^T = A * [R | T] * [X, Y, Z, 1]^T
wherein [X, Y, Z, 1]^T is the homogeneous registered local coordinate of a key feature point, R is the rotation parameter, T is the translation parameter, [u, v, 1]^T is the homogeneous pixel coordinate of the key feature point in the face photo, A is the pre-calibrated camera intrinsic matrix, and lambda is the projective scale factor.
Step 202, calculating the actual local coordinates of the face feature points based on the rotation parameter, the translation parameter and the face photo.
Step 203, selecting the face feature points with a normal distribution according to the actual local coordinates of the face feature points obtained in step 202 and their registered local coordinates. In one example, whether a face feature point is normally distributed may be determined by whether the error between its actual local coordinates and its registered local coordinates is within a threshold range.
In summary, in steps 201-203, the key feature points among the face feature points are used to screen for the normally distributed face feature points, so that face feature points with large errors can be removed. Especially when the face pose is strongly deflected, this prevents inaccurately or even wrongly identified feature points from degrading the face ranging accuracy.
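The screening in step 203 can be sketched as a simple threshold test on the per-point coordinate error. The threshold value and the coordinates below are assumed example values, not parameters from the patent.

```python
import numpy as np

def select_normal_points(registered, actual, threshold=0.005):
    """Return indices of feature points whose actual local coordinates agree
    with their registered local coordinates to within the threshold (meters).
    The default threshold is an assumed example value."""
    errors = np.linalg.norm(actual - registered, axis=1)   # per-point Euclidean error
    return np.nonzero(errors <= threshold)[0]

registered = np.array([[0.000,  0.000,  0.000],
                       [0.030,  0.030, -0.020],
                       [0.000, -0.070, -0.020]])
actual = registered + np.array([[0.001, 0.000, 0.000],    # small error: kept
                                [0.020, 0.000, 0.000],    # large error (distorted): dropped
                                [0.000, 0.002, 0.000]])   # small error: kept
idx = select_normal_points(registered, actual)             # -> indices [0, 2]
```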
In another embodiment of the present application, the step 103 of selecting the face feature points with a normal distribution from the face feature points based on their registered local coordinates and the face photo, and determining their actual local coordinates, may be performed as follows; as shown in fig. 3, it includes steps 301 to 306:
step 301, determining a first rotation parameter and a first translation parameter of the face relative to the camera based on the face photo and the registered local coordinates of the key feature points in the face feature points. The key feature points may be preset face feature points with obvious partial features, for example, the two ends of eyebrows, the corners of eyes, the tip of nose, the corners of mouth, the center points of upper and lower lips, and the center point of chin may be set as key feature points. The key feature points have the characteristics of small coordinate errors, high recognition accuracy and the like in different photos.
Step 302, calculating actual local coordinates of the key feature points based on the first rotation parameter, the first translation parameter and the face photo.
In an embodiment of the present application, the step 301 of determining the first rotation parameter and the first translation parameter of the face relative to the camera based on the face photo and the registered local coordinates of the key feature points may be performed by:
lambda * [u, v, 1]^T = A * [R1 | T1] * [X, Y, Z, 1]^T
wherein [X, Y, Z, 1]^T is the homogeneous registered local coordinate of a key feature point, R1 is the first rotation parameter, T1 is the first translation parameter, [u, v, 1]^T is the homogeneous pixel coordinate of the key feature point in the face photo, A is the pre-calibrated camera intrinsic matrix, and lambda is the projective scale factor.
Step 303, selecting the small-error key feature points from the key feature points based on their registered and actual local coordinates. In one example, whether a key feature point has a small error may be determined by whether the error between its actual local coordinates and its registered local coordinates is within a threshold range.
Step 304, determining a second rotation parameter and a second translation parameter of the face relative to the camera based on the face photo and the registered local coordinates of the small-error key feature points. Recomputing the rotation and translation parameters from only the small-error key feature points eliminates the influence of large-error key points and improves the accuracy of the rotation and translation parameters.
Step 305, calculating the actual local coordinates of the face feature points based on the second rotation parameter, the second translation parameter, and the face photo.
In an embodiment of the present application, the step 304 of determining the second rotation parameter and the second translation parameter of the face relative to the camera based on the face photo and the registered local coordinates of the small-error key feature points is performed by:
lambda * [u', v', 1]^T = A * [R2 | T2] * [X', Y', Z', 1]^T
wherein [X', Y', Z', 1]^T is the homogeneous registered local coordinate of a small-error key feature point, R2 is the second rotation parameter, T2 is the second translation parameter, [u', v', 1]^T is the homogeneous pixel coordinate of the small-error key feature point in the face photo, A is the pre-calibrated camera intrinsic matrix, and lambda is the projective scale factor.
In step 306, the normally distributed face feature points are selected according to the actual local coordinates of the face feature points and their registered local coordinates.
In summary, in steps 301-306, key feature points with small errors are first selected from the key feature points among the face feature points, and these small-error key feature points are then used to screen for normally distributed face feature points. This further removes face feature points with large errors and, especially under large face pose deflection, prevents inaccurately or even wrongly identified face feature points from degrading the face ranging accuracy.
In step 104, the camera coordinates of the normally distributed face feature points are determined based on their actual local coordinates. The camera coordinates are used to calculate the distance between the user's face and the camera.
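A sketch of this last step with invented values: the rigid transform maps actual local coordinates into the camera frame, and the point-to-camera distances then give the face distance. The patent does not fix the exact aggregation, so the mean below is just one plausible choice:

```python
import numpy as np

# Illustrative rotation/translation of the face relative to the camera.
R = np.eye(3)
T = np.array([[0.0], [0.0], [0.60]])

# Actual local coordinates of the normally distributed feature points (rows).
local = np.array([[ 0.03, -0.02, 0.01],
                  [-0.03, -0.02, 0.01],
                  [ 0.00,  0.04, 0.00]])

# Camera coordinates: X_cam = R * X_local + T, applied per point.
cam = (R @ local.T + T).T

# Distance of the user's face from the camera: mean distance of the
# feature points from the camera origin (one possible definition).
distance = float(np.linalg.norm(cam, axis=1).mean())
```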
In summary, the face ranging method based on face registration of the embodiments of the present application uses the face registration information to identify points with severe perspective distortion, screens out the normally distributed face feature points, and then performs face ranging on those points. This prevents inaccurately or even wrongly identified feature points from degrading the face ranging accuracy and greatly improves its precision. Moreover, the method requires little computing power and only simple camera equipment, achieving high-precision face ranging at low cost.
The application also provides a face ranging device based on face registration, which comprises: a memory for storing instructions executable by the processor; and a processor for executing the instructions to implement the method as described above.
Fig. 4 illustrates the architecture of a face ranging apparatus based on face registration according to an embodiment of the present application. Referring to fig. 4, the face registration-based face ranging apparatus 400 may include an internal communication bus 401, a processor 402, a read-only memory (ROM) 403, a random access memory (RAM) 404, and a communication port 405. When applied on a personal computer, the apparatus 400 may further include a hard disk 407. The internal communication bus 401 enables data communication among the components of the apparatus 400. The processor 402 performs the determinations and issues the prompts; in some embodiments, it may comprise one or more processors. The communication port 405 enables the apparatus 400 to exchange data with the outside; in some embodiments, the apparatus 400 sends and receives information and data over a network through the communication port 405. The apparatus 400 may also include various forms of program storage units and data storage units, such as the hard disk 407, the read-only memory (ROM) 403, and the random access memory (RAM) 404, capable of storing various data files used for computer processing and/or communication, as well as program instructions executed by the processor 402. The processor executes these instructions to implement the main parts of the method; the results are transmitted to the user equipment through the communication port and displayed on the user interface.
It will be appreciated that the face ranging method based on face registration of the present application is not limited to implementation by a single face ranging device; it may also be implemented cooperatively by a plurality of online face ranging devices based on face registration, which may connect and communicate over a local area network or a wide area network.
Other implementation details of the face ranging apparatus based on face registration of the present embodiment may refer to the embodiments described in fig. 1 to 3, and will not be described here.
The application also provides a computer readable medium storing computer program code which, when executed by a processor, implements a method as described above.
For example, the face ranging method based on face registration of the present application may be implemented as a program of the face ranging method based on face registration, stored in a memory, and loadable into a processor for execution, so as to implement the face ranging method based on face registration of the present application.
When the face ranging method based on face registration is implemented as a computer program, it may also be stored in a computer-readable storage medium as an article of manufacture. For example, computer-readable storage media may include, but are not limited to, magnetic storage devices (e.g., hard disks, floppy disks, magnetic stripes), optical disks (e.g., Compact Disc (CD), Digital Versatile Disc (DVD)), smart cards, and flash memory devices (e.g., electrically erasable programmable read-only memory (EEPROM), cards, sticks, key drives). Moreover, the various storage media described herein can represent one or more devices and/or other machine-readable media for storing information. The term "machine-readable medium" can include, without being limited to, wireless channels and various other media (and/or storage media) capable of storing, containing, and/or carrying code and/or instructions and/or data.
While the basic concepts have been described above, it will be apparent to those skilled in the art that the foregoing disclosure is by way of example only and is not intended to be limiting. Although not explicitly stated herein, various modifications, improvements, and adaptations of the application may occur to those skilled in the art. Such modifications, improvements, and adaptations are suggested by this disclosure and are intended to be within the spirit and scope of its exemplary embodiments.
Meanwhile, the present application uses specific terms to describe its embodiments. Reference to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic is included in at least one embodiment of the application. Thus, it should be emphasized and appreciated that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various places in this specification are not necessarily referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the application may be combined as appropriate.
Some aspects of the methods and systems of the present application may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.), or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." The processor may be one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, or a combination thereof. Furthermore, aspects of the application may take the form of a computer product embodied in one or more computer-readable media and comprising computer-readable program code. For example, computer-readable media may include, but are not limited to, magnetic storage devices (e.g., hard disks, floppy disks, tapes), optical disks (e.g., Compact Disc (CD), Digital Versatile Disc (DVD)), smart cards, and flash memory devices (e.g., cards, sticks, key drives).
The computer readable signal medium may comprise a propagated data signal with computer program code embodied therein, for example, on a baseband or as part of a carrier wave. The propagated signal may take on a variety of forms, including electro-magnetic, optical, etc., or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer readable signal medium may be propagated through any suitable medium including radio, cable, fiber optic cable, radio frequency signals, or the like, or a combination of any of the foregoing.
The computer program code necessary for the operation of portions of the present application may be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, and VB.NET, conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP, dynamic programming languages such as Python, Ruby, and Groovy, or other programming languages. The program code may execute entirely on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), in a cloud computing environment, or as a service such as software as a service (SaaS).
Furthermore, the order in which elements and sequences are presented, and the use of numbers, letters, or other designations in the application, are not intended to limit the order of the processes and methods unless specifically recited in the claims. While the foregoing disclosure discusses, by way of various examples, certain embodiments presently considered useful, it is to be understood that such details are for the purpose of illustration only and that the appended claims are not limited to the disclosed embodiments, but rather are intended to cover all modifications and equivalent combinations that fall within the spirit and scope of the embodiments of the present application. For example, while the system components described above may be implemented by hardware devices, they may also be implemented solely by software, such as by installing the described system on an existing server or mobile device.
Similarly, it should be appreciated that, in order to simplify the present disclosure and thereby aid the understanding of one or more embodiments of the application, various features are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure, however, is not to be interpreted as implying that the claimed subject matter requires more features than are expressly recited in the claims. Rather, claimed subject matter may lie in fewer than all features of a single embodiment disclosed above.
In some embodiments, numbers describing quantities of components and attributes are used; it should be understood that such numbers used in the description of embodiments are modified in some examples by the terms "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the number allows for a 20% variation. Accordingly, in some embodiments, the numerical parameters set forth in the specification and claims are approximations that may vary depending upon the desired properties sought by the individual embodiment. In some embodiments, numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Although the numerical ranges and parameters used to determine the breadth of ranges are approximations in some embodiments, in specific embodiments such numerical values are reported as precisely as practicable.
While the application has been described with reference to specific embodiments, it will be appreciated by those skilled in the art that the foregoing embodiments are merely illustrative; various equivalent changes and substitutions may be made without departing from the spirit of the application, and all such changes and modifications to the embodiments are intended to fall within the scope of the appended claims.
Claims (9)
1. A face ranging method based on face registration, comprising:
acquiring registered local coordinates of face feature points of a user based on face registration information of the user, wherein the registered local coordinates are coordinate values in a face coordinate system of the user;
acquiring a face photo of the user shot by a camera;
selecting face feature points with normal distribution from the face feature points based on the registered local coordinates of the face feature points and the face photo, and determining actual local coordinates of the face feature points with normal distribution, wherein the actual local coordinates are coordinate values under a face coordinate system of the user; and
determining camera coordinates of the face feature points with normal distribution based on actual local coordinates of the face feature points with normal distribution, wherein the camera coordinates are used for calculating the distance between the face of the user and the camera;
wherein the face registration information is registered in the following manner: sending a face registration start prompt message to the user; and displaying, at a preset position of the screen, an animation for attracting the attention of the user, wherein the preset position is used for guiding the user to adjust the face to a pose suitable for the camera to perform face registration shooting.
2. The method of claim 1, wherein the steps of selecting a normally distributed face feature point from the face feature points based on the registered local coordinates of the face feature points and the face photograph, and determining the actual local coordinates of the normally distributed face feature points are performed according to the following manner:
determining a rotation parameter and a translation parameter of the face relative to the camera based on the face photo and the registered local coordinates of key feature points in the face feature points;
calculating actual local coordinates of the face feature points based on the rotation parameters, the translation parameters and the face photos; and
selecting the face feature points with normal distribution according to the actual local coordinates of the face feature points and the registered local coordinates of the face feature points.
3. The method of claim 2, wherein the step of determining rotational and translational parameters of the face relative to the camera based on registered local coordinates of key feature points of the face photograph and the face feature points is performed by:
lamda*[u, v, 1]’ = A * [R, T] *[X, Y, Z]’
wherein [X, Y, Z]' is the registered local coordinates of the key feature point, R is the rotation parameter, T is the translation parameter, [u, v, 1]' is the pixel coordinates of the key feature point in the face photo, A is a pre-calibrated camera intrinsic parameter, and lamda is a preset constant.
4. The method of claim 1, wherein the steps of selecting a normally distributed face feature point from the face feature points based on the registered local coordinates of the face feature points and the face photograph, and determining the actual local coordinates of the normally distributed face feature points are performed according to the following manner:
determining a first rotation parameter and a first translation parameter of the face relative to the camera based on the face photo and registered local coordinates of key feature points in the face feature points;
calculating actual local coordinates of the key feature points based on the first rotation parameter, the first translation parameter and the face photo;
selecting key feature points with small errors from the key feature points based on the registered local coordinates and the actual local coordinates of the key feature points, wherein the key feature points with small errors comprise key feature points whose error between actual local coordinates and registered local coordinates is within a threshold range;
determining a second rotation parameter and a second translation parameter of the face relative to the camera based on the face photo and the registered local coordinates of the key feature points with small errors;
calculating actual local coordinates of the face feature points based on the second rotation parameters, the second translation parameters and the face photos; and
selecting the face feature points with normal distribution according to the actual local coordinates of the face feature points and the registered local coordinates of the face feature points.
5. The method of claim 4, wherein the step of determining the first rotational parameter and the first translational parameter of the face relative to the camera based on registered local coordinates of key feature points of the face photograph and the face feature points is performed by:
lamda*[u, v, 1]’ = A * [R1, T1] *[X, Y, Z]’
wherein [X, Y, Z]' is the registered local coordinates of the key feature point, R1 is the first rotation parameter, T1 is the first translation parameter, [u, v, 1]' is the pixel coordinates of the key feature point in the face photo, A is a pre-calibrated camera intrinsic parameter, and lamda is a preset constant;
the step of determining the second rotation parameter and the second translation parameter of the face relative to the camera based on the face photo and the registered local coordinates of the key feature points with small errors is performed by the following steps:
lamda*[u, v, 1]’’ = A * [R2, T2] *[X, Y, Z]’’
wherein [X, Y, Z]'' is the registered local coordinates of the key feature point with small error, R2 is the second rotation parameter, T2 is the second translation parameter, [u, v, 1]'' is the pixel coordinates of the key feature point with small error in the face photo, A is a pre-calibrated camera intrinsic parameter, and lamda is a preset constant.
6. The method of claim 1, wherein the method is applied to an automobile cockpit, the user is a driver, and the cameras for face registration include a camera of a driver monitoring system and a camera of a passenger monitoring system.
7. The method of claim 1, wherein the method is applied to an automobile cockpit, the user is a driver, and the camera is a camera of a driver monitoring system.
8. A face ranging apparatus based on face registration, comprising: a memory for storing instructions executable by the processor; and a processor for executing the instructions to implement the method of any one of claims 1-7.
9. A computer readable medium storing computer program code which, when executed by a processor, implements the method of any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110986913.5A CN113610051B (en) | 2021-08-26 | 2021-08-26 | Face ranging method, equipment and computer readable medium based on face registration |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113610051A CN113610051A (en) | 2021-11-05 |
CN113610051B true CN113610051B (en) | 2023-11-17 |
Family
ID=78342115
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110986913.5A Active CN113610051B (en) | 2021-08-26 | 2021-08-26 | Face ranging method, equipment and computer readable medium based on face registration |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113610051B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006215743A (en) * | 2005-02-02 | 2006-08-17 | Toyota Motor Corp | Image processing apparatus and image processing method |
CN104978548A (en) * | 2014-04-02 | 2015-10-14 | 汉王科技股份有限公司 | Visual line estimation method and visual line estimation device based on three-dimensional active shape model |
JP2016173313A (en) * | 2015-03-17 | 2016-09-29 | 国立大学法人鳥取大学 | Visual line direction estimation system, visual line direction estimation method and visual line direction estimation program |
CN106355147A (en) * | 2016-08-26 | 2017-01-25 | 张艳 | Acquiring method and detecting method of live face head pose detection regression apparatus |
WO2018232717A1 (en) * | 2017-06-23 | 2018-12-27 | 中国科学院自动化研究所 | Method, storage and processing device for identifying authenticity of human face image based on perspective distortion characteristics |
CN110956066A (en) * | 2019-05-11 | 2020-04-03 | 初速度(苏州)科技有限公司 | Face part distance measurement method and device and vehicle-mounted terminal |
CN111160232A (en) * | 2019-12-25 | 2020-05-15 | 上海骏聿数码科技有限公司 | Front face reconstruction method, device and system |
CN111784885A (en) * | 2020-06-17 | 2020-10-16 | 杭州海康威视数字技术股份有限公司 | Passage control method and device, gate equipment and multi-gate system |
CN111780673A (en) * | 2020-06-17 | 2020-10-16 | 杭州海康威视数字技术股份有限公司 | Distance measurement method, device and equipment |
Also Published As
Publication number | Publication date |
---|---|
CN113610051A (en) | 2021-11-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11120531B2 (en) | Method and device for image processing, vehicle head-up display system and vehicle | |
CN110427917B (en) | Method and device for detecting key points | |
JP6681729B2 (en) | Method for determining 3D pose of object and 3D location of landmark point of object, and system for determining 3D pose of object and 3D location of landmark of object | |
US20190371003A1 (en) | Monocular vision tracking method, apparatus and non-volatile computer-readable storage medium | |
CN110675489B (en) | Image processing method, device, electronic equipment and storage medium | |
CN111723691B (en) | Three-dimensional face recognition method and device, electronic equipment and storage medium | |
KR102169309B1 (en) | Information processing apparatus and method of controlling the same | |
CN111854620B (en) | Monocular camera-based actual pupil distance measuring method, device and equipment | |
CN111665513B (en) | Facial feature detection device and facial feature detection method | |
US8571303B2 (en) | Stereo matching processing system, stereo matching processing method and recording medium | |
EP3633606B1 (en) | Information processing device, information processing method, and program | |
CN111028205B (en) | Eye pupil positioning method and device based on binocular distance measurement | |
US20210334569A1 (en) | Image depth determining method and living body identification method, circuit, device, and medium | |
CN109934873B (en) | Method, device and equipment for acquiring marked image | |
EP3905195A1 (en) | Image depth determining method and living body identification method, circuit, device, and medium | |
CN111860292A (en) | Monocular camera-based human eye positioning method, device and equipment | |
CN113610051B (en) | Face ranging method, equipment and computer readable medium based on face registration | |
JP2000099760A (en) | Method for forming three-dimensional model and computer-readable recording medium recording three- dimensional model forming program | |
JP2018101212A (en) | On-vehicle device and method for calculating degree of face directed to front side | |
CN113902932A (en) | Feature extraction method, visual positioning method and device, medium and electronic equipment | |
CN109829401A (en) | Traffic sign recognition method and device based on double capture apparatus | |
CN110309074B (en) | Test method and device | |
JP2021051347A (en) | Distance image generation apparatus and distance image generation method | |
CN115908581A (en) | Vehicle-mounted camera pitch angle calibration method, device, equipment and storage medium | |
JP2020034525A (en) | Information processing apparatus and method, and computer program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
CB02 | Change of applicant information | Address after: 314500 988 Tong Tong Road, Wu Tong Street, Tongxiang, Jiaxing, Zhejiang. Applicant after: United New Energy Automobile Co.,Ltd. Address before: 314500 988 Tong Tong Road, Wu Tong Street, Tongxiang, Jiaxing, Zhejiang. Applicant before: Hezhong New Energy Vehicle Co.,Ltd. |
GR01 | Patent grant | |