CN113034345B - Face recognition method and system based on SFM reconstruction - Google Patents


Info

Publication number
CN113034345B
CN113034345B · Application CN201911357292.3A · CN201911357292A
Authority
CN
China
Prior art keywords
face image
image
face
camera
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911357292.3A
Other languages
Chinese (zh)
Other versions
CN113034345A (en)
Inventor
陈舒婷
余松森
陈远存
黄文俊
罗云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oking Information Industry Co ltd
Original Assignee
Guangdong Oking Information Industry Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oking Information Industry Co ltd filed Critical Guangdong Oking Information Industry Co ltd
Priority to CN201911357292.3A priority Critical patent/CN113034345B/en
Publication of CN113034345A publication Critical patent/CN113034345A/en
Application granted granted Critical
Publication of CN113034345B publication Critical patent/CN113034345B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/08 Projecting images onto non-planar surfaces, e.g. geodetic screens
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a face recognition method and system based on SFM reconstruction. By combining three-dimensional face reconstruction with recognition, it addresses the low recognition rate and accuracy of multi-pose face recognition; by adopting a stereo matching method together with the SFM algorithm, it effectively addresses the problems of operation speed and mismatching. A mobile-phone camera can be used to accurately acquire image reconstruction information, and multi-view images of the object of interest are processed with a hybrid feature-extraction technique. Three-dimensional face reconstruction is realized from two-dimensional face pictures, and the mismatching rate is reduced by combining coarse and fine matching, which improves the accuracy of the SFM algorithm. The required matching points are found quickly and accurately by the SFM algorithm, and subsequent processing yields a realistic three-dimensional face structure. Using the three-dimensional face as a verification means improves the reliability of face recognition, so the method is well suited to scenarios such as face-scan payment and face recognition.

Description

Face recognition method and system based on SFM reconstruction
Technical Field
The disclosure relates to the technical fields of computer vision and three-dimensional image reconstruction, and in particular to a face recognition method and system based on SFM reconstruction.
Background
3D face reconstruction methods can be divided into hardware-based approaches: stereo cameras, structured light or 3D laser scanners can acquire accurate 3D face data, but at a much higher cost. Monocular-camera-based methods have therefore been proposed, which reconstruct a 3D face from multi-view face images.
Robust algorithms are needed to cope with changes in illumination, changes in camera focal length and position, facial expression movements and head shakes. Meanwhile, rendering a new face image with a three-dimensional face parametric model and a two-dimensional parametric texture model requires the face region in the image to be identified manually; with further improvements the three-dimensional face model can be built fully automatically, but different facial expressions still cannot be expressed well. Existing three-dimensional face reconstruction techniques recover the three-dimensional face mainly through an algorithm, and the results obtained when processing faces are unstable, especially in regions of low or repetitive texture.
Disclosure of Invention
To solve the above problems, the present disclosure provides a face recognition method and system based on SFM reconstruction, offering an efficient automatic 2D-to-3D face reconstruction method that removes the need for manual marking and reduces operation time, and solving the problem of low multi-pose face recognition rate and accuracy by combining three-dimensional face reconstruction with recognition.
In order to achieve the above object, according to an aspect of the present disclosure, there is provided a face recognition method based on SFM reconstruction, the method including the steps of:
step 1: shooting a face image through a common camera, calibrating the camera, and acquiring internal and external parameters of the camera; taking an image sequence of one image shot by a camera at an angle interval to a human face as a human face image sequence, for example, taking the first two images in the human face image sequence as a first image and a second image, wherein the angle interval is 10 degrees;
step 2: inputting a first face image for denoising, image downsampling and image blurring; adjusting the gray scale range of the image by using gamma conversion on the second face image;
and step 3: carrying out feature point detection on the first face image and the second face image by using an SURF algorithm, carrying out SIFT operator on the detected feature points, then carrying out stereo matching to obtain registration points, and processing the obtained registration points through triangular similarity to enable pixel coordinates to be the same;
and 4, step 4: performing feature point matching on all adjacent two images in the face image sequence by using a nearest neighbor and next nearest neighbor ratio method, and storing all matching points;
and 5: setting the camera coordinates of the first human face image as standard coordinates, and calculating projection matrixes of the first human face image and the second human face image through internal parameters, rotation matrix parameters and translational vector parameters;
step 6: performing triangularization reconstruction by using the projection matrix and the obtained matching points to obtain three-dimensional coordinates of space points of the first face image and the second face image;
and 7: sequentially taking each image in the face image sequence as a third face image, solving a translation vector and a rotation matrix through the three-dimensional coordinates obtained in the step 6, and performing three-dimensional reconstruction on the translation vector and the rotation matrix and the matching points obtained in the step 4 to obtain three-dimensional point clouds of the matching points of the first two face images of the third face image;
and 8: reconstructing a new three-dimensional point cloud through the internal parameters, the matching points of the third face image, the first face image and the second face image, the rotation matrix parameters and the translation vector parameters;
and step 9: combining the three-dimensional point cloud generated in the step 7 and the three-dimensional point cloud generated in the step 8, repeatedly executing the step 7 to the step 8, and realizing three-dimensional reconstruction of the time sequence face images after combining the three-dimensional point clouds of all the face image sequences when the three-dimensional point clouds of all the face image sequences are combined;
step 10: and performing face recognition through an ICP algorithm.
Further, in step 1, the internal and external parameters of the camera comprise the camera intrinsic parameters and the camera extrinsic parameters; the intrinsic parameters are related to the camera's own characteristics and include its focal length, pixel size and so on, while the extrinsic parameters are parameters in the world coordinate system and include the camera's position, rotation direction and so on.
Further, in step 1, the method of collecting, as the face image sequence, the images taken by the camera at each angle interval is as follows: within a time period, the camera takes one face image for every N degrees of rotation relative to the frontal face, and all the face images form the face image sequence; N is an angle value, 10 degrees by default, with a value range of 1-180 degrees; the time period is 5 minutes; both N and the time period can be set manually.
Further, in step 2, the first face image and the second face image are images taken by the camera at two different angles; for example, the first face image is the frontal face image and the second face image is the image obtained after the face is rotated clockwise by 10 degrees.
Further, in step 6, the method for performing triangularization reconstruction by using the projection matrix and the obtained matching points to obtain the three-dimensional coordinates of the spatial points of the first face image and the second face image includes:
let the abscissa denote x, and the homogeneous coordinate of the vector x denote
Figure BDA0002336276810000021
Three-dimensional point M = [ x, y, z)] T Is recorded as the homogeneous coordinate
Figure BDA0002336276810000022
Corresponding image point m = [ u, v =] T Is recorded as the homogeneous coordinate
Figure BDA0002336276810000023
Satisfy formula (1)
Figure BDA0002336276810000024
Wherein
Figure BDA0002336276810000031
Alpha, beta is the fusion of focal length and pixel aspect ratio, gamma is the radial distortion parameter, u 0 Are all horizontal coordinates v, v 0 Are all vertical coordinates, λ is a constant; the elements in the matrix a are internal parameters of the camera, and Ω is a three-dimensional coordinate transformation matrix including a rotation matrix R and a displacement matrix t.
Further, in step 7, the third face image is an image in the face image sequence; each time step 7 is executed, the third face image is replaced by the image following the current one in the face image sequence.
Further, in step 10, the method for performing face recognition through the ICP algorithm includes the following steps:
Step 10.1: obtain the target point cloud P from the input face image, with point set $p_i \in P$, $i = 1, 2, \ldots, n$, where n is the number of target points in the target point cloud;
Step 10.2: find the corresponding point set $q_i \in Q$ in the source point cloud Q of the face image such that $\lVert q_i - p_i \rVert = \min$, where min is a set minimum difference value, 0.3·i by default;
Step 10.3: calculate the rotation matrix R and translation matrix t that minimize the error function;
Step 10.4: apply the rotation and translation obtained in step 10.3 to $p_i$ to obtain the new corresponding point set $p'_i = \{Rp_i + t,\; p_i \in P\}$;
Step 10.5: calculate the average distance d between $p'_i$ and the corresponding point set $q_i$:

$$d = \frac{1}{n}\sum_{i=1}^{n} \lVert p'_i - q_i \rVert^{2}$$

Step 10.6: if d is smaller than a given threshold, or the number of iterations exceeds the preset maximum, stop iterating and return d as the final face-model similarity; otherwise return to step 10.2 until the convergence condition is met. The threshold is set to a preset small value by default, and the maximum number of iterations is set to 100.
The terms used herein are: SFM, Structure-from-Motion (a structure-from-motion recovery algorithm); ICP, the Iterative Closest Point algorithm.
The invention also provides a face recognition system based on SFM reconstruction, which comprises: a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor executing the computer program to run in the units of the system:
the image sequence acquisition unit is used for shooting a face image through a common camera, calibrating the camera and acquiring internal and external parameters of the camera; the camera shoots a human face at an angle interval every time and acquires an image sequence of an image as a human face image sequence; taking the first two images in the human face image sequence as a first image and a second image;
the model image preprocessing unit is used for inputting a first human face image to perform denoising, image downsampling and image blurring; adjusting the gray scale range of the image by gamma conversion of the second face image;
the image feature registration unit is used for detecting feature points in the first face image and the second face image with the SURF algorithm, computing SIFT descriptors for the detected feature points and then performing stereo matching to obtain registration points, and processing the obtained registration points by triangle similarity so that their pixel coordinates become the same;
the neighbor matching unit is used for performing feature point matching on all adjacent two images in the face image sequence by using a nearest neighbor and next nearest neighbor ratio method and storing all matching points;
the model image projection unit is used for setting the camera coordinates of the first human face image as standard coordinates, and calculating the projection matrixes of the first human face image and the second human face image through the internal parameters, the rotation matrix parameters and the translation vector parameters;
the model triangularization reconstruction unit is used for carrying out triangularization reconstruction by using the projection matrix and the obtained matching points to obtain three-dimensional coordinates of space points of the first face image and the second face image;
the three-dimensional point cloud construction unit is used for sequentially taking each image in the face image sequence as a third face image, solving a translation vector and a rotation matrix through three-dimensional coordinates, and performing three-dimensional reconstruction on the translation vector and the rotation matrix and matching points to obtain three-dimensional point cloud of matching points of the first two face images of the third face image;
the reconstruction three-dimensional point cloud unit is used for reconstructing a new three-dimensional point cloud through the internal parameters, the matching points of the third face image, the first face image and the second face image, the rotation matrix parameters and the translation vector parameters;
the point cloud merging completion unit is used for merging the three-dimensional point clouds until the three-dimensional point clouds of all the face image sequences have been merged with those of the first face image and the second face image;
and the face recognition unit is used for carrying out face recognition through an ICP algorithm.
The beneficial effects of this disclosure are as follows. The invention provides a face recognition method and system based on SFM reconstruction that effectively solves the problems of operation speed and mismatching by adopting a stereo matching method in combination with the SFM algorithm. The whole method is easy to implement: a mobile-phone camera can accurately acquire image reconstruction information, and multi-view images of the object of interest are processed with a hybrid feature-extraction technique; three-dimensional face reconstruction is realized from two-dimensional face pictures, and the mismatching rate is reduced by combining coarse and fine matching, which improves the accuracy of the SFM algorithm; from face images taken by an ordinary camera, the required matching points are found quickly and accurately with the SFM algorithm, and subsequent processing yields a realistic three-dimensional face structure; using the three-dimensional face as a verification means improves the reliability of face recognition, so the method is well suited to scenarios such as face-scan payment and face recognition.
Drawings
The foregoing and other features of the present disclosure will become more apparent from the following detailed description of embodiments taken in conjunction with the drawings, in which like reference characters designate the same or similar elements throughout the several views. It is apparent that the drawings described below are merely some examples of the present disclosure, and that other drawings may be derived from them by those skilled in the art without inventive effort. In the drawings:
FIG. 1 is a flow chart of a face recognition method based on SFM reconstruction;
fig. 2 is a block diagram of a face recognition system based on SFM reconstruction.
Detailed Description
The conception, the specific structure and the technical effects produced by the present disclosure will be clearly and completely described in conjunction with the embodiments and the attached drawings, so that the purposes, the schemes and the effects of the present disclosure can be fully understood. It should be noted that, in the present application, the embodiments and features of the embodiments may be combined with each other without conflict.
Fig. 1 is a flow chart of a face recognition method based on SFM reconstruction according to the present disclosure, and a face recognition method based on SFM reconstruction according to an embodiment of the present disclosure is described below with reference to fig. 1.
The disclosure provides a face recognition method based on SFM reconstruction, which specifically comprises the following steps:
step 1: shooting a face image through a common camera, calibrating the camera, and acquiring internal and external parameters of the camera; taking an image sequence of one image shot by a camera at an angle interval to a human face as a human face image sequence, for example, taking the first two images in the human face image sequence as a first image and a second image, wherein the angle interval is 10 degrees;
and 2, step: inputting a first face image for denoising, image downsampling and image blurring; adjusting the gray scale range of the image by using gamma conversion on the second face image;
and step 3: carrying out feature point detection on the first face image and the second face image by using an SURF algorithm, carrying out SIFT operator on the detected feature points, then carrying out stereo matching to obtain registration points, and processing the obtained registration points through triangular similarity to enable pixel coordinates to be the same;
and 4, step 4: performing feature point matching on all adjacent two images in the face image sequence by using a nearest neighbor and next nearest neighbor ratio method, and storing all matching points;
and 5: setting the camera coordinates of the first human face image as standard coordinates, and calculating projection matrixes of the first human face image and the second human face image through internal parameters, rotation matrix parameters and translational vector parameters;
step 6: performing triangularization reconstruction (point cloud triangularization) by using the projection matrix and the obtained matching points to obtain space point three-dimensional coordinates of the first face image and the second face image;
and 7: sequentially taking each image in the face image sequence as a third face image, solving a translation vector and a rotation matrix through the three-dimensional coordinates obtained in the step 6, and performing three-dimensional reconstruction on the translation vector and the rotation matrix and the matching points obtained in the step 4 to obtain three-dimensional point clouds of the matching points of the first two face images of the third face image;
and 8: reconstructing a new three-dimensional point cloud through the internal parameters, the matching points of the third face image and the first face image and the second face image, the rotation matrix parameters and the translation vector parameters;
and step 9: combining the three-dimensional point cloud generated in the step 7 and the three-dimensional point cloud generated in the step 8, repeatedly executing the step 7 to the step 8, and realizing three-dimensional reconstruction of the time sequence face images after combining the three-dimensional point clouds of all the face image sequences when the three-dimensional point clouds of all the face image sequences are combined;
step 10: and performing face recognition through an ICP algorithm.
Further, in step 1, the internal and external parameters of the camera comprise the camera intrinsic parameters and the camera extrinsic parameters; the intrinsic parameters are related to the camera's own characteristics and include its focal length, pixel size and so on, while the extrinsic parameters are parameters in the world coordinate system and include the camera's position, rotation direction and so on.
Further, in step 1, the method of collecting, as the face image sequence, the images taken by the camera at each angle interval is as follows: within a time period, the camera takes one face image for every N degrees of rotation relative to the frontal face, and all the face images form the face image sequence; N is an angle value, 10 degrees by default, with a value range of 1-180 degrees; the time period is 5 minutes; both N and the time period can be set manually.
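For the calibration part of step 1, the intrinsic matrix and per-view extrinsic parameters can be estimated with a standard checkerboard procedure. The sketch below is only one possible realization and assumes OpenCV's calibrateCamera API; the checkerboard geometry, square size and image file names are illustrative assumptions, not values fixed by this disclosure.

```python
import cv2
import numpy as np

# Assumed checkerboard: 9x6 inner corners, 25 mm squares (illustrative values).
PATTERN = (9, 6)
SQUARE = 0.025  # metres

# 3-D coordinates of the checkerboard corners in the board's own frame.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points, size = [], [], None
for path in ["calib_00.jpg", "calib_01.jpg", "calib_02.jpg"]:  # hypothetical files
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        size = gray.shape[::-1]  # (width, height)

# A holds the intrinsic parameters (focal length, principal point, skew);
# rvecs/tvecs are the per-view extrinsic parameters (rotation, translation).
ret, A, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
print("intrinsic matrix A:\n", A)
```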
Further, in step 2, the first face image and the second face image are images taken by the camera at two different angles; for example, the first face image is the frontal face image and the second face image is the image obtained after the face is rotated clockwise by 10 degrees.
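The preprocessing of step 2 only requires denoising, down-sampling and blurring of the first image and a gamma transform of the second; the concrete OpenCV calls, the kernel sizes, the gamma value of 0.8 and the file names below are assumptions made for illustration.

```python
import cv2
import numpy as np

def preprocess_first(img_bgr):
    """Denoise, down-sample and blur the first face image (step 2)."""
    denoised = cv2.fastNlMeansDenoisingColored(img_bgr, None, 10, 10, 7, 21)
    downsampled = cv2.pyrDown(denoised)            # image down-sampling
    return cv2.GaussianBlur(downsampled, (5, 5), 0)

def preprocess_second(img_bgr, gamma=0.8):
    """Adjust the gray-scale range of the second face image with a gamma transform."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    table = np.array([(i / 255.0) ** gamma * 255 for i in range(256)], dtype=np.uint8)
    return cv2.LUT(gray, table)

first = preprocess_first(cv2.imread("face_000deg.jpg"))    # hypothetical file names
second = preprocess_second(cv2.imread("face_010deg.jpg"))
```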
Further, in step 6, the method for performing triangulation reconstruction by using the projection matrices and the obtained matching points to obtain the three-dimensional coordinates of the spatial points of the first face image and the second face image includes:
Let the homogeneous coordinate of a vector $x$ be denoted $\tilde{x}$. The three-dimensional point $M = [x, y, z]^{T}$ has homogeneous coordinate $\tilde{M} = [x, y, z, 1]^{T}$, and the corresponding image point $m = [u, v]^{T}$ has homogeneous coordinate $\tilde{m} = [u, v, 1]^{T}$; they satisfy formula (1):

$$\lambda \tilde{m} = A\,\Omega\,\tilde{M} \qquad (1)$$

where

$$A = \begin{bmatrix} \alpha & \gamma & u_{0} \\ 0 & \beta & v_{0} \\ 0 & 0 & 1 \end{bmatrix}$$

$\alpha$ and $\beta$ are the fusion of the focal length and the pixel aspect ratio, $\gamma$ is the radial distortion parameter, $u_{0}$ is a horizontal coordinate and $v_{0}$ a vertical coordinate, and $\lambda$ is a constant; the elements of the matrix $A$ are the internal parameters of the camera, and $\Omega = [R \mid t]$ is the three-dimensional coordinate transformation matrix composed of the rotation matrix $R$ and the displacement matrix $t$.
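Steps 3 to 6 can be sketched together as below. The sketch assumes an opencv-contrib build that exposes SURF (cv2.xfeatures2d) and a recent OpenCV providing cv2.SIFT_create; the example intrinsic matrix A and the recovery of R and t from the essential matrix are illustrative assumptions, since the disclosure specifies the projection matrices but not how the relative pose of the second view is first obtained.

```python
import cv2
import numpy as np

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # requires opencv-contrib
sift = cv2.SIFT_create()

def match_pair(img1, img2, ratio=0.7):
    """SURF keypoints, SIFT descriptors, then nearest/next-nearest ratio matching (steps 3-4)."""
    kp1, kp2 = surf.detect(img1, None), surf.detect(img2, None)
    kp1, des1 = sift.compute(img1, kp1)
    kp2, des2 = sift.compute(img2, kp2)
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    return pts1, pts2

# Example intrinsic matrix A (placeholder values; in practice taken from calibration).
A = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("face_000deg.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical files
img2 = cv2.imread("face_010deg.jpg", cv2.IMREAD_GRAYSCALE)
pts1, pts2 = match_pair(img1, img2)

# One possible way to obtain R, t of the second view: essential-matrix decomposition.
E, _ = cv2.findEssentialMat(pts1, pts2, A)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, A)

# Step 5: the first camera defines the reference frame, so P1 = A[I|0], P2 = A[R|t].
P1 = A @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = A @ np.hstack([R, t])

# Step 6: triangulation of the matching points into 3-D space points.
X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # 4xN homogeneous coordinates
X = (X_h[:3] / X_h[3]).T                             # Nx3 Euclidean coordinates
```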
Further, in step 7, the third face image is an image in the face image sequence; each time step 7 is executed, the third face image is replaced by the image following the current one in the face image sequence.
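Steps 7 to 9 describe an incremental reconstruction: the pose of each new (third) face image is solved from already reconstructed 3-D points, the fresh matches are triangulated, and the resulting clouds are merged. The sketch below assumes OpenCV's solvePnP as the pose solver and that the 2-D/3-D correspondences come from the matches stored in step 4; the disclosure itself does not name a specific solver.

```python
import cv2
import numpy as np

def register_new_view(A, pts3d, pts2d_new, pts_prev, pts_new, P_prev):
    """Add one new face image to the reconstruction (steps 7-9).

    pts3d      : Nx3 points already reconstructed and visible in the new image
    pts2d_new  : their Nx2 projections in the new (third) face image
    pts_prev, pts_new : Mx2 fresh matches between the previous and the new image
    P_prev     : 3x4 projection matrix of the previous image
    """
    # Step 7: solve the rotation and translation of the new view from 3-D/2-D pairs.
    ok, rvec, tvec = cv2.solvePnP(np.float32(pts3d), np.float32(pts2d_new), A, None)
    R_new, _ = cv2.Rodrigues(rvec)

    # Step 8: triangulate the fresh matches against the previous view.
    P_new = A @ np.hstack([R_new, tvec])
    X_h = cv2.triangulatePoints(P_prev, P_new, np.float32(pts_prev).T, np.float32(pts_new).T)
    new_cloud = (X_h[:3] / X_h[3]).T

    # Step 9: merge the new cloud with the existing one.
    return np.vstack([pts3d, new_cloud]), P_new
```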
Further, in step 10, the method for performing face recognition through the ICP algorithm is as follows:
Step 10.1: obtain the target point cloud P from the input face image, with point set $p_i \in P$, $i = 1, 2, \ldots, n$, where n is the number of target points in the target point cloud;
Step 10.2: find the corresponding point set $q_i \in Q$ in the source point cloud Q of the face image such that $\lVert q_i - p_i \rVert = \min$, where min is a set minimum difference value, 0.3·i by default;
Step 10.3: calculate the rotation matrix R and translation matrix t that minimize the error function;
Step 10.4: apply the rotation and translation obtained in step 10.3 to $p_i$ to obtain the new corresponding point set $p'_i = \{Rp_i + t,\; p_i \in P\}$;
Step 10.5: calculate the average distance d between $p'_i$ and the corresponding point set $q_i$:

$$d = \frac{1}{n}\sum_{i=1}^{n} \lVert p'_i - q_i \rVert^{2}$$

Step 10.6: if d is smaller than a given threshold, or the number of iterations exceeds the preset maximum, stop iterating and return d as the final face-model similarity; otherwise return to step 10.2 until the convergence condition is met. The threshold is set to a preset small value by default, and the maximum number of iterations is set to 100.
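The ICP loop of steps 10.1 to 10.6 can be written as below. The KD-tree correspondence search, the SVD (Kabsch) pose update and the default threshold value are implementation assumptions; the disclosure fixes only the overall iterative scheme and the maximum of 100 iterations.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_similarity(P, Q, max_iter=100, threshold=1e-4):
    """Point-to-point ICP between target cloud P and source cloud Q (steps 10.1-10.6).

    Returns the final mean distance d, used here as the face-model similarity.
    threshold=1e-4 is an assumed default; the disclosure leaves the exact value open.
    """
    P_cur = np.asarray(P, dtype=float).copy()
    Q = np.asarray(Q, dtype=float)
    tree = cKDTree(Q)
    d = np.inf
    for _ in range(max_iter):
        # Step 10.2: closest point q_i in Q for every p_i.
        _, idx = tree.query(P_cur)
        Q_corr = Q[idx]

        # Step 10.3: rotation R and translation t minimizing the error (SVD / Kabsch).
        mu_p, mu_q = P_cur.mean(axis=0), Q_corr.mean(axis=0)
        H = (P_cur - mu_p).T @ (Q_corr - mu_q)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:        # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_q - R @ mu_p

        # Step 10.4: p'_i = R p_i + t.
        P_cur = (R @ P_cur.T).T + t

        # Step 10.5: mean (squared) distance d between p'_i and q_i.
        d = np.mean(np.sum((P_cur - Q_corr) ** 2, axis=1))

        # Step 10.6: stop once d falls below the threshold.
        if d < threshold:
            break
    return d
```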
The terms used herein are: SFM, Structure-from-Motion (a structure-from-motion recovery algorithm); ICP, the Iterative Closest Point algorithm.
An SFM reconstruction-based face recognition system provided in an embodiment of the present disclosure is shown in the structure diagram of fig. 2. The SFM reconstruction-based face recognition system of this embodiment comprises: a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the above-described SFM reconstruction-based face recognition method embodiment.
The system comprises: a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor executing the computer program to run in the units of the system:
the image sequence acquisition unit is used for shooting a face image through a common camera, calibrating the camera and acquiring internal and external parameters of the camera; the camera shoots a human face at an angle interval every time and acquires an image sequence of an image as a human face image sequence; taking the first two images in the face image sequence as a first image and a second image;
the model image preprocessing unit is used for inputting a first human face image to perform denoising, image downsampling and image blurring; adjusting the gray scale range of the image by using gamma conversion on the second face image;
the image feature registration unit is used for detecting feature points in the first face image and the second face image with the SURF algorithm, computing SIFT descriptors for the detected feature points and then performing stereo matching to obtain registration points, and processing the obtained registration points by triangle similarity so that their pixel coordinates become the same;
the neighbor matching unit is used for performing feature point matching on all adjacent two images in the face image sequence by using a nearest neighbor and next nearest neighbor ratio method and storing all matching points;
the model image projection unit is used for setting the camera coordinates of the first human face image as standard coordinates, and calculating the projection matrixes of the first human face image and the second human face image through the internal parameters, the rotation matrix parameters and the translational vector parameters;
the model triangularization reconstruction unit is used for carrying out triangularization reconstruction by utilizing the projection matrix and the obtained matching points to obtain three-dimensional coordinates of space points of the first face image and the second face image;
the three-dimensional point cloud construction unit is used for sequentially taking each image in the face image sequence as a third face image, solving a translation vector and a rotation matrix through three-dimensional coordinates, and performing three-dimensional reconstruction on the translation vector and the rotation matrix and matching points to obtain three-dimensional point clouds of the matching points of the first two face images of the third face image;
the reconstruction three-dimensional point cloud unit is used for reconstructing a new three-dimensional point cloud through the internal parameters, the matching points of the third face image, the first face image and the second face image, the rotation matrix parameters and the translation vector parameters;
the point cloud merging completion unit is used for merging the three-dimensional point clouds until the three-dimensional point clouds of all the face image sequences have been merged with those of the first face image and the second face image;
and the face recognition unit is used for carrying out face recognition through an ICP algorithm.
The face recognition system based on SFM reconstruction can run on computing equipment such as desktop computers, notebooks, palmtop computers and cloud servers. The running system may include, but is not limited to, a processor and a memory. It will be appreciated by those skilled in the art that the above is merely an example of the face recognition system based on SFM reconstruction and does not limit it; the system may include more or fewer components than shown, combine certain components, or use different components; for example, it may further include input and output devices, network access devices, buses and the like.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor or any conventional processor. The processor is the control center of the equipment on which the SFM reconstruction-based face recognition system runs, and connects the various parts of that equipment through various interfaces and lines.
The memory may be used to store the computer programs and/or modules, and the processor implements the various functions of the face recognition system based on SFM reconstruction by running or executing the computer programs and/or modules stored in the memory and calling the data stored in the memory. The memory may mainly comprise a program storage area and a data storage area: the program storage area may store the operating system and the application programs required by at least one function (such as a sound playing function, an image playing function and the like), and the data storage area may store data created according to the use of the device (such as audio data, a phone book and the like). In addition, the memory may include a high-speed random access memory, and may also include a non-volatile memory such as a hard disk, an internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash memory card (Flash Card), at least one magnetic disk storage device, a Flash memory device, or another non-volatile solid-state storage device.
While the present disclosure has been described in considerable detail and with particular reference to several illustrated embodiments, it is not intended to be limited to any such details or embodiments or to any particular embodiment; it is to be construed, with reference to the appended claims, as covering the intended scope of the disclosure in view of the prior art. Furthermore, the foregoing describes the disclosure in terms of embodiments foreseen by the inventors for which an enabling description was available, although insubstantial modifications of the disclosure not presently foreseen may nonetheless represent equivalents thereto.

Claims (5)

1. A face recognition method based on SFM reconstruction is characterized by comprising the following steps:
step 1: shooting a face image through a common camera, calibrating the camera, and acquiring internal and external parameters of the camera; the method comprises the steps that a camera shoots a human face at an angle interval every time and collects an image sequence of one image as a human face image sequence, and the first two images in the human face image sequence are taken as a first image and a second image;
step 2: inputting a first face image for denoising, image downsampling and image blurring; adjusting the gray scale range of the image by using gamma conversion on the second face image;
and 3, step 3: feature point detection is carried out on the first face image and the second face image by utilizing a SURF algorithm, SIFT operators are carried out on the detected feature points, then stereo matching is carried out to obtain registration points, and the obtained registration points are processed through triangular similarity to enable pixel coordinates to be the same;
and 4, step 4: performing feature point matching on all adjacent two images in the face image sequence by using a nearest neighbor and next nearest neighbor ratio method, and storing all matching points;
and 5: setting the camera coordinates of the first human face image as standard coordinates, and calculating projection matrixes of the first human face image and the second human face image through internal parameters, rotation matrix parameters and translational vector parameters;
step 6: performing triangularization reconstruction by using the projection matrix and the obtained matching points to obtain three-dimensional coordinates of space points of the first face image and the second face image;
and 7: sequentially taking each image in the face image sequence as a third face image, solving a translation vector and a rotation matrix through the three-dimensional coordinates obtained in the step 6, and performing three-dimensional reconstruction on the translation vector and the rotation matrix and the matching points obtained in the step 4 to obtain three-dimensional point clouds of the matching points of the first two face images of the third face image;
and 8: reconstructing a new three-dimensional point cloud through the internal parameters, the matching points of the third face image, the first face image and the second face image, the rotation matrix parameters and the translation vector parameters;
and step 9: combining the three-dimensional point cloud generated in the step 7 and the three-dimensional point cloud generated in the step 8, repeatedly executing the step 7 to the step 8, and realizing three-dimensional reconstruction of the time sequence face images after combining the three-dimensional point clouds of all the face image sequences when the three-dimensional point clouds of all the face image sequences are combined;
step 10: carrying out face recognition through an ICP algorithm;
in step 6, the method for performing triangularization reconstruction by using the projection matrix and the obtained matching points to obtain the three-dimensional coordinates of the space points of the first face image and the second face image comprises the following steps:
let the abscissa be x, three-dimensional point M = [ x, y, z =] T Is recorded as a homogeneous coordinate
Figure FDA0003794835840000011
Corresponding image point m = [ u, v =] T Is recorded as the homogeneous coordinate
Figure FDA0003794835840000012
Satisfy the formula
Figure FDA0003794835840000013
Wherein
Figure FDA0003794835840000021
Alpha, beta is focal length and pixel aspect ratio, gamma is radial distortion parameter, u 0 Are all horizontal coordinates v, v 0 Are all vertical coordinates, λ is a constant; the elements in the matrix A are internal parameters of the camera, and omega is a three-dimensional coordinate transformation matrix which comprises a rotation matrix R and a displacement matrix t;
in step 10, the method for face recognition by ICP algorithm comprises the following steps:
Step 10.1: obtain the target point cloud P from the input face image, with point set $p_i \in P$, $i = 1, 2, \ldots, n$, where n is the number of target points in the target point cloud;
Step 10.2: find the corresponding point set $q_i \in Q$ in the source point cloud Q of the face image such that $\lVert q_i - p_i \rVert = \min$, where min is a set minimum difference value;
Step 10.3: calculate the rotation matrix R and translation matrix t that minimize the error function;
Step 10.4: apply the rotation and translation obtained in step 10.3 to $p_i$ to obtain the new corresponding point set $p'_i = \{Rp_i + t,\; p_i \in P\}$;
Step 10.5: calculate the average distance d between $p'_i$ and the corresponding point set $q_i$:

$$d = \frac{1}{n}\sum_{i=1}^{n} \lVert p'_i - q_i \rVert^{2}$$

Step 10.6: if d is smaller than a given threshold, or the number of iterations exceeds the preset maximum, stop iterating and return d as the final face-model similarity; otherwise return to step 10.2 until the convergence condition is met; the threshold is set to a preset small value by default, and the maximum number of iterations is set to 100.
2. The face recognition method based on SFM reconstruction according to claim 1, wherein in step 1 the internal and external parameters of the camera comprise camera intrinsic parameters and camera extrinsic parameters; the intrinsic parameters are related to the camera's own characteristics and include the focal length and pixel size of the camera, and the extrinsic parameters are parameters in the world coordinate system and include the position and rotation direction of the camera.
3. The face recognition method based on SFM reconstruction according to claim 1, wherein in step 1 the method of collecting, as the face image sequence, the images taken by the camera at each angle interval is as follows: within a time period, the camera takes one face image for every N degrees of rotation relative to the frontal face, and all the face images form the face image sequence.
4. The face recognition method based on SFM reconstruction according to claim 1, wherein in step 2 the first face image and the second face image are images taken by the camera at two different angles.
5. A face recognition system based on SFM reconstruction, the system comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor executing the computer program to run in the units of the following system:
the image sequence acquisition unit is used for shooting a face image through a common camera, calibrating the camera and acquiring internal and external parameters of the camera; the camera shoots a human face at an angle interval every time and acquires an image sequence of an image as a human face image sequence; taking the first two images in the human face image sequence as a first image and a second image;
the model image preprocessing unit is used for inputting a first face image to perform denoising, image downsampling and image blurring; adjusting the gray scale range of the image by using gamma conversion on the second face image;
the image feature registration unit is used for detecting feature points in the first face image and the second face image with the SURF algorithm, computing SIFT descriptors for the detected feature points and then performing stereo matching to obtain registration points, and processing the obtained registration points by triangle similarity so that their pixel coordinates become the same;
a nearest neighbor matching unit for matching feature points of all two adjacent images in the face image sequence by using a nearest neighbor and next nearest neighbor ratio method and storing all the matching points;
the model image projection unit is used for setting the camera coordinates of the first human face image as standard coordinates, and calculating the projection matrixes of the first human face image and the second human face image through the internal parameters, the rotation matrix parameters and the translational vector parameters;
the model triangularization reconstruction unit is used for carrying out triangularization reconstruction by using the projection matrix and the obtained matching points to obtain three-dimensional coordinates of space points of the first face image and the second face image;
the three-dimensional point cloud construction unit is used for sequentially taking each image in the face image sequence as a third face image, solving a translation vector and a rotation matrix through three-dimensional coordinates, and performing three-dimensional reconstruction on the translation vector and the rotation matrix and matching points to obtain three-dimensional point clouds of the matching points of the first two face images of the third face image;
the reconstruction three-dimensional point cloud unit is used for reconstructing a new three-dimensional point cloud through the internal parameters, the matching points of the third face image, the first face image and the second face image, the rotation matrix parameters and the translation vector parameters;
the point cloud merging completion unit is used for merging the three-dimensional point clouds until the three-dimensional point clouds of all the face image sequences have been merged with those of the first face image and the second face image;
the face recognition unit is used for carrying out face recognition through an ICP (Iterative Closest Point) algorithm;
the method for performing triangularization reconstruction by using the projection matrix and the obtained matching points to obtain the three-dimensional coordinates of the space points of the first face image and the second face image comprises the following steps:
let the abscissa be x, three-dimensional point M = [ x, y, z =] T Is recorded as a homogeneous coordinate
Figure FDA0003794835840000031
Corresponding image point m = [ u, v =] T Is recorded as a homogeneous coordinate
Figure FDA0003794835840000032
Satisfy the formula
Figure FDA0003794835840000033
Wherein
Figure FDA0003794835840000041
Alpha, beta is focal length and pixel aspect ratio, gamma is radial distortion parameter, u 0 Are all horizontal coordinates, v 0 Are all vertical coordinates, λ is a constant; the elements in the matrix A are internal parameters of the camera, and omega is a three-dimensional coordinate transformation matrix which comprises a rotation matrix R and a displacement matrix t;
the method for carrying out face recognition through the ICP algorithm comprises the following steps:
Step 10.1: obtain the target point cloud P from the input face image, with point set $p_i \in P$, $i = 1, 2, \ldots, n$, where n is the number of target points in the target point cloud;
Step 10.2: find the corresponding point set $q_i \in Q$ in the source point cloud Q of the face image such that $\lVert q_i - p_i \rVert = \min$, where min is a set minimum difference value;
Step 10.3: calculate the rotation matrix R and translation matrix t that minimize the error function;
Step 10.4: apply the rotation and translation obtained in step 10.3 to $p_i$ to obtain the new corresponding point set $p'_i = \{Rp_i + t,\; p_i \in P\}$;
Step 10.5: calculate the average distance d between $p'_i$ and the corresponding point set $q_i$:

$$d = \frac{1}{n}\sum_{i=1}^{n} \lVert p'_i - q_i \rVert^{2}$$

Step 10.6: if d is smaller than a given threshold, or the number of iterations exceeds the preset maximum, stop iterating and return d as the final face-model similarity; otherwise return to step 10.2 until the convergence condition is met; the threshold is set to a preset small value by default, and the maximum number of iterations is set to 100.
CN201911357292.3A 2019-12-25 2019-12-25 Face recognition method and system based on SFM reconstruction Active CN113034345B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911357292.3A CN113034345B (en) 2019-12-25 2019-12-25 Face recognition method and system based on SFM reconstruction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911357292.3A CN113034345B (en) 2019-12-25 2019-12-25 Face recognition method and system based on SFM reconstruction

Publications (2)

Publication Number Publication Date
CN113034345A CN113034345A (en) 2021-06-25
CN113034345B true CN113034345B (en) 2023-02-28

Family

ID=76458344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911357292.3A Active CN113034345B (en) 2019-12-25 2019-12-25 Face recognition method and system based on SFM reconstruction

Country Status (1)

Country Link
CN (1) CN113034345B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113505717B (en) * 2021-07-17 2022-05-31 桂林理工大学 Online passing system based on face and facial feature recognition technology

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663810A (en) * 2012-03-09 2012-09-12 北京航空航天大学 Full-automatic modeling approach of three dimensional faces based on phase deviation scanning
WO2017026839A1 (en) * 2015-08-12 2017-02-16 트라이큐빅스 인크. 3d face model obtaining method and device using portable camera
CN109919876A (en) * 2019-03-11 2019-06-21 四川川大智胜软件股份有限公司 A kind of true face model building of three-dimensional and three-dimensional true face photographic system
WO2019196308A1 (en) * 2018-04-09 2019-10-17 平安科技(深圳)有限公司 Device and method for generating face recognition model, and computer-readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170032565A1 (en) * 2015-07-13 2017-02-02 Shenzhen University Three-dimensional facial reconstruction method and system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663810A (en) * 2012-03-09 2012-09-12 北京航空航天大学 Full-automatic modeling approach of three dimensional faces based on phase deviation scanning
WO2017026839A1 (en) * 2015-08-12 2017-02-16 트라이큐빅스 인크. 3d face model obtaining method and device using portable camera
WO2019196308A1 (en) * 2018-04-09 2019-10-17 平安科技(深圳)有限公司 Device and method for generating face recognition model, and computer-readable storage medium
CN109919876A (en) * 2019-03-11 2019-06-21 四川川大智胜软件股份有限公司 A kind of true face model building of three-dimensional and three-dimensional true face photographic system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Three-dimensional face model reconstruction based on the SFM algorithm; Wang Kun et al.; 《计算机学报》 (Chinese Journal of Computers); 2005-06-12 (No. 06); full text *
Research on a three-dimensional reconstruction algorithm based on improved SFM; Jiang Huaqiang et al.; 《计算机技术与应用》; 2019-02-06; Vol. 45 (No. 2); pages 2-3 of the main text *
Face point-cloud registration based on an improved SURF algorithm; Guo Yu et al.; 《光学技术》 (Optical Technique); 2018-05-15; Vol. 44 (No. 03); pages 2-4 of the main text *

Also Published As

Publication number Publication date
CN113034345A (en) 2021-06-25

Similar Documents

Publication Publication Date Title
CN111145238B (en) Three-dimensional reconstruction method and device for monocular endoscopic image and terminal equipment
CN110264509B (en) Method, apparatus, and storage medium for determining pose of image capturing device
CN109313799B (en) Image processing method and apparatus
CN110998659B (en) Image processing system, image processing method, and program
US10410089B2 (en) Training assistance using synthetic images
Lei et al. Depth map super-resolution considering view synthesis quality
US11348267B2 (en) Method and apparatus for generating a three-dimensional model
JP5739409B2 (en) Method for determining the relative position of a first image device and a second image device and these devices
US8452081B2 (en) Forming 3D models using multiple images
KR100793838B1 (en) Appratus for findinng the motion of camera, system and method for supporting augmented reality in ocean scene using the appratus
EP3598385B1 (en) Face deblurring method and device
CN111080776B (en) Human body action three-dimensional data acquisition and reproduction processing method and system
JP6515039B2 (en) Program, apparatus and method for calculating a normal vector of a planar object to be reflected in a continuous captured image
US20120162387A1 (en) Imaging parameter acquisition apparatus, imaging parameter acquisition method and storage medium
GB2567245A (en) Methods and apparatuses for depth rectification processing
CN111325828B (en) Three-dimensional face acquisition method and device based on three-dimensional camera
CN116012432A (en) Stereoscopic panoramic image generation method and device and computer equipment
CN113034345B (en) Face recognition method and system based on SFM reconstruction
JP6730695B2 (en) A method for reconstructing 3D multi-view by feature tracking and model registration.
JP6086491B2 (en) Image processing apparatus and database construction apparatus thereof
KR101673144B1 (en) Stereoscopic image registration method based on a partial linear method
KR20200122870A (en) Acquisition method for high quality 3-dimension spatial information using photogrammetry
CN116051736A (en) Three-dimensional reconstruction method, device, edge equipment and storage medium
Rotman et al. A depth restoration occlusionless temporal dataset
CN108426566B (en) Mobile robot positioning method based on multiple cameras

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant