CN109215061B - Face pore tracking method and system - Google Patents

Face pore tracking method and system

Info

Publication number
CN109215061B
CN109215061B (application CN201811313361.6A)
Authority
CN
China
Prior art keywords
dimensional
transformation matrix
block set
superpixel
pore
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811313361.6A
Other languages
Chinese (zh)
Other versions
CN109215061A (en)
Inventor
冯省城
李东
王颖
王永华
庄洪生
汪生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN201811313361.6A priority Critical patent/CN109215061B/en
Publication of CN109215061A publication Critical patent/CN109215061A/en
Application granted granted Critical
Publication of CN109215061B publication Critical patent/CN109215061B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/269Analysis of motion using gradient-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a method for tracking human face pores, comprising the following steps: acquiring two continuous frames of an expression change video; calculating the two frames with a dense optical flow algorithm to obtain an optical flow graph; performing SLIC superpixel segmentation on the two frames to obtain a two-dimensional superpixel block set; processing the two-dimensional superpixel block set with the optical flow graph to obtain a three-dimensional superpixel block set and a corresponding transformation matrix; processing the three-dimensional superpixel block set and the transformation matrix with a combined optimization function to obtain an optimized transformation matrix; and obtaining the three-dimensional change vector corresponding to each pore feature point from the coordinates of each pore feature point and the optimized transformation matrix. The method thus obtains an optimized transformation matrix for the motion trajectory of the pore feature points, i.e., accurate motion trajectory parameters, and thereby improves the accuracy of pore feature point tracking. The application also provides a face pore tracking system, a computer, and a computer-readable storage medium, which share the same beneficial effects.

Description

Face pore tracking method and system
Technical Field
The present application relates to the field of pore tracking, and in particular, to a method, a system, a computer, and a computer-readable storage medium for tracking human face pores.
Background
Current face pore tracking techniques work as follows: SIFT feature points or similar are extracted from a two-dimensional image as pore feature points; the RANSAC algorithm then searches between two adjacent frames for the pore feature points with the highest matching degree, which serve as optimal matching points; finally, the change of the optimal matching points between the two frames is taken as the motion trajectory of the pore feature points. However, the motion trajectory parameters acquired in this way are not accurate enough, so the accuracy of pore feature point tracking is low.
Therefore, how to acquire the parameters of the motion trajectory of the pore feature points more accurately, and thereby improve the accuracy of pore feature point tracking, is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide a method, a system, a computer, and a computer-readable storage medium for tracking human face pores, which can acquire the parameters of the pore feature point motion trajectory more accurately and thereby improve the accuracy of pore feature point tracking.
In order to solve the above technical problem, the present application provides a method for tracking human face pores, including:
acquiring two continuous frames of pictures of an expression change video;
calculating the two frames of pictures by using a dense optical flow algorithm to obtain an optical flow graph;
performing SLIC superpixel segmentation on the two frames of pictures to obtain a two-dimensional superpixel block set;
processing the two-dimensional superpixel block set by using the optical flow graph to obtain a three-dimensional superpixel block set and a corresponding transformation matrix;
processing the three-dimensional superpixel block set and the transformation matrix by using a combined optimization function to obtain an optimized transformation matrix;
and obtaining the three-dimensional change vector corresponding to each pore feature point by using the coordinates of each pore feature point and the optimized transformation matrix.
Preferably, the processing the two-dimensional superpixel block set by using the optical flow graph to obtain a three-dimensional superpixel block set and a corresponding transformation matrix includes:
mapping all the two-dimensional superpixel blocks in the two-dimensional superpixel block set to a three-dimensional space by using the optical flow graph to obtain a three-dimensional superpixel block set;
and combining the rotation matrix and the translation matrix of each three-dimensional superpixel block in the three-dimensional superpixel block set into a corresponding transformation matrix.
Preferably, the obtaining the three-dimensional change vector corresponding to each pore feature point by using the coordinates of each pore feature point and the optimized transformation matrix includes:
determining the midpoints of all three-dimensional superpixel blocks in the three-dimensional superpixel block set as the pore feature points;
and obtaining the corresponding three-dimensional change vectors by using the coordinates of the pore feature points and the optimized transformation matrix.
Preferably, the processing the three-dimensional superpixel block set and the transformation matrix by using the combined optimization function to obtain an optimized transformation matrix includes:
adding the local rigidity optimization function and the reprojection optimization function to obtain the combined optimization function;
and substituting the three-dimensional superpixel block set and the transformation matrix into the combined optimization function and iterating to obtain the optimized transformation matrix.
The present application further provides a face pore tracking system, comprising:
the obtaining module is used for obtaining two continuous frames of pictures of the expression change video;
the dense optical flow algorithm module is used for calculating the two frames of pictures by using a dense optical flow algorithm to obtain an optical flow graph;
the SLIC superpixel segmentation module is used for performing SLIC superpixel segmentation on the two frames of pictures to obtain a two-dimensional superpixel block set;
the processing module is used for processing the two-dimensional superpixel block set by using the optical flow graph to obtain a three-dimensional superpixel block set and a corresponding transformation matrix;
the combined optimization function processing module is used for processing the three-dimensional superpixel block set and the transformation matrix by using a combined optimization function to obtain an optimized transformation matrix;
and the three-dimensional change vector acquisition module is used for obtaining the three-dimensional change vectors corresponding to the pore feature points by using the coordinates of the pore feature points and the optimized transformation matrix.
Preferably, the processing module includes:
a mapping unit, configured to map all the two-dimensional superpixel blocks in the two-dimensional superpixel block set to a three-dimensional space by using the optical flow graph, so as to obtain a three-dimensional superpixel block set;
and a synthesis unit, configured to combine the rotation matrix and the translation matrix of each three-dimensional superpixel block in the three-dimensional superpixel block set into a corresponding transformation matrix.
Preferably, the three-dimensional change vector obtaining module includes:
a pore feature point determining unit, configured to determine the midpoints of all three-dimensional superpixel blocks in the three-dimensional superpixel block set as the pore feature points;
and a three-dimensional change vector acquisition unit, configured to obtain the corresponding three-dimensional change vectors by using the coordinates of the pore feature points and the optimized transformation matrix.
Preferably, the combined optimization function processing module includes:
a superposition unit, configured to add the local rigidity optimization function and the reprojection optimization function to obtain the combined optimization function;
and an iteration unit, configured to substitute the three-dimensional superpixel block set and the transformation matrix into the combined optimization function and iterate to obtain the optimized transformation matrix.
The present application further provides a computer, comprising:
a memory and a processor; wherein the memory is used for storing a computer program, and the processor is used for implementing the steps of the human face pore tracking method when executing the computer program.
The present application further provides a computer-readable storage medium, which stores a computer program, and the computer program, when executed by a processor, implements the steps of the above-mentioned face pore tracking method.
The application provides a method for tracking human face pores, comprising the following steps: acquiring two continuous frames of an expression change video; calculating the two frames with a dense optical flow algorithm to obtain an optical flow graph; performing SLIC superpixel segmentation on the two frames to obtain a two-dimensional superpixel block set; processing the two-dimensional superpixel block set with the optical flow graph to obtain a three-dimensional superpixel block set and a corresponding transformation matrix; processing the three-dimensional superpixel block set and the transformation matrix with a combined optimization function to obtain an optimized transformation matrix; and obtaining the three-dimensional change vector corresponding to each pore feature point from the coordinates of each pore feature point and the optimized transformation matrix.
The method first obtains an optical flow graph and a two-dimensional superpixel block set from two continuous frames of the acquired expression change video, then processes the two-dimensional superpixel block set with the optical flow graph to obtain a three-dimensional superpixel block set and a corresponding transformation matrix, next processes the three-dimensional superpixel block set and the transformation matrix with a combined optimization function to obtain an optimized transformation matrix, and finally obtains the three-dimensional change vectors corresponding to the pore feature points from their coordinates and the optimized transformation matrix. The combined optimization function thus yields an optimized transformation matrix for the motion trajectory of the pore feature points, i.e., accurate motion trajectory parameters, improving the accuracy of pore feature point tracking. The application also provides a face pore tracking system, a computer, and a computer-readable storage medium, which share the same beneficial effects and are not described again here.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a flowchart of a method for tracking human face pores according to an embodiment of the present application;
fig. 2 is a block diagram of a structure of a face pore tracking system according to an embodiment of the present application.
Detailed Description
The core of the application is to provide a human face pore tracking method that can acquire the parameters of the pore feature point motion trajectory more accurately and thereby improve the accuracy of pore feature point tracking. Further cores of the application are a human face pore tracking system, a computer, and a computer-readable storage medium.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Current face pore tracking techniques work as follows: SIFT feature points or similar are extracted from a two-dimensional image as pore feature points; the RANSAC algorithm then searches between two adjacent frames for the pore feature points with the highest matching degree, which serve as optimal matching points; finally, the change of the optimal matching points between the two frames is taken as the motion trajectory of the pore feature points. However, the motion trajectory parameters acquired in this way are not accurate enough, so the accuracy of pore feature point tracking is low. The embodiment of the application instead uses a combined optimization function to obtain an optimized transformation matrix for the pore feature point motion trajectory, i.e., accurate motion trajectory parameters, thereby improving the accuracy of pore feature point tracking. Referring specifically to fig. 1, fig. 1 is a flowchart of a face pore tracking method according to an embodiment of the present application; the method specifically includes:
s101, acquiring two continuous frames of pictures of an expression change video;
in the embodiment of the application, two continuous frames of pictures of the expression change video are obtained first, the type of the expression change video is not specifically limited, and a person skilled in the art should make corresponding settings according to actual conditions, wherein the expression change video may be a video with only expression change or a section of common video, but a section of the common video must be a video with expression change. Further, neither the duration nor the frame rate of the expression change video is specifically limited, and those skilled in the art should make corresponding settings according to actual situations, for example, the frame rate of the video is 30 hz. The time interval between two consecutive pictures depends on the frame rate of the video, and it is known that the frame rate of the video is not specifically limited, and accordingly, the time interval is not specifically limited. For example, when the frame rate of the video is 30 hz, the time interval is 1/30 seconds.
S102, calculating two frames of pictures by using a dense optical flow algorithm to obtain an optical flow graph;
after two continuous frames of pictures of the expression change video are obtained, a dense optical flow algorithm is used for calculating the two frames of pictures to obtain an optical flow graph. The dense optical flow is an image registration method for performing point-by-point matching on an image, and is different from a sparse optical flow which only aims at a plurality of feature points on the image, and the dense optical flow calculates the offset of all points on the image so as to form a dense optical flow graph. The precision requirement of the embodiment of the application on the dense optical flow is not very high, and the requirement on the precision can be met by using a corresponding library function in OpenCv, wherein the preliminary parameter is only used for initializing the algorithm of the embodiment of the application.
S103, SLIC superpixel segmentation is carried out on the two frames of pictures to obtain a two-dimensional superpixel block set;
after two continuous frames of pictures of the expression change video are obtained, SLIC superpixel segmentation is carried out on the two frames of pictures to obtain a two-dimensional superpixel block set. The two frames of pictures are respectively subjected to SLIC superpixel segmentation, namely, the two pictures are segmented into N small patches which are like a jigsaw puzzle and are of a block, but the patches segmented by the SLIC superpixel segmentation are irregular patches. The size of N is not particularly limited, and should be set by those skilled in the art according to actual circumstances. Because the deformation of the face surface is not large in one frame time of the change process of the face expression, the face expression can be regarded as small blocks on the face to perform rotation and translation motion. By SLIC superpixel segmentation, the human face in two frames of pictures is segmented into superpixel blocks, the superpixel blocks almost perform rigid-like motion, namely the segmented superpixel blocks between two frames hardly deform. And obtaining two-dimensional super pixel block sets s and s' corresponding to the two frames of pictures after the two frames of pictures are divided. Wherein s ═ s1,s2,...,si,...,sN},s'={s'1,s'2,...,s'i,...,s'N},siAnd s'iComprising two parameters xaiAnd xbi. Wherein, { xbi=[ubi,vbi,1]T|b=1,...,Bi-this parameter represents the boundary point of a superpixel block, which is used to describe the position, shape and size of the superpixel block. x is the number ofai=[Xai,Yai]TIs the midpoint coordinate of each two-dimensional superpixel block.
S104, processing the two-dimensional superpixel block set by using the optical flow graph to obtain a three-dimensional superpixel block set and a corresponding transformation matrix;
In the embodiment of the present application, after the optical flow graph and the two-dimensional superpixel block set are obtained in steps S102 and S103, the optical flow graph is used to process the two-dimensional superpixel block set to obtain a three-dimensional superpixel block set and a corresponding transformation matrix. This generally includes: mapping all the two-dimensional superpixel blocks in the two-dimensional superpixel block set to a three-dimensional space by using the optical flow graph to obtain a three-dimensional superpixel block set; and combining the rotation matrix and the translation matrix of each three-dimensional superpixel block in the three-dimensional superpixel block set into a corresponding transformation matrix.
As described in step S103, the superpixel blocks move almost rigidly, so each superpixel block has a corresponding rotation matrix R_i and translation matrix t_i. Step S102 provides, via the dense optical flow graph, the matching points between the superpixel blocks of the two frames. Each superpixel block can therefore be regarded as a single imaging plane; from the matching points, the rotation matrix and translation matrix of each superpixel block are estimated by a conventional multi-view geometry method, and the two-dimensional superpixel blocks are mapped into three-dimensional space, yielding the three-dimensional coordinate sets S and S' of the superpixel blocks, where S = {S_1, S_2, ..., S_i, ..., S_N} and S' = {S'_1, S'_2, ..., S'_i, ..., S'_N}. Each S_i and S'_i contains four parameters: X_ai, X_bi, n_i, and d_i. The set {X_bi = [X_bi, Y_bi, Z_bi, 1]^T | b = 1, ..., B_i} contains the boundary points of a three-dimensional superpixel block, which describe its position, shape, and size; X_ai = [X_ai, Y_ai, Z_ai]^T is the midpoint coordinate of each three-dimensional superpixel block; n_i is the normal of the block; d_i is the depth of the points in the block.
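The patent does not spell out the multi-view geometry method. As one standard stand-in, the Kabsch/Procrustes algorithm recovers a block's rotation R_i and translation t_i from matched 3-D point sets by SVD; the sketch below works under that assumption, with made-up point data:

```python
import numpy as np

def estimate_rigid_transform(P, Q):
    """Kabsch/Procrustes: least-squares R, t with Q ~ R @ P + t,
    for matched 3xN point sets."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T                     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so that det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Matched points of one block under a known rotation and translation.
rng = np.random.default_rng(1)
P = rng.standard_normal((3, 12))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1.0]])
t_true = np.array([[0.5], [-1.0], [2.0]])
Q = R_true @ P + t_true
R_est, t_est = estimate_rigid_transform(P, Q)
```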
The rotation matrix and translation matrix of each three-dimensional superpixel block in the set are combined into the corresponding transformation matrix M_i:

M_i = [ R_i    λ_i·t_i ]
      [ 0 0 0     1    ]

where λ_i is an unknown scale factor, one of the optimization parameters, with initial value 1/N, and M_i represents the transformation describing the motion of the pixels of the i-th three-dimensional superpixel block from the previous frame to the next frame.
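Assuming this reading of M_i (the scale λ_i multiplying only the translation, consistent with the per-point change formula V_i = (R_i − I)·X_ai + λ_i·t_i used for the feature points), the matrix can be assembled and applied as follows; all values are illustrative:

```python
import numpy as np

def make_transformation_matrix(R, t, lam):
    """Homogeneous transform M = [[R, lam*t], [0 0 0 1]]: applying M to
    [X; 1] gives R @ X + lam * t (one plausible reading of M_i)."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = lam * t
    return M

R = np.eye(3)                      # identity rotation for illustration
t = np.array([1.0, 2.0, 3.0])
lam = 0.25                         # initial value 1/N, with N = 4 blocks
M = make_transformation_matrix(R, t, lam)
X = np.array([1.0, 1.0, 1.0])
X_next = (M @ np.append(X, 1.0))[:3]   # = R @ X + lam * t
```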
S105, processing the three-dimensional superpixel block set and the transformation matrix by using a combined optimization function to obtain an optimized transformation matrix;
after the two-dimensional super-pixel block set is processed by the optical flow graph to respectively obtain the three-dimensional super-pixel block set and the corresponding transformation matrix, the embodiment of the application processes the three-dimensional super-pixel block set and the transformation matrix by the combined optimization function to obtain the optimized transformation matrix, which generally comprises: adding the local rigidity optimization function and the reprojection optimization function to obtain a combined optimization function; and substituting the three-dimensional super pixel block set and the transformation matrix into a combined optimization function to iterate to obtain an optimized transformation matrix.
Usually, a K-NN graph is established with the K-nearest-neighbor algorithm according to the Euclidean distances between the midpoints of the three-dimensional superpixel blocks; that is, the superpixel blocks in three-dimensional space are grouped by the distances between their midpoints, nearby blocks are assigned to the same group, and the nodes are connected. Suppose the superpixel blocks are divided into n groups in total, with n_i blocks in the i-th group; neither n nor n_i is specifically limited, and both should be set according to the actual situation.
As noted in step S103, the superpixel blocks move almost, but not exactly, rigidly. To constrain the local motion of the superpixel blocks to be rigid, i.e., only rotation and translation from the previous frame to the next, the embodiment provides a local rigidity optimization function for enforcing the rigid-motion property of the superpixel blocks:
[Equation: local rigidity optimization function E_local]
where w_1(x_ai, x_ak) = w_2(x_ai, x_ak) = exp(−3·||x_ai − x_ak||). The first term of the sum makes the motion of a superpixel block across the two frames smooth, and the second term keeps the distances between K-nearest-neighbor nodes unchanged across the two frames.
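The K-NN grouping and the pair weight w(x_ai, x_ak) = exp(−3·||x_ai − x_ak||) from the text can be sketched directly; the midpoint values below are made up for illustration:

```python
import numpy as np

def knn_neighbors(midpoints, k=2):
    """Indices of each superpixel midpoint's k nearest neighbors
    (Euclidean distance, excluding the point itself)."""
    d = np.linalg.norm(midpoints[:, None, :] - midpoints[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbor
    return np.argsort(d, axis=1)[:, :k]

def rigidity_weight(xi, xk):
    """Pair weight from the text: w = exp(-3 * ||x_ai - x_ak||)."""
    return np.exp(-3.0 * np.linalg.norm(xi - xk))

# Midpoints of five superpixel blocks forming two tight clusters.
mids = np.array([[0.0, 0, 0], [0.1, 0, 0], [0.2, 0, 0],
                 [5.0, 5, 5], [5.1, 5, 5]])
nbrs = knn_neighbors(mids, k=1)
w01 = rigidity_weight(mids[0], mids[1])   # close pair -> weight near 1
w03 = rigidity_weight(mids[0], mids[3])   # distant pair -> weight near 0
```

The rapidly decaying weight means only nearby blocks constrain each other, which matches the grouping-by-distance description above.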
Since the accuracy of the optical flow graph obtained in step S102 is limited, and step S104 uses this optical flow graph to map the two-dimensional plane into three-dimensional space, the spatial coordinates of the superpixel blocks carry a certain error. The embodiment therefore provides a reprojection optimization function, which constrains the rotation matrix, translation matrix, and normal of each superpixel block:
[Equation: reprojection optimization function E_reproj]
where |s_i| denotes the number of pixels of the i-th two-dimensional superpixel block, x_i^j denotes the j-th point of the i-th two-dimensional superpixel block, and K is the camera intrinsic matrix, which contains the focal length parameters and the camera principal point coordinates.
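The exact reprojection formula is not reproduced here, so the sketch below shows only the standard pinhole building blocks such a term sums over: projecting a 3-D point through an intrinsic matrix K (the focal lengths and principal point below are hypothetical) and measuring the squared pixel error against an observed location:

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection: p = K @ (R @ X + t), then divide by depth."""
    p = K @ (R @ X + t)
    return p[:2] / p[2]

def reprojection_error(K, R, t, X, observed):
    """Squared pixel distance between the projection of 3-D point X and
    its observed 2-D location (the quantity a reprojection term sums)."""
    return float(((project(K, R, t, X) - observed) ** 2).sum())

# Hypothetical intrinsics: focal lengths 500 px, principal point (320, 240).
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1.0]])
R, t = np.eye(3), np.zeros(3)
X = np.array([0.2, -0.1, 2.0])               # a point 2 m in front
x = project(K, R, t, X)                       # pixel coordinates
err = reprojection_error(K, R, t, X, x)       # zero for a perfect model
```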
In the embodiment of the application, the local rigidity optimization function and the reprojection optimization function are combined, i.e., added together, to obtain the combined optimization function:

E = E_local + E_reproj

Through n iterations, when E reaches its minimum, the optimal λ_i, d_i, R_i, and t_i are obtained. Substituting these optimized values into M_i yields the optimized transformation matrix between the previous and next frames at the midpoint of each superpixel block.
S106, obtaining the three-dimensional change vectors corresponding to the pore feature points by using the coordinates of the pore feature points and the optimized transformation matrix.
After the optimized transformation matrix has been obtained by processing the three-dimensional superpixel block set and the transformation matrix with the combined optimization function, the three-dimensional change vectors corresponding to the pore feature points are obtained from the coordinates of the pore feature points and the optimized transformation matrix. This generally includes: determining the midpoints of all three-dimensional superpixel blocks in the three-dimensional superpixel block set as the pore feature points; and obtaining the corresponding three-dimensional change vectors from the coordinates of the pore feature points and the optimized transformation matrix. The midpoint coordinates X_ai of all superpixel blocks were obtained in step S104, and the optimized transformation matrix M_i in step S105; the three-dimensional change vector of each pore feature point between the two frames is then V_i = (R_i − I)·X_ai + λ_i·t_i. Repeating the above steps until the whole expression change video has been read yields, for every frame of the video, the three-dimensional change vector V_i^j of the pore feature points (the superscript j denoting frame j).
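The change-vector formula can be checked directly in code; the sketch below (illustrative values throughout) also confirms that V equals the transformed midpoint minus the original midpoint, i.e. (R @ X + λ·t) − X:

```python
import numpy as np

def change_vector(R, t, lam, X):
    """Three-dimensional change vector of a pore feature point:
    V = (R - I) @ X + lam * t, i.e. (R @ X + lam * t) - X."""
    return (R - np.eye(3)) @ X + lam * t

# 90-degree rotation about the z-axis, plus a scaled translation.
R = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1.0]])
t = np.array([1.0, 0.0, 0.0])
lam, X = 0.5, np.array([1.0, 0.0, 0.0])
V = change_vector(R, t, lam, X)   # new midpoint minus old midpoint
```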
The method first obtains an optical flow graph and a two-dimensional superpixel block set from two continuous frames of the acquired expression change video, then processes the two-dimensional superpixel block set with the optical flow graph to obtain a three-dimensional superpixel block set and a corresponding transformation matrix, next processes these with a combined optimization function to obtain an optimized transformation matrix, and finally obtains the three-dimensional change vectors corresponding to the pore feature points from their coordinates and the optimized transformation matrix. The combined optimization function thus yields an optimized transformation matrix for the pore feature point motion trajectory, i.e., accurate motion trajectory parameters, improving the accuracy of pore feature point tracking. In addition, by taking the center points of the superpixel blocks as pore feature points instead of common SIFT or SURF feature points, the method obtains more pore feature point information, avoids the mismatching to which common feature points are prone on a deforming model, realizes feature point tracking in three-dimensional space, and improves the completeness of the pore feature point tracking information.
The method is mainly applicable to human mood recognition and VR (virtual reality). For example, in mood recognition, the motion trajectories of the face pore feature points can be recognized by the algorithm, and judgment rules applied to the three-dimensional trajectories of a large number of pore feature points to determine the person's current mood. In VR, the expression changes of a face can be captured, the trajectories of the pore feature points tracked by the algorithm and mapped onto the face of a virtual character, so that the virtual face changes synchronously with the real facial expression.
In addition, tracking pore feature points in three-dimensional space could also be achieved by combining a conventional three-dimensional reconstruction method with a pore-feature-point tracking method for ordinary two-dimensional images. However, conventional three-dimensional reconstruction produces large errors when applied to dynamic scenes, and is therefore relatively difficult to implement.
In the following, a face pore tracking system, a computer and a computer readable storage medium provided by the embodiments of the present application are introduced, and the face pore tracking system, the computer and the computer readable storage medium described below and the face pore tracking method described above may be referred to in correspondence with each other.
Referring to fig. 2, fig. 2 is a block diagram illustrating a structure of a face pore tracking system according to an embodiment of the present disclosure; the face pore tracking system comprises:
an obtaining module 201, configured to obtain two continuous frames of pictures of an expression change video;
the dense optical flow algorithm module 202 is configured to calculate two frames of pictures by using a dense optical flow algorithm to obtain an optical flow graph;
the SLIC super-pixel segmentation module 203 is used for performing SLIC super-pixel segmentation on the two frames of pictures to obtain a two-dimensional super-pixel block set;
the processing module 204 is configured to process the two-dimensional super-pixel block set by using a light flow graph to obtain a three-dimensional super-pixel block set and a corresponding transformation matrix respectively;
a combined optimization function processing module 205, configured to process the three-dimensional super-pixel block set and the transformation matrix by using a combined optimization function to obtain an optimized transformation matrix;
and a three-dimensional change vector obtaining module 206, configured to obtain three-dimensional change vectors corresponding to the pore feature points by using the coordinates of the pore feature points and the optimized transformation matrix.
Based on the above embodiments, the processing module 204 generally includes:
the mapping unit is used for mapping all two-dimensional superpixel blocks in the two-dimensional superpixel block set to three-dimensional space by using the optical flow map to obtain the three-dimensional superpixel block set;
and the synthesis unit is used for combining the rotation matrix and the translation matrix of each three-dimensional superpixel block in the three-dimensional superpixel block set into a corresponding transformation matrix.
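The synthesis unit's job — packing a rotation matrix and a translation into one transformation matrix — is a standard homogeneous-coordinates construction. A minimal numpy sketch follows; the 4×4 homogeneous layout is an assumption, since the patent does not fix a matrix convention:

```python
import numpy as np

def compose_transform(R, t):
    """Combine a 3x3 rotation matrix R and a length-3 translation t into a
    single 4x4 homogeneous transformation matrix M, so that
    M @ [X, Y, Z, 1]^T first rotates and then translates the point."""
    M = np.eye(4)
    M[:3, :3] = np.asarray(R)
    M[:3, 3] = np.asarray(t)
    return M
```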
Based on the above embodiment, the three-dimensional change vector obtaining module 206 generally includes:
a pore feature point determining unit, configured to determine the midpoints of all three-dimensional superpixel blocks in the three-dimensional superpixel block set as the pore feature points;
and a three-dimensional change vector acquisition unit, configured to respectively obtain the corresponding three-dimensional change vectors by using the coordinates of the pore feature points and the optimized transformation matrix.
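Once a pore feature point's superpixel has an optimized transformation matrix, the three-dimensional change vector is simply the displacement of the point's coordinates under that transform. A sketch, assuming a 4×4 homogeneous matrix convention (the patent itself does not specify one):

```python
import numpy as np

def change_vector(point, M_opt):
    """Three-dimensional change vector of a pore feature point: the
    difference between the point transformed by the optimized matrix
    M_opt (4x4 homogeneous) and its original position."""
    p = np.append(np.asarray(point, dtype=float), 1.0)  # homogeneous coords
    p_new = M_opt @ p
    return p_new[:3] - p[:3]
```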
Based on the above embodiments, the combinatorial optimization function processing module 205 generally includes:
the superposition unit is used for adding the local rigidity optimization function and the reprojection optimization function to obtain a combined optimization function;
and the iteration unit is used for substituting the three-dimensional super pixel block set and the transformation matrix into a combined optimization function to carry out iteration to obtain an optimized transformation matrix.
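The superposition-then-iterate structure can be illustrated with a toy combined objective: a data (reprojection-like) term that pulls each block toward its observed position, plus a rigidity-like term that keeps neighboring blocks moving together, minimized by plain gradient descent over per-block translations. This is a hedged sketch of the iteration idea only — the patent's actual terms involve the camera intrinsics and K-nearest-neighbor weights, which are omitted here, and the translation-only parameterization is an assumption:

```python
import numpy as np

def optimize_translations(X, Y, lam=1.0, lr=0.05, iters=500):
    """Toy combined optimization: find per-block translations T[i] so that
    X[i] + T[i] matches the observed targets Y[i] (data term) while
    adjacent blocks keep similar translations (rigidity term):
        E(T) = sum_i ||X_i + T_i - Y_i||^2
             + lam * sum_i ||T_i - T_{i+1}||^2
    minimized by gradient descent (a stand-in for the patent's iteration)."""
    T = np.zeros_like(X)
    for _ in range(iters):
        grad = 2.0 * (X + T - Y)          # gradient of the data term
        diff = T[:-1] - T[1:]             # neighbor translation differences
        grad[:-1] += 2.0 * lam * diff     # rigidity gradient w.r.t. T_i
        grad[1:] -= 2.0 * lam * diff      # rigidity gradient w.r.t. T_{i+1}
        T -= lr * grad
    return T
```

The actual embodiment iterates over full per-superpixel transformation matrices rather than translations, but the structure — sum two penalty terms, then descend on the combined objective until it converges — is the same.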
The present application further provides a computer, comprising: a memory and a processor; wherein the memory is used for storing a computer program, and the processor is used for implementing the steps of the face pore tracking method of any of the above embodiments when executing the computer program.
The present application further provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the face pore tracking method according to any of the embodiments described above.
The computer-readable storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and for the parts the embodiments share, reference may be made between them. Since the system provided by an embodiment corresponds to the method provided by an embodiment, its description is relatively brief; for relevant details, refer to the description of the method.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
A method, a system, a computer and a computer-readable storage medium for tracking human face pores provided by the present application are described in detail above. The principles and embodiments of the present application are explained herein using specific examples, which are provided only to help understand the method and the core idea of the present application. It should be noted that, for those skilled in the art, it is possible to make several improvements and modifications to the present application without departing from the principle of the present application, and such improvements and modifications also fall within the scope of the claims of the present application.

Claims (6)

1. A method for tracking pores in a human face, comprising:
acquiring two continuous frames of pictures of an expression change video;
calculating the two frames of pictures by using a dense optical flow algorithm to obtain an optical flow graph;
SLIC superpixel segmentation is carried out on the two frames of pictures to obtain a two-dimensional superpixel block set;
processing the two-dimensional superpixel block set by using the optical flow map to respectively obtain a three-dimensional superpixel block set and a corresponding transformation matrix;
processing the three-dimensional super pixel block set and the transformation matrix by using a combined optimization function to obtain an optimized transformation matrix;
respectively obtaining three-dimensional change vectors corresponding to the pore feature points by using the coordinates of the pore feature points and the optimized transformation matrix;
wherein the respectively obtaining three-dimensional change vectors corresponding to the pore feature points by using the coordinates of the pore feature points and the optimized transformation matrix comprises:
determining the midpoints of all three-dimensional superpixel blocks in the three-dimensional superpixel block set as the pore feature points;
respectively obtaining the corresponding three-dimensional change vectors by using the coordinates of the pore feature points and the optimized transformation matrix;
the processing the three-dimensional super-pixel block set and the transformation matrix by using the combined optimization function to obtain an optimized transformation matrix comprises the following steps:
adding the local rigidity optimization function and the reprojection optimization function to obtain the combined optimization function;
substituting the three-dimensional super pixel block set and the transformation matrix into the combined optimization function to carry out iteration to obtain the optimized transformation matrix;
the local stiffness optimization function is:
Figure FDA0003493945550000011
wherein x isai=[Xai,Yai]TIs the midpoint coordinate, X, of each two-dimensional superpixel blockai=[Xai,Yai,Zai]TIs the midpoint coordinate, M, of each three-dimensional superpixel blockiRepresenting a transformation matrix, w, of the motion change of the pixels of the ith voxel block from the previous frame to the next frame1(xai,xak)=w2(xai,xak)=exp(-3||xai-xakI), in the formula, the optimization of the former part of the plus sign is to enable the super pixel block to move smoothly between two frames, and the optimization of the latter part is to enable the distance between the nodes of the K nearest neighbor points to be kept unchanged in the former frame and the latter frame;
the reprojection optimization function is:
Figure FDA0003493945550000021
wherein |s_i| represents the number of all pixels of the i-th two-dimensional superpixel block,
Figure FDA0003493945550000022
denotes the j-th point of the i-th two-dimensional superpixel block, and K is the intrinsic parameter matrix of the camera, the intrinsic parameters comprising the focal length parameters and the coordinates of the camera's principal point.
2. The face pore tracking method according to claim 1, wherein said processing the two-dimensional superpixel block set by using the optical flow map to respectively obtain a three-dimensional superpixel block set and a corresponding transformation matrix comprises:
mapping all two-dimensional superpixel blocks in the two-dimensional superpixel block set to three-dimensional space by using the optical flow map to obtain the three-dimensional superpixel block set;
and combining the rotation matrix and the translation matrix of each three-dimensional superpixel block in the three-dimensional superpixel block set into a corresponding transformation matrix.
3. A face pore tracking system, comprising:
the obtaining module is used for obtaining two continuous frames of pictures of the expression change video;
the dense optical flow algorithm module is used for calculating the two frames of pictures by utilizing a dense optical flow algorithm to obtain an optical flow graph;
the SLIC super-pixel segmentation module is used for carrying out SLIC super-pixel segmentation on the two frames of pictures to obtain a two-dimensional super-pixel block set;
the processing module is used for processing the two-dimensional superpixel block set by using the optical flow map to respectively obtain a three-dimensional superpixel block set and a corresponding transformation matrix;
the combined optimization function processing module is used for processing the three-dimensional super-pixel block set and the transformation matrix by using a combined optimization function to obtain an optimized transformation matrix;
the three-dimensional change vector acquisition module is used for respectively obtaining three-dimensional change vectors corresponding to the pore characteristic points by utilizing the coordinates of the pore characteristic points and the optimized transformation matrix;
wherein, the three-dimensional change vector acquisition module comprises:
a pore feature point determining unit, configured to determine midpoints of all three-dimensional superpixel blocks in the three-dimensional superpixel block set as each pore feature point;
the three-dimensional change vector acquisition unit is used for respectively obtaining the corresponding three-dimensional change vectors by using the coordinates of the pore feature points and the optimized transformation matrix;
the combined optimization function processing module comprises:
the superposition unit is used for adding the local rigidity optimization function and the reprojection optimization function to obtain the combined optimization function;
the iteration unit is used for substituting the three-dimensional super pixel block set and the transformation matrix into the combined optimization function to carry out iteration to obtain the optimized transformation matrix;
the local rigidity optimization function is:
Figure FDA0003493945550000031
wherein x_ai = [X_ai, Y_ai]^T is the midpoint coordinate of each two-dimensional superpixel block, X_ai = [X_ai, Y_ai, Z_ai]^T is the midpoint coordinate of each three-dimensional superpixel block, M_i represents the transformation matrix of the motion of the pixels of the i-th superpixel block from the previous frame to the next frame, and w_1(x_ai, x_ak) = w_2(x_ai, x_ak) = exp(-3||x_ai - x_ak||); in the formula, the part before the plus sign is optimized so that each superpixel block moves smoothly between the two frames, and the part after the plus sign is optimized so that the distances between the nodes of the K nearest neighbor points remain unchanged between the previous and next frames;
the reprojection optimization function is:
Figure FDA0003493945550000032
wherein |s_i| represents the number of all pixels of the i-th two-dimensional superpixel block,
Figure FDA0003493945550000033
denotes the j-th point of the i-th two-dimensional superpixel block, and K is the intrinsic parameter matrix of the camera, the intrinsic parameters comprising the focal length parameters and the coordinates of the camera's principal point.
4. The face pore tracking system of claim 3, wherein the processing module comprises:
a mapping unit, configured to map all two-dimensional superpixel blocks in the two-dimensional superpixel block set to three-dimensional space by using the optical flow map to obtain the three-dimensional superpixel block set;
a synthesis unit for combining the rotation matrix and the translation matrix of each three-dimensional superpixel block in the three-dimensional superpixel block set into a corresponding transformation matrix.
5. A computer, comprising:
a memory and a processor; wherein the memory is used for storing a computer program and the processor is used for implementing the steps of the face pore tracking method according to claim 1 or 2 when executing the computer program.
6. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the face pore tracking method according to claim 1 or 2.
CN201811313361.6A 2018-11-06 2018-11-06 Face pore tracking method and system Active CN109215061B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811313361.6A CN109215061B (en) 2018-11-06 2018-11-06 Face pore tracking method and system


Publications (2)

Publication Number Publication Date
CN109215061A CN109215061A (en) 2019-01-15
CN109215061B true CN109215061B (en) 2022-04-19

Family

ID=64994666

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811313361.6A Active CN109215061B (en) 2018-11-06 2018-11-06 Face pore tracking method and system

Country Status (1)

Country Link
CN (1) CN109215061B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112767453B (en) * 2021-01-29 2022-01-21 北京达佳互联信息技术有限公司 Face tracking method and device, electronic equipment and storage medium
CN113011324B (en) * 2021-03-18 2023-03-24 安徽大学 Target tracking method and device based on feature map matching and super-pixel map sorting

Citations (3)

Publication number Priority date Publication date Assignee Title
CN1920886A (en) * 2006-09-14 2007-02-28 浙江大学 Video flow based three-dimensional dynamic human face expression model construction method
CN102254154A (en) * 2011-07-05 2011-11-23 南京大学 Method for authenticating human-face identity based on three-dimensional model reconstruction
CN108090919A (en) * 2018-01-02 2018-05-29 华南理工大学 Improved kernel correlation filtering tracking method based on super-pixel optical flow and adaptive learning factor

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10803299B2 (en) * 2017-03-16 2020-10-13 Echo-Sense, Inc. System to overcome the two-dimensional nature of the captured images when attempting to generate three-dimensional measurement data


Non-Patent Citations (1)

Title
"Research on Non-rigid 3D Reconstruction Based on Acceleration Smoothness Constraints"; Wang Yaming et al.; Journal of Zhejiang Sci-Tech University (Natural Sciences Edition); 30 Nov. 2017; Vol. 37, No. 6; Sections 0-3, pp. 831-832 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant